Tech giants forced to reveal AI secrets – here’s how this could make life better for all
The European Commission is forcing 19 tech giants including Amazon, Google, TikTok and YouTube to explain their artificial intelligence (AI) algorithms under the Digital Services Act. Asking these businesses – platforms and search engines with more than 45 million EU users – for this information is a much-needed step towards making AI more transparent and accountable. This will make life better for everyone.
AI is expected to affect every aspect of our lives – from healthcare, to education, to what we look at and listen to, and even how well we write. But AI also generates a lot of fear, often revolving around a god-like computer becoming smarter than us, or the risk that a machine given an innocuous task may inadvertently destroy humanity. More pragmatically, people often wonder if AI will make them redundant.
We have been there before: machines and robots have already replaced many factory workers and bank clerks without leading to the end of work. But AI-based productivity gains come with two novel problems: transparency and accountability. And everyone will lose if we don’t think seriously about the best way to address these problems.
Of course, by now we are used to being evaluated by algorithms. Banks use software to check our credit scores before offering us a mortgage, and so do insurance and mobile phone companies. Ride-sharing apps check we are pleasant enough before offering us a ride. These evaluations use a limited amount of information, selected by humans: your credit rating depends on your payment history, and your Uber rating depends on how previous drivers felt about you.
Black box ratings
But new AI-based technologies gather and organise data unsupervised by humans. This means that it is much more complicated to make somebody accountable or indeed to understand what factors were used to arrive at a machine-made rating or decision.
What if you begin to find that no one is calling you back when you apply for a job, or that you are not allowed to borrow money? This could be because of some error about you somewhere on the internet.
In Europe, you have the right to be forgotten and to ask online platforms to remove inaccurate information about you. But it will be hard to find out what the incorrect information is if it comes from an unsupervised algorithm. Most likely, no human will know the exact answer.
If errors are bad, accuracy can be even worse. What would happen, for instance, if you let an algorithm look at all the data available about you and evaluate your ability to repay a loan?
A high-performance algorithm could infer that, all else being equal, a woman, a member of an ethnic group that tends to be discriminated against, a resident of a poor neighbourhood, somebody that speaks with a foreign accent or who isn’t “good looking”, is less creditworthy.
Research shows that these types of people can expect to earn less than others and are therefore less likely to repay their credit – algorithms will also “know” this. While there are rules to stop people at banks from discriminating against potential borrowers, an algorithm acting alone could deem it accurate to charge these people more to borrow money. Such statistical discrimination could create a vicious circle: if you must pay more to borrow, you may struggle to make these higher repayments.
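The vicious circle described above can be made concrete with a toy calculation. Everything in this sketch is hypothetical – the figures and the affordability rule are illustrative assumptions, not a real credit model – but it shows how charging a group more can itself cause the defaults that appear to justify the higher charge.

```python
# Toy model (all numbers hypothetical): a borrower takes out a loan of
# 1,000 and can afford at most 1,100 in total repayment.
LOAN = 1_000
BUDGET = 1_100

def default_rate(interest_rate: float) -> float:
    """Return 1.0 (certain default) if the repayment exceeds the
    borrower's budget, else 0.0. A deliberately crude affordability rule."""
    repayment = LOAN * (1 + interest_rate)
    return 1.0 if repayment > BUDGET else 0.0

# The algorithm judges one group "riskier" and charges it a higher rate.
base_rate = 0.05        # repayment 1,050 – affordable, so no default
penalised_rate = 0.12   # repayment 1,120 – unaffordable, so default

# The penalised group now defaults more often, "confirming" the
# algorithm's original assessment: a self-fulfilling prophecy.
print(default_rate(base_rate), default_rate(penalised_rate))
```

The point of the sketch is that the higher default rate is produced by the pricing decision itself, not by any underlying difference between the borrowers.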
Even if you ban the algorithm from using data about protected characteristics, it could reach similar conclusions based on what you buy, the movies you watch, the books you read, or even the way you write and the jokes that make you laugh. Yet algorithms are already being used to screen job applications, evaluate students and help the police.
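Why does banning protected characteristics not solve the problem? A minimal simulation makes the proxy mechanism visible. The setup is entirely invented for illustration: a protected attribute the model never sees, and a seemingly neutral feature (think of a postcode) that happens to be correlated with it.

```python
import random

random.seed(0)

# Hypothetical data: a protected attribute we deliberately exclude from
# the model, and a "neutral" proxy feature correlated with it.
n = 10_000
protected = [random.random() < 0.5 for _ in range(n)]
# Assumption for the sketch: group members live in certain postcodes
# 90% of the time, non-members only 10% of the time.
proxy = [1 if random.random() < (0.9 if p else 0.1) else 0
         for p in protected]

def predict_protected(proxy_value: int) -> bool:
    """A trivial 'model' that never sees the protected attribute,
    only the proxy – yet still recovers it."""
    return proxy_value == 1

correct = sum(predict_protected(x) == p for x, p in zip(proxy, protected))
accuracy = correct / n
print(f"Protected attribute recovered with {accuracy:.0%} accuracy")
```

Even this one-line "model" recovers the banned attribute about nine times out of ten, because the information was never removed from the data – only relabelled.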
The cost of accuracy
Besides fairness considerations, statistical discrimination can hurt everyone. A study of French supermarkets has shown, for instance, that when employees with a Muslim-sounding name work under the supervision of a prejudiced manager, the employee is less productive because the supervisor’s prejudice becomes a self-fulfilling prophecy.
Research on Italian schools shows that gender stereotypes affect achievement. When a teacher believes girls to be weaker than boys in maths and stronger in literature, students organise their effort accordingly and the teacher is proven right. Some girls who could have been great mathematicians or boys who could have been amazing writers may end up choosing the wrong career as a result.
When people are involved in decision making, we can measure and, to a certain extent, correct prejudice. But it’s impossible to make unsupervised algorithms accountable if we do not know the exact information they use to make their decisions.
If AI is really to improve our lives, therefore, transparency and accountability will be key – ideally before algorithms are even introduced into a decision-making process. This is the goal of the EU's Artificial Intelligence Act. And, as is often the case, EU rules could quickly become the global standard. This is why companies should share commercial information with regulators before deploying their algorithms in sensitive areas such as hiring.
Of course, this kind of regulation involves striking a balance. The major tech companies see AI as the next big thing, and innovation in this area is also now a geopolitical race. But innovation often only happens when companies can keep some of their technology secret, and so there is always the risk that too much regulation will stifle progress.
Some believe the absence of the EU from major AI innovation is a direct consequence of its strict data protection laws. But unless we make companies accountable for the outcomes of their algorithms, many of the possible economic benefits from AI development could backfire anyway.
Renaud Foucart does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.