AI: how it hands power to machines to transform the way we view the world

Signs of AI are everywhere: it’s behind everything from customer service chatbots to the personalised ads we receive when browsing online. Yet we remain largely unaware of the hidden algorithms doing the heavy lifting behind the scenes.

We are currently working on a research project based on conversations with specialists in the field of AI. We are asking them about their thinking and values, and which ethical considerations they regard as most important – and why. The norms and values of developers can become embedded in the AI systems they engineer. However, they – and we – are often unaware of this, and of its consequences.

It’s vital to understand as much as possible about AI’s development, because the technology is already changing us in ways we don’t seem to realise. For example, research published in 2017 showed that social media algorithms produce outcomes – such as which stories appear in a user’s feed – based on assumptions about that user. But users also adapt to these outcomes, which in turn changes the logic of the algorithm. Our daily interactions with AI are making us increasingly reliant on it, yet the power dynamic in this relationship greatly favours the AI systems. This is a technology whose inner workings aren’t fully understood even by its creators.

Too heavy a reliance on technology can erode creative and critical thinking. AI has already led to job displacement and unemployment. And while the warnings that it could lead to human extinction shouldn’t be taken at face value, we can’t afford to dismiss them entirely either.

Algorithms have been shown to discriminate on the basis of race, gender and other protected characteristics. We need to understand how these and other problems arise in the development of AI.

Some commentators have drawn attention to what they describe as a failure by the companies developing AI to consider security and privacy. There is also a lack of transparency and accountability around AI projects. While this is not unusual in the competitive world of big tech, we surely need a more rational approach to a technology capable of exerting such power over our lives.

Identity crisis?

What has been neglected in the discourse about AI is how our sense of meaning, identity and reality will increasingly rely on engaging with the services it facilitates. AI may not have consciousness, but it exercises power in ways that affect our sense of who we are. This is because we freely identify with – and participate in – the pursuits enabled by its presence.

In this sense, AI is not some great conspiracy designed to control the world and all its inhabitants, but more like a force that is neither inherently good nor bad. And while extinction is unlikely in the near term, a much more immediate danger is that our reliance on the technology ends with humans effectively serving it. This is not a situation any of us would want, still less when the technology incorporates human norms that many would consider less than ideal.

For an example of what we’re talking about, take the performance targets set for delivery drivers and the monitoring of their work, both facilitated by automated systems that use AI. A UK all-party parliamentary group has described these systems as negatively affecting the mental and physical wellbeing of workers, as “they experience the extreme pressure of constant, real-time micro-management and automated assessment”.

Another example comes from Erik Brynjolfsson, a Stanford economist who has raised the possibility of what he calls the “Turing trap”: the concern that the automation of human activities could leave wealth and power in fewer and fewer hands. In his essay The Turing Trap: The Promise & Peril of Human-Like Artificial Intelligence, Brynjolfsson writes: “With that concentration [of power] comes the peril of being trapped in an equilibrium in which those without power have no way to improve their outcomes.”

More recently, the AI researcher Jeremy Howard described how he introduced ChatGPT to his seven-year-old daughter after she asked several questions about it. He concluded that it could become a new kind of personal tutor, teaching her maths, science, English and other subjects – which would clearly displace part of the role of teachers. However, Howard also warned his daughter not to believe everything it said, and this fallibility poses a real risk for learning. Even if ChatGPT were conveying accurate knowledge, would his daughter retain it as readily as when it was communicated through “embodied” speech – in other words, by a human?

What the algorithm sees

These real-world cases demonstrate how AI can transform the way we view the world and ourselves. They point to a power dynamic between users and AI in which the machine exercises authority over those who interact with it.

As Taina Bucher, assistant professor in communication and IT at the University of Copenhagen, wrote in a 2016 research paper based on interviews with users: “It is not just that the categories and classifications that algorithms rely on match our own sense of self, but to what extent we come to see ourselves through the ‘eyes’ of the algorithm.”

AI is usually accessed through our computer screens or other abstract media; it is not embodied except in the most limited sense. As such, its effect is largely restricted to the cognitive level of identity. It is bereft of a “soul” – of emotional sensibility, or what’s sometimes known as affective energy: the natural way humans interact with, and spur reactions in, one another.

If you asked ChatGPT whether AI can be embodied, its answer would be concerned only with the embodiment of machine intelligence in a physical form, such as a robot. But embodiment is also about emotions, sentiment and empathy – things that cannot be reduced to linear sequences of instructions.

That’s not to say AI doesn’t affect our feelings and emotions. But machines can never replicate the rich emotional life inherent in interactions between two or more human beings. As our lives become ever more entwined with AI, perhaps we should slow the relationship down – especially as it’s clearly far from an equal partnership.

The authors do not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.