Tech companies collect our data every day, but even the biggest datasets can’t solve social issues

At almost every point in our day, we interact with digital technologies that collect our data. From the moment our smartphones wake us up, to the watch tracking our morning run, every time we use public transport, every coffee we buy with a bank card, every song skipped or liked, until we return to bed and let a sleep app monitor our dreaming habits – all of these technologies are collecting data.

Tech companies use this data to develop their products and provide new services. While film and music recommendations might be useful, the same systems are also used to decide where to build infrastructure, to power the facial recognition tools used by police, to determine whether you get a job interview, and even who should die in a crash with an autonomous vehicle.

Every digital device collects data on you and your habits. Fizkes/Shutterstock

Yet despite these huge databases of personal information, tech companies rarely have enough to make properly informed decisions, and the result is products and technologies that can amplify social biases and inequality rather than address them.

Microsoft apologised after its chatbot started spewing hate speech. “Racist” soap dispensers failed to work for people of colour. Algorithm errors caused Flickr to mislabel concentration camps as “jungle gyms”. CV-sorting tools rejected applications from women, and there are deep concerns over police use of facial recognition tools.

These issues aren’t going unnoticed. A recent report found that 28% of UK tech workers were worried that the tech they worked on had negative consequences for society. And the UK independent research organisation NESTA has suggested that as the darker sides of digital technology become clearer, “public demand for more accountable, democratic, more human alternatives is growing”.

Traditional solutions are making things worse

Most tech companies, big and small, claim they’re doing the right things to improve their data practices. Yet it’s often the very fixes they propose that create the biggest problems, because those solutions are born of the same ideas, tools and technologies that got us into this mess in the first place. The master’s tools, as Audre Lorde said, will never dismantle the master’s house. Instead, we need an approach radically different from collecting more data about users or plugging gaps with more education about digital technology.


The reasons biases against women or people of colour appear in technology are complex. They’re often attributed to incomplete data sets and to the fact that the technology is often made by people who aren’t from diverse backgrounds. That argument is, in a sense, correct. Increasing the diversity of people working in the tech industry is important. Many companies are also collecting more data to make it more representative of the people who use digital technology, in the vain hope of eliminating racist soap dispensers or recruitment bots that exclude women.


The problem is that these are social problems, not digital ones. Attempting to solve them with more data and better algorithms only serves to hide the underlying causes of inequality. Collecting more data doesn’t actually make people better represented; instead, it increases how much they are surveilled by poorly regulated tech companies. The companies become instruments of classification, sorting people into groups by gender, ethnicity and economic class until their databases look balanced and complete.

These processes limit personal freedom by eroding privacy and forcing people to self-censor – hiding details of their lives that, for example, potential employers might find and disapprove of. Increasing data collection also has disproportionately negative effects on the very groups it is supposed to help. It leads to the over-monitoring of poorer communities by crime-prediction software, and to problems such as minority neighbourhoods paying more for car insurance than white neighbourhoods with the same risk levels.


Big Data is watching you. Enzozo/Shutterstock

People are often lectured about being careful with their personal data online, and encouraged to learn how data is collected and used by the technologies that now rule their lives. While there is some merit in helping people better understand digital technologies, this approaches the problem from the wrong direction. As the media scholar Siva Vaidhyanathan has noted, it often does little more than place the burden of making sense of manipulative systems squarely on the user, who is usually still left powerless to do anything about them.

Access to education isn’t universal either. Inequalities in education and in access to digital technologies mean that such knowledge is often out of reach for precisely those communities most negatively affected by social biases and by the digital efforts to address them.

Social problems need social solutions

The tech industry, the media and governments have become obsessed with building ever-bigger data sets to iron out social biases. But digital technology alone can never solve social issues. Collecting more data and writing “better” algorithms may seem helpful, but this only creates the illusion of progress.

Turning people’s experiences into data hides the causes of social bias – institutional racism, sexism and classism. Digital and data-driven “solutions” distract us from the real issues in society and from examining real solutions. Such digital fixes, as the French philosopher Bernard Stiegler noted, only serve to increase the distance between technological systems and social organisations.

We need to slow down, stop innovating, and examine social biases not within the technology itself, but in society. Should we even build any of these technologies, or collect any of this data at all?


Better representation in the tech industry is vital, but digital solutions alone will always fall short. It is sociology, ethics and philosophy – not more data – that hold the answers to social inequality in the 21st century.

Doug Specht does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.