Australia’s privacy regulator just dropped its case against ‘troubling’ facial recognition company Clearview AI. Now what?

The Office of the Australian Information Commissioner announced this week it would be taking no further action against facial recognition company Clearview AI. This marked a significant victory for one of the most controversial technology companies in the world.

In 2021, Australia’s privacy regulator ruled Clearview AI broke privacy laws by scraping millions of photographs from social media sites such as Facebook and using them to train its facial recognition tool. It ordered the company to stop collecting images and delete those it had already collected.

However, there was no evidence Clearview AI followed this order. And earlier this year, media reports suggested the company was still going about its business as usual in Australia and collecting more images of citizens.

Given this, why did the privacy regulator suddenly stop pursuing Clearview AI? What does this mean for the broader fight to protect people’s privacy in the age of big tech? And how might the law be changed to give the regulator a greater chance at reining in companies like Clearview AI?

A long-running fight

Clearview AI is a facial recognition tool trained on more than 50 billion photographs scraped from social media websites such as Facebook and Twitter, as well as the wider web in general.

The company behind it was established in 2017 by an Australian citizen, Hoan Ton-That, who is now based in the United States. The site claims the tool is 99% accurate in identifying the individual in any given photo.

Earlier this month, Ton-That told Inc.Australia he expects the company’s growth in the United States to rapidly accelerate.


There will be more of these bigger enterprise deals, especially with the federal government. Plus, there are 18,000 state and local agencies in law enforcement and government alone. This could be a billion-plus or two billion dollar annual recurring revenue company.

The tool was initially offered to police authorities for trial in countries such as the US, United Kingdom and Australia. War-torn Ukraine has also used Clearview AI to identify Russian soldiers who participated in the invasion.

But the technology quickly sparked controversy – and legal pushback.

In 2022, the UK privacy watchdog fined Clearview AI A$14.5 million for violating its privacy laws. However, the decision was later overturned, because the UK regulator was found to lack jurisdiction over the foreign company.

France, Italy, Greece and other countries in the European Union have each fined Clearview AI $33 million or more. They imposed further penalties when the company did not comply with legal orders.

In the US, the company faced a class action, which was settled in June. The settlement allowed it to continue selling this tool to US law enforcement agencies but not to the private sector.

In Australia, the privacy regulator ruled in 2021 that Clearview AI violated the country’s privacy laws by collecting images of Australians without their consent. It ordered the company to cease collecting the images and delete the collected ones within 90 days. However, it did not issue a fine.

So far, there is no evidence Clearview AI complied with the Office of the Australian Information Commissioner’s order, and it is reportedly still collecting images of Australians.


A lack of enforcement power – and resources

Yesterday, privacy commissioner Carly Kind described the practices of Clearview AI as “troubling”. However, she also said:

Considering all the relevant factors, I am not satisfied that further action is warranted in the particular case of Clearview AI at this time.

This is a disappointing decision.

Under the Privacy Act, when an organisation does not comply with a decision, the regulator can commence enforcement proceedings in court. However, in this case it chose not to do so.

The lack of further action against Clearview AI confirms the weakness of current privacy laws in Australia. In contrast to other countries, significant penalties for breach of privacy laws in Australia are very rare.

The decision also underscores the regulator’s lack of enforcement powers under current privacy laws.

Compounding this is the lack of resources at the regulator’s disposal to investigate multiple large cases. Its investigation into Bunnings and Kmart for their use of facial recognition technology has been pending for more than two years.

What can be done?

There is some hope the forthcoming privacy law reforms in Australia will both strengthen Australian privacy law and give the privacy regulator more enforcement powers.

However, it is questionable whether general privacy law will be sufficient to adequately regulate facial recognition technologies.

Australian experts have instead called for special rules for high-risk technologies. For example, former Australian human rights commissioner Ed Santow has proposed a model law to regulate facial recognition technologies.

Other countries have already started developing special rules for facial recognition tools. The recently adopted Artificial Intelligence Act in the European Union prohibits certain uses of this technology, and sets strict rules around its development.


However, research shows many countries around the world are still struggling to establish appropriate regulations for facial recognition systems.

The Australian government should seriously consider specific measures to prevent companies such as Clearview AI from using Australians’ personal data to develop such technologies, and to introduce clear rules about when facial recognition can be used and when it cannot.

The Conversation

Rita Matulionyte does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.