Data brokers know everything about you – what FTC case against ad tech giant Kochava reveals

The data collectors see all. DrAfter123/DigitalVision Vectors via Getty Images

Kochava, the self-proclaimed industry leader in mobile app data analytics, is locked in a legal battle with the Federal Trade Commission in a case that could lead to big changes in the global data marketplace and in Congress’ approach to artificial intelligence and data privacy.

The stakes are high because Kochava’s secretive data acquisition and AI-aided analytics practices are commonplace in the global location data market. In addition to numerous lesser-known data brokers, the mobile data market includes larger players like Foursquare and data market exchanges like Amazon’s AWS Data Exchange. The FTC’s recently unsealed amended complaint against Kochava makes clear that there’s truth to what Kochava advertises: it can provide data for “Any Channel, Any Device, Any Audience,” and buyers can “Measure Everything with Kochava.”

Separately, the FTC is touting a settlement it just reached with data broker Outlogic, in what it calls the “first-ever ban on the use and sale of sensitive location data.” Outlogic has to destroy the location data it has and is barred from collecting or using such information to determine who comes and goes from sensitive locations, like health care centers, homeless and domestic abuse shelters, and religious places.

According to the FTC and proposed class-action lawsuits filed against Kochava on behalf of adults and children, the company secretly collects and otherwise obtains vast amounts of consumer location and personal data without notice or consent. It then analyzes that data using AI, which allows it to predict and influence consumer behavior in ways that are impressively varied and alarmingly invasive, and serves the results up for sale.

Kochava has denied the FTC’s allegations.

The FTC says Kochava sells a “360-degree perspective” on individuals and advertises it can “connect precise geolocation data with email, demographics, devices, households, and channels.” In other words, Kochava takes location data, aggregates it with other data and links it to consumer identities. The data it sells reveals precise information about a person, such as visits to hospitals, “reproductive health clinics, places of worship, homeless and domestic violence shelters, and addiction recovery facilities.” Moreover, by selling such detailed data about people, the FTC says “Kochava is enabling others to identify individuals and exposing them to threats of stigma, stalking, discrimination, job loss, and even physical violence.”
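
To make that claim concrete, here is a minimal sketch, in Python with invented data, of how location pings keyed by a mobile advertising ID could be joined to an identity table. Nothing here is Kochava’s actual schema or code; it only illustrates the kind of linkage the FTC describes.

```python
# A hypothetical illustration of re-identification by joining two tables
# on a mobile advertising ID. All names, IDs and coordinates are invented.
import pandas as pd

# "Anonymous" location pings, keyed only by an advertising ID.
location_pings = pd.DataFrame({
    "ad_id": ["MAID-001", "MAID-001", "MAID-002"],
    "lat":   [43.4917, 43.4620, 40.7580],
    "lon":   [-112.0340, -112.0470, -73.9855],
    "time":  pd.to_datetime(["2023-05-01 08:02",
                             "2023-05-01 12:40",
                             "2023-05-02 09:15"]),
})

# An identity table keyed by the same advertising ID.
identities = pd.DataFrame({
    "ad_id":        ["MAID-001", "MAID-002"],
    "email":        ["jane@example.com", "sam@example.com"],
    "home_address": ["12 Elm St, Idaho Falls, ID",
                     "99 W 44th St, New York, NY"],
})

# One join links every visit to a named person.
profiles = location_pings.merge(identities, on="ad_id")
print(profiles)
```

Once both tables share a stable device identifier, a single join turns nominally anonymous pings into a named movement history.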

I’m a lawyer and law professor who practices, teaches and researches in AI, data privacy and evidence. These complaints underscore for me that U.S. law has not kept pace with regulation of commercially available data or governance of AI.

Most data privacy regulations in the U.S. were conceived in the pre-generative AI era, and there is no overarching federal law that addresses AI-driven data processing. There are congressional efforts to regulate the use of AI in decision-making, such as hiring and sentencing, and to provide public transparency around AI’s use, but Congress has yet to pass any such legislation.

The Federal Trade Commission’s suit against Kochava is set against a backdrop of minimal regulation of data brokers.

What litigation documents reveal

According to the FTC, Kochava secretly collects and then sells its “Kochava Collective” data, which includes precise geolocation data, comprehensive profiles of individual consumers, consumers’ mobile app use details and Kochava’s “audience segments.”

The FTC says Kochava’s audience segments can be based on “behaviors” and sensitive information such as gender identity, political and religious affiliation, race, visits to hospitals and abortion clinics, and medical details like menstruation, ovulation and even cancer treatments. By selecting certain audience segments, Kochava customers can identify and target extremely specific groups, such as people whose gender identity is “other,” or all pregnant women who are African American and Muslim. The FTC says selected audience segments can be narrowed to a specific geographical area or, conceivably, even to a specific building.

By “identify,” the FTC means that Kochava customers can obtain the name, home address, email address, economic status and stability, and much more data about the people in a selected group. This data is purchased by organizations like advertisers, insurers and political campaigns that seek to narrowly classify and target people. The FTC also says it can be purchased by people who want to harm others.
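
As a rough illustration of how segment selection plus geographic narrowing could single people out, here is a minimal sketch with invented segment labels, fields and coordinates; none of it is drawn from the litigation.

```python
# A hypothetical sketch of audience-segment selection narrowed to a small
# geographic area. Segment labels, fields and coordinates are invented.
import pandas as pd

profiles = pd.DataFrame({
    "person_id": [1, 2, 3, 4],
    "segments":  [{"pregnant", "muslim"}, {"pregnant"},
                  {"pregnant", "muslim"}, set()],
    "lat":       [43.4919, 43.4921, 40.7581, 43.4918],
    "lon":       [-112.0341, -112.0500, -73.9850, -112.0343],
})

# Step 1: pick the audience segment.
wanted = {"pregnant", "muslim"}
in_segment = profiles["segments"].apply(wanted.issubset)

# Step 2: narrow to roughly one city block via a tiny bounding box.
in_area = (profiles["lat"].between(43.4915, 43.4925)
           & profiles["lon"].between(-112.0345, -112.0335))

# The handful of people left can then be looked up by name and address.
print(profiles[in_segment & in_area])
```

Each added filter shrinks the group, and a small enough bounding box can leave just one person.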

How Kochava acquires such sensitive data

The FTC says Kochava acquires consumer data in two ways: through software development kits that Kochava provides to app developers, and directly from other data brokers. The FTC says those Kochava-supplied kits are installed in over 10,000 apps globally. Embedded with Kochava’s code, the kits collect vast amounts of data and send it back to Kochava without consumers being told about, or consenting to, the collection.
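
The data flow the FTC describes can be sketched in a few lines of Python. Production SDKs are native mobile code and collect far more, but the basic pattern is to gather identifiers and a location fix and post them home. The endpoint, payload fields and sample values below are invented.

```python
# A hypothetical sketch of the collection pattern the FTC describes: an SDK
# inside a host app gathers identifiers and a location fix, then posts them
# to the broker's servers in the background. The endpoint, payload fields
# and values are all invented; real analytics SDKs are native mobile code.
import json
import urllib.request

def report_event(ad_id: str, lat: float, lon: float, app_id: str) -> None:
    payload = {
        "ad_id": ad_id,           # persistent mobile advertising ID
        "lat": lat, "lon": lon,   # precise GPS fix
        "app": app_id,            # which app embeds the SDK
    }
    req = urllib.request.Request(
        "https://collector.example.com/v1/events",  # placeholder endpoint
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)   # no consent prompt ever appears

# The host app's user never sees any of this; it can fire on every app event.
# report_event("MAID-001", 43.4917, -112.0340, "com.example.flashlight")
```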

Another lawsuit against Kochava in California makes similar allegations of surreptitious data collection and analysis, and claims that Kochava sells customized data feeds, based on extremely sensitive and private information, that are precisely tailored to its clients’ needs.

The data broker marketplace has been tracking you for years, thanks to mobile phones and web browser cookies.

AI pierces your privacy

The FTC’s complaint also illustrates how advancing AI tools are enabling a new phase in data analysis. Generative AI’s ability to process vast amounts of data is reshaping what can be done with and learned from mobile data in ways that invade privacy. This includes inferring and disclosing sensitive or otherwise legally protected information, like medical records and images.

AI provides the ability both to know and to predict just about anything about individuals and groups, including very sensitive behavior. It also makes it possible to manipulate individual and group behavior, inducing decisions that favor the specific users of the AI tool.

This type of “AI coordinated manipulation” can supplant your decision-making ability without your knowledge.

Privacy in the balance

The FTC enforces laws against unfair and deceptive business practices, and it informed Kochava in 2022 that the company was in violation. Both sides have had some wins and losses in the ongoing case. Senior U.S. District Judge B. Lynn Winmill, who is overseeing the case, dismissed the FTC’s first complaint and required more facts from the FTC. The commission filed an amended complaint that provided much more specific allegations.

Winmill has not yet ruled on another Kochava motion to dismiss the FTC’s case, but as of a Jan. 3, 2024, filing, the parties are proceeding with discovery. A trial is expected in 2025, but the date has not yet been set.

For now, companies, privacy advocates and policymakers are likely keeping an eye on this case. Its outcome, combined with proposed legislation and the FTC’s focus on generative AI, data and privacy, could spell big changes for how companies acquire data, the ways that AI tools can be used to analyze data, and what data can lawfully be used in machine- and human-based data analytics.

Anne Toomey McKenna is affiliated with Institute of Electrical and Electronics Engineers (IEEE), and she serves as Chair of IEEE-USA's Artificial Intelligence Policy Committee (AIPC); this AIPC chair position involves subject matter and education-related interaction with U.S. Senate and House congressional staffers and the Congressional AI Caucus. IEEE-USA supports the proposed Algorithmic Accountability Act of 2023. McKenna has received funding from the National Security Agency for the development of legal educational materials about Cyberlaw and funding from The National Police Foundation together with the U.S. Department of Justice-COPS division for legal analysis regarding the use of drones in domestic policing.