Driverless cars: what we’ve learned from experiments in San Francisco and Phoenix

Cruise, owned by General Motors, is one of the “robotaxi” companies operating in San Francisco. Shutterstock / paulaah293

Residents of San Francisco and Phoenix have grown used to witnessing something that, a decade ago, would have seemed magical. In some parts of these cities, at certain times, cars drive by with nobody behind the wheel.

Driverless “robotaxi” services pick up customers and ferry them to their destinations with the help of cameras, sensors and software that uses artificial intelligence. Tests of fully driverless vehicles have been under way in Phoenix since 2017 and in San Francisco since 2020.

Excitable videos posted online show customers embracing the novelty. But new possibilities bring new questions. While these real-world experiments are limited in scope, they could help decide the future of road transport everywhere. It’s vital that lessons are learned and the results opened to scrutiny.

A few years ago, when hype surrounding self-driving cars was huge, some high-profile crashes brought attention to the ethics of experimenting with new technologies in public spaces.

US states encouraged experimentation by dropping regulatory barriers, with cities, citizens and transport policymakers having little say. After a period of testing with safety drivers, some cars are now fully driverless.

While the companies learn to drive safely in complex environments, San Francisco and Phoenix are learning whether the technology is creating more problems than it promises to solve.

Cruise (owned by General Motors) is now operating 30 driverless cars at night in all but the busiest parts of San Francisco. Just before Christmas, the company said it wanted to add more cars, operate during the day, and move into the city’s busiest downtown area.

But San Francisco’s transportation authority raised objections. In the past year, Cruise cars have been involved in a number of incidents that, while not directly life-threatening, have been a persistent nuisance for a city trying to go about its business.

A Cruise car with nobody inside was pulled over by police officers, who were unsure what to do. To the amusement of people filming, the car then pulled away from the confused cops.

A Cruise driverless taxi pulls away from police in San Francisco.

Cruise cars have also frustrated the city’s fire department by blocking fire trucks and driving towards hoses. In one case, firefighters were forced to smash a car’s windscreen to get it to stop. The cars have impeded local buses, blocked junctions and stopped in the middle of the road, sometimes in groups.

Some incidents would have counted as everyday snarl-ups if a human had been behind the wheel, but with nobody in the car to take responsibility, city authorities have struggled to know how to respond.

The streets of San Francisco

In almost all cases, we only know about these incidents because of online videos or reports from local people. The companies are under few obligations to report on their performance or admit to their foibles.

These incidents, and the absence of accountability, are clearly trying the patience of San Francisco’s transport planners. Rather than a free-for-all, they would like to see what they call “limited deployments with incremental expansions” so that impacts can be assessed carefully.

Self-driving car company Waymo is owned by Google’s parent company, Alphabet. Shutterstock / Sundry Photography

They would also like to keep driverless cars out of the city’s busiest downtown core – and, crucially, want to see more data-sharing. This would make the self-driving experiment more democratic, but cuts against the grain of the Silicon Valley approach to “blitzscaling” – growing rapidly to establish a monopoly.

Self-driving car companies would argue that the more cars they have and the more complex their environments, the quicker they can learn to drive. This argument is premised on the idea that robot drivers are just like human drivers, but better. In reality, self-driving cars are not “autonomous vehicles”, as is often claimed.

They rely on digital and physical infrastructures that support their operation, as well as teams of humans behind the scenes doing the data-labelling, remote operation and customer support needed to make them appear “driverless”. These cars work best in car-friendly areas where pedestrians and other road users behave predictably.

Changing the rules

Even if driverless cars avoid the errors that humans make when drunk or distracted, they make different sorts of mistakes. New modes of transport do not just add another player to the game; they change the rules. When cars arrived in cities in the early 20th century, pedestrians were persuaded or bullied out of the way and infrastructures were remade to suit the new technology.

In the 21st century, many cities were spooked by the rapid disruptions wrought by ride-hail companies such as Uber and Lyft. We must avoid sleepwalking into something similar. For self-driving cars, we need a clear sense of the trade-offs.

There may eventually be safety benefits. But in making life easier for self-driving cars and the few people likely to benefit, we might make life harder for everyone else.

Competition for roadspace in dense cities is tight. As transport policy expert David Zipper has argued, most cities want to see fewer car trips overall, and more shared transit and physically active travel such as walking and cycling.

Self-driving cars could be a problem for sustainability. The more we learn from real-world uses of the technology, the greater the mismatch appears between its purported solutions and the problems facing cities.

The UK is less in thrall to tech companies, which provides an opportunity for a more measured discussion. In 2022, I was part of a team led by the Centre for Data Ethics and Innovation asking what a more responsible approach to self-driving vehicle innovation would be. We advised on safety, data-sharing, transparency and ensuring that the benefits are evenly spread.

As self-driving cars expand to more places, the social learning that happens around them will be just as important as the machine learning that drives their computers. The experiment is taking place in public, so we must ensure that its lessons are not kept private.

Jack Stilgoe receives funding from the ESRC, the Turing Institute and the Centre for Data Ethics and Innovation. He is a fellow of the Turing Institute and a trustee of the Royal Institution.