
Why driverless vehicles just can’t quit humans

Regulators need to ask more questions about the people in the shadows

A Waymo autonomous taxi in San Francisco. It shouldn’t be viewed as a problem that self-driving cars still need support from humans behind the scenes. Photographer: David Paul Morris/Bloomberg

“There’s nobody in the truck,” Sterling Anderson, co-founder of autonomous truck company Aurora, said in a podcast interview last year. “We’re not Wizard of Oz-ing this thing.” Anderson was referring to the company’s plans to begin a commercial delivery service using driverless trucks between Dallas and Houston in Texas. What he meant, I think, was this: our technology is not a parlour trick. Unlike in The Wizard of Oz, there won’t be a human hidden behind the curtain.

In May this year, Aurora announced its commercial driverless trucking service had officially begun. But a few weeks later, the company made another announcement: its truck manufacturing partner PACCAR “requested we have a person in the driver’s seat, because of certain prototype parts in their base vehicle platform” and “after much consideration, we respected their request and are moving the observer, who had been riding in the back of some of our trips, from the back seat to the front seat”.

Aurora insisted this wasn’t necessary to operate the truck safely, and that the observer would not operate the vehicle. Still, it was clearly a blow to its ambition to have “nobody in the truck”. Aside from the fear that Aurora might look like it is “Wizard of Oz-ing this thing”, the investment case for driverless trucks doesn’t look so good if you need to pay someone to sit in each one.

Aurora isn’t the only autonomous vehicle company that hasn’t quite been able to quit humans. When Tesla launched its robotaxi service last month in Austin, Texas, the cars had human “safety monitors” in the passenger seats. Even the more established self-driving taxi services, which don’t have anyone inside the car, still have humans behind the scenes. In China, Baidu’s robotaxis launched with “remote human operators” who could take control of the cars if necessary. Waymo, in contrast, doesn’t have “remote drivers”, but it does have “human fleet response agents”. Confused Waymos remain in control but can ask these humans for advice.


If humans are such poor drivers (as many self-driving car companies allege), why can’t supposedly superior machines cope without them? Because machines and humans are good at different things. Machines don’t get tired, bored, drunk or distracted, but they struggle with real-world “edge cases” that require contextual awareness and intuition, such as how best to navigate a blockage on the road, or what a construction worker waving his arms around is trying to tell you. On top of that, every safe system should have a backstop in case of technical problems.

In that sense, the autonomous vehicle companies should be applauded for keeping humans around. It doesn’t mean they’re trying to pull off a parlour trick. But it does mean we should know much more about how these human roles actually work.

That’s because systems that rely on a combination of machines and humans can suffer from all sorts of well-documented problems. Humans asked to be safety monitors are prone to “automation complacency”, the tendency to lose concentration while supervising autonomous systems. Remote drivers might struggle with technical issues such as dropped connections and high latency. Then there are questions of liability: if a human in a support centre somewhere gives bad advice to an autonomous vehicle that leads to an accident, who is to blame? The technology company? The employer? The individual? What if that human isn’t in the same state, or even the same country?

Bryant Walker Smith, an associate professor of law at the University of South Carolina, told me the onus should be on the self-driving companies to explain exactly what they’re doing and why they think it is safe. “Regulators absolutely should interrogate every piece of that.”

Yet so far, many of these human roles have remained in the shadows. When I sent a list of basic questions to Waymo, Tesla and Aurora, only Waymo responded. The company declined to say how many people worked in its fleet response team, but it did say they were employed by Cognizant, an IT company, that they required drivers’ licences and that they were “seated in Arizona, Michigan, and in an offshore location”. When I asked about lines of accountability, the company said that “to the extent a Waymo vehicle was involved in a collision that caused property damage or injury, Waymo would be responsible for the liability imposed on it by law”.

It shouldn’t be viewed as a problem that self-driving cars still need support from humans behind the scenes. But nor should those roles be hidden away. It’s time for regulators to pull back the curtain.

Copyright The Financial Times Limited 2025