Self-driving cars keep killing people. Just in the past few weeks, one crashed into a highway barrier and another ran over a pedestrian. This will inevitably lead to much hand-wringing about the ethics of the algorithms that drive autonomous vehicles.
Yet tragic as such accidents might be, I have a hard time caring. I don’t even see the issue as a priority.
At this early stage, it’s hard to say whether algorithms are more dangerous drivers than humans. In 2016, the fatality rate in the U.S. was 1.18 per 100 million miles driven (meaning about 37,000 people died) — less than a quarter of the 1970 level. The handful of deaths caused by self-driving cars suggests that they’re still a bit worse, but they’ve logged so few miles that the comparison rests on little more than a single data point. Given inevitable improvements, I’m confident they’ll be better than humans soon. In fact, I can imagine that it will soon be difficult to convince pedestrians that people should be allowed to drive in densely populated cities.
But again, I don’t care. Why? Because the failures are so obvious compared to those of most algorithms. Dead people by the side of the road constitute public tragedies. They make headlines. They damage companies’ reputations and market valuations. This creates inherent and continuous pressure on the data scientists who build the algorithms to get it right. It’s self-regulating nirvana, to be honest. I don’t care because the self-driving car companies have to care for me.
By contrast, companies that own and deploy other algorithms — algorithms that decide who gets a job, who gets fired, who gets a credit card and with what interest rate, who pays what for car or life insurance — have shockingly little incentive to care.
The problem stems from the subtlety of most algorithmic failures. Nobody, especially not the people being assessed, will ever know exactly why they didn’t get that job or that credit card. The code is proprietary. It’s typically not well understood, even by the people who build it. There’s no system of appeal and often no feedback to improve decision-making over time. The failures could be getting worse and we wouldn’t know it.
A while ago, journalists were writing about how good Silicon Valley companies are with software and how surprisingly bad they are with hardware such as drones and spaceships. I think that’s dead wrong. Not because startups have been building great delivery drones, but because there’s absolutely no reason to think they’re doing much better with software. We simply don’t know how to look for their failures.
There’s an emerging field of researchers dedicated to worrying about the ethics of self-driving cars. Academics, think tanks, and even policy makers are paying attention. But as far as I know, very few people have devoted themselves to exploring the failures of credit-card, insurance and job-application algorithms. Even if they wanted to, they’d have trouble doing so without a subpoena.
It’s not clear how to address algorithmic failures in general. Europe is attempting something new with the General Data Protection Regulation, which includes a vaguely worded concept of “the right to explanation.” But judging from the flaws in Facebook’s recent effort to help users understand why they are being targeted for certain ads, this might not go far enough.
Getting back to self-driving cars, I’d focus my concerns on the jobs that will be lost when people actually start to trust the algorithms. There are 3.5 million truck drivers in the U.S., and almost 9 million trucking-related jobs, many of which will be automated. And that’s not counting drivers of taxis, limos and school buses, or the workforces of Uber, Lyft and the like. Granted, the economy has proved capable of creating new kinds of work in the past. But given the probable scale and speed of the job losses to come, I think it’s time to start planning.
This column does not necessarily reflect the opinion of the editorial board or Bloomberg LP and its owners.