When Google started reporting accident statistics for its driverless cars in May, the technology giant learned something important that it has incorporated in the second iteration of its vehicles, according to a company executive.
“We decided that we could not rely on humans taking over the car,” said Vint Cerf, vice president and chief Internet evangelist for Google.
In the original design of the cars, “we thought that we would always have a steering wheel, accelerator and a human being at the wheel,” Cerf said. If the car got into trouble and didn’t know what to do, it would announce the predicament, alerting a person to take the wheel.
What Google found, however, was that it takes time—25 to 30 seconds in some cases—for people who were distracted, doing something else because they thought the car was driving itself, to understand a situation that the car itself can't handle. “And of course if you were sleeping in the back, drinking or something, that’s even worse,” Cerf said.
Although overall crash statistics suggest that Google’s driverless cars are working well—they have logged 1.7 million miles and been involved in only a dozen accidents, mostly rear-enders caused by drivers in other cars—the potential for human error from people inside driverless cars is becoming increasingly evident.
“In at least one case, the accident was caused by a human driver. The car wasn’t driving itself. It was the human,” Cerf said.
New versions of the Google car don’t have a steering wheel, accelerator or brake—”at least not one that a human being can get access to,” he said. “The conclusion is that we have to make it work at all times.”
“It’s likely that if these cars go into service—there is no current plan for that—that they would run [on] city streets or in confined environments but not necessarily door to door,” he added. “That’s where we are right now in the second iteration of driverless cars.”
Cerf’s disclosure came after a presentation he made at the Global Insurance Forum of the International Insurance Society on Monday, in response to an audience member who questioned whether driverless cars would reduce accidents and insurance premiums. She asked: Would this reduction, in turn, prompt insurers to turn their attention elsewhere for new revenue sources, such as serving insurance needs of emerging markets?
Filling insurance gaps, such as the insurance needs of people in emerging market countries, is an overriding theme of the Forum, which continues on Tuesday and Wednesday. Cerf agreed that if the premise of accident reduction holds, insurers would indeed need to rethink business strategies. For most of his talk, Cerf focused on a different protection gap—the gap in protecting makers and users of software from the consequences of software failures.
Responding to the audience member’s question, he reported on Google car accident statistics and also imagined new laws being created to prohibit unreliable humans from driving cars—except in restricted environments such as on race tracks and off roads.
He went on to address the broader question of how auto insurers will cope with a world in which accidents decline, comparing their fate to that of newspaper publishers.
“Any business that realizes that its returns are diminishing has to adapt. This lesson is sometimes hard for people to learn if [they] have a business model that has been working for a long time,” he said, recounting how newspaper empires built on a model of low-cost distribution of information (on cheap paper) plus advertisements have been displaced by electronic delivery. “One day electronics came along, networks came along, devices that are cheap that we all carry, [and] suddenly it’s cheaper to send bits than paper. We don’t have to wait for a printer to send [the paper] out. We don’t have the same deadline problems. And we don’t have to show you the same ads. We can show you different ads.
“Suddenly, the economics and the dynamics of this business have changed completely. You have to figure out a new business model,” he said. In the same way, “this industry [insurance] has to be creative and innovative about understanding what’s happening to its business.”
At this point, Cerf returned to the main point of his presentation: software flaws and the need for protection from the consequences. Talking about more than simply filling a gap with insurance products, Cerf seemed to advocate that insurers use underwriting and pricing power—and risk management and loss control advice gleaned from data and statistics—to incentivize the development of safer software.
“I would stress to you that the presence of software in this highly permeated environment is a game-changer for the insurance industry,” he said, noting that businesses and individuals don’t need as much protection against mechanical failure as they did in the past but instead need protection from software and programming failure. “We don’t have a lot of experience with that yet.
“I don’t know of any programmers that have succeeded in writing bug-free code,” said Cerf, who used to make his living writing software. When that code is assembled with other people’s code, the complexity of risk in the ensemble is enormous, he said. “Bugs get exploited…They cause things to not work right or bad guys to get in,” he noted, highlighting particular concerns about software engines for the Internet of Things.
“You have this enormous opportunity to convince people that there is risk associated with software and they should be protecting against that, which suggests to me there is big business to be had. But at the same time it’s not 100 percent clear how to assess the risk,” Cerf admitted.
“I am concerned that we use the power of the insurance concept in order to achieve a couple of objectives. One is to provide the kind of protection that you do for other kinds of risk and liability. But the other one is to create incentives for people to make software safer and better than it is today,” he said, highlighting the recent breach of personal information of federal employees at the Office of Personnel Management.
“There is no amount of money that will help you recover from that failure. What we want is incentive and pressure for people to not only write safer software but to test the systems and go through a great deal of trouble to make sure that [passes] for authentication are improved.”
Like insurers, Cerf has questions about exactly how to accomplish the goal. “One thing I wonder about is this notion of reasonable steps to protect software and make it safer or less vulnerable. I don’t know what reasonable means yet,” he admitted.
Advice for Insurers and Reinsurers
Though Cerf acknowledged that he hasn’t thought through all the answers, a Forum attendee asked him to give more specific advice to insurers and reinsurers writing coverages to insure against software failures.
Cerf delivered two recommendations: one related to categorizing software according to the severity of consequence of failure, and another related to authentication practices.
Explaining the first recommendation, he contrasted medical device software with applications on smartphones that relay the times and routes of local buses. The apps require much less attention than special software systems that allow surgeons to operate by looking at 3D displays and manipulating flexible devices through three or four small incisions. He described the da Vinci surgical system of Intuitive Surgical as an example, stressing that he wasn’t picking on that particular company but instead highlighting the type of equipment that requires high levels of scrutiny. Medical device companies “ought to be held to a very high level of assurance that a lot of testing and validation has been done to that software.”
Cerf also told IIS member insurance executives that some bugs, known as buffer overruns, are common across any software that involves input and output. “This is where input coming in takes up more room than the programmer allowed in space to store that incoming information,” he said. “What often can happen is that the additional information that overflowed the buffer ends up landing in a place where it doesn’t belong, and it could potentially become executable—which means you now can burrow into somebody’s operating system by overlaying a few instructions and then taking over control of the machine,” he said.
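The pattern Cerf describes is the classic unchecked-copy mistake. The short C sketch below is a hypothetical illustration (not code from the talk): the first function copies input into a fixed-size buffer with no length check, so longer input spills past it, while the second uses a bounded copy.

```c
#include <stdio.h>
#include <string.h>

/* Unsafe: strcpy does not know the buffer is only 16 bytes.
   Input longer than that overflows onto adjacent stack memory,
   which is what makes the bug potentially exploitable. */
void greet_unsafe(const char *input) {
    char buffer[16];
    strcpy(buffer, input);          /* BUG: no length check */
    printf("Hello, %s\n", buffer);
}

/* Safer: snprintf never writes more than the buffer can hold. */
void greet_safe(const char *input) {
    char buffer[16];
    snprintf(buffer, sizeof buffer, "%s", input);
    printf("Hello, %s\n", buffer);
}

int main(void) {
    greet_safe("a string far longer than sixteen characters");
    /* Calling greet_unsafe() with the same input would corrupt the stack;
       an attacker who controls the input may overwrite the return address
       and redirect execution, as Cerf describes. */
    return 0;
}
```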
“That particular thing is a pervasive problem in a lot of software. So if we could get tests that were done—or if people could show that they tested against that particular failure—that might decrease the [risk related to] software with that kind of vulnerability.
“A lot of work needs to be done to categorize the various kinds of software that we are relying on and the way it works and whose fingers are in the pie,” he said, noting that this exercise is more complicated when the Internet is involved “because now potential fingers are almost anywhere.”
Cerf advocates strong authentication practices. “At Google, we can’t get into our systems without [giving a] username and password [and] also a hardware-generated cryptographic password which is used only once,” he said, describing a randomly generated password displayed on a small device.
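The hardware token Cerf describes produces a fresh code for each login. A common way to do this is a time-based one-time password along the lines of RFC 6238; the C sketch below (an assumption about the general technique, not a description of Google’s internal system) shows the idea using OpenSSL’s HMAC.

```c
/* Build with: cc totp.c -lcrypto */
#include <stdio.h>
#include <stdint.h>
#include <time.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

/* Time-based one-time password: HMAC-SHA1 over the current 30-second
   time step, then dynamic truncation to a short numeric code (RFC 4226/6238). */
uint32_t totp(const unsigned char *secret, int secret_len,
              time_t now, int step, int digits) {
    uint64_t counter = (uint64_t)(now / step);
    unsigned char msg[8];
    for (int i = 7; i >= 0; i--) { msg[i] = counter & 0xff; counter >>= 8; }

    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int digest_len = 0;
    HMAC(EVP_sha1(), secret, secret_len, msg, sizeof msg, digest, &digest_len);

    int offset = digest[digest_len - 1] & 0x0f;      /* dynamic truncation */
    uint32_t code = ((digest[offset] & 0x7f) << 24) |
                    ((digest[offset + 1] & 0xff) << 16) |
                    ((digest[offset + 2] & 0xff) << 8) |
                    (digest[offset + 3] & 0xff);

    uint32_t mod = 1;
    for (int i = 0; i < digits; i++) mod *= 10;
    return code % mod;
}

int main(void) {
    const unsigned char secret[] = "12345678901234567890"; /* demo secret only */
    printf("%06u\n", totp(secret, sizeof secret - 1, time(NULL), 30, 6));
    return 0;
}
```

The server holds the same shared secret, computes the code for the current time window and compares it with what the user typed; because the code changes every 30 seconds, a captured value is of little use to an attacker afterward.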
“If we have lots and lots of devices that are willing to accept control from outside or provide data outside, you want those devices only to accept control from someone authorized,” he said, providing examples of situations where outside control is advantageous and where it is dangerous.
While it might be a good idea for police officers to be able to look at images from web cameras inside a house before they enter, or for firefighters to have access to information about whether people are trapped inside a burning building, seemingly innocuous information—like temperature data from heating sensors inside a house—can become problematic in the wrong hands. “If you were to look at that [temperature] data over time, you might be able to figure out when people are home, how many, when they come and go,” he said, suggesting that criminals can infer a lot of valuable information from devices if control of devices and access to their information isn’t adequately authenticated.
Google Compare
Cerf also addressed a question about Google Compare. Admitting upfront that he didn’t know much about it, he offered a general observation: “Our motto says, ‘Organize the world’s information and make it more accessible and useful.’ So presumably information about insurance coverage and insurance rates and so on is useful.
“The real question to which I do not know the answer is how the data is collected and how the comparisons are done. It’s pretty clear that coverages vary from one policy to another, from one type of insurance to another,” he observed.
“One of the most important things you and we can do is to help the individual consumer understand what things are comparable. Otherwise, if you’re just looking at one metric like what’s the premium every month, you may not understand exactly what it was that premium covered.”
Cerf went on to confess that when he reads insurance policies he often has trouble figuring out exactly what’s covered. “Not to be nasty up here, but sometimes I think that that’s on purpose.”
“In fact, this clarity is important,” he said, reminding insurers about a law mandating the use of plain language in insurance policies.