Learning to Trust a Self-Driving Car

Auto engineers face a paradox: how to design an autonomous vehicle that feels safe but reminds us that no driver—human or artificial—is perfect. Photograph by Beck Diefenbach / Reuters

On a clear morning in early May, Brian Lathrop, a senior engineer for Volkswagen’s Electronics Research Laboratory, was in the driver’s seat of a Tesla Model S as it travelled along a stretch of road near Blacksburg, Virginia, when the car began to drift from its lane. Lathrop had his hands on the wheel but was not in control of the vehicle. The Tesla was in Autopilot mode, a highly evolved version of cruise control that, via an array of sensors, allows the car to change lanes, steer through corners, and match the lurching of traffic unaided. As the vehicle—one of a fleet belonging to Virginia Tech’s Transportation Institute, which Lathrop was visiting that day—lost track of the road markings, he shook the wheel to disengage Autopilot. “If I hadn’t been aware of what was happening, it could have been a completely different outcome,” Lathrop told me recently.

The same week, six hundred miles south of Blacksburg, in Florida, a forty-year-old Tesla driver named Joshua Brown experienced that different outcome. His Model S, driving on Autopilot along Route 27, crunched into the side of an eighteen-wheeler, passing beneath the vehicle’s trailer, which sheared off the Tesla’s roof and windshield. Brown was killed. The crash is the subject of an ongoing inquiry—on Tuesday, the U.S. National Highway Traffic Safety Administration, which investigates defects, publicly released a series of questions that it sent Tesla earlier this month—but the company was quick to issue an explanation. In a blog post published on June 30th, the day that the accident was first announced, Tesla stated, “Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied.” The company further noted the “extremely rare circumstances of the impact.”

Journalists, engineers, science-fiction authors, ethicists, car manufacturers, and, naturally, lawyers had long anticipated this moment. In testimony before the Senate Committee on Commerce, Science, and Transportation in mid-March, Mary Cummings, the director of Duke University’s Humans and Autonomy Laboratory, called for a “significantly accelerated self-driving testing program” in order to avoid the first fatality of semi-autonomous driving. That time is now past, and the challenge of persuading customers of the trustworthiness of these vehicles has become even more salient.

As Tesla pointed out in its blog post, Brown’s is the only death so far in more than a hundred and thirty million miles of Autopilot driving. Google’s fleet of similar vehicles has, according to the company, driven more than 1.5 million miles with only one minor collision, a fender bender involving a self-driving Lexus S.U.V. and a bus. Uber, a company for which self-driving taxis may become the full and final act of putting cabbies out of work, has a test car on the road in Pittsburgh that it hopes will make “transportation as reliable as running water.” Indeed, so far, autonomous vehicles have had an exemplary safety record. Tesla’s data demonstrates a slight improvement over humans, who, according to a 2015 report by the nonprofit U.S. National Safety Council, account for 1.3 deaths per hundred million vehicle miles—nearly thirty-three thousand people a year. But it is a flawed comparison. Autopilot is designed for use only on freeways, where human drivers, too, have far fewer accidents. And even if travelling by autonomous vehicle is shown to be statistically safer, car designers will need to find ways to reassure people beyond mere numbers. According to a recent survey from AAA, only one in four U.S. drivers would place their trust in an autonomous vehicle.
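For readers who want to see how the figures above line up, here is a minimal back-of-the-envelope sketch in Python. It simply converts both records into deaths per hundred million vehicle miles, using the numbers cited in the paragraph above and rounding the Autopilot mileage to an assumed hundred and thirty million; it does not attempt to correct for the freeway-only caveat noted there.

```python
# Back-of-the-envelope comparison of the fatality figures cited above.
# Assumptions: one death in roughly 130 million miles of Autopilot
# driving (the article says "more than" that), versus the National
# Safety Council's 2015 estimate of 1.3 deaths per 100 million vehicle
# miles for human drivers. Illustrative only: Autopilot miles are
# mostly freeway miles, where crash rates are lower to begin with.

AUTOPILOT_DEATHS = 1
AUTOPILOT_MILES = 130_000_000

HUMAN_DEATHS_PER_100M_MILES = 1.3  # 2015 National Safety Council figure

# Normalize the Autopilot record to the same per-100-million-mile basis.
autopilot_rate = AUTOPILOT_DEATHS / (AUTOPILOT_MILES / 100_000_000)

print(f"Autopilot:     {autopilot_rate:.2f} deaths per 100 million miles")
print(f"Human drivers: {HUMAN_DEATHS_PER_100M_MILES:.2f} deaths per 100 million miles")
# Autopilot:     0.77 deaths per 100 million miles
# Human drivers: 1.30 deaths per 100 million miles
```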

“The real hurdle to the widespread adoption of autonomous vehicles is psychology,” Chris Rockwell, the C.E.O. of Lextant, a research consultancy that focusses on user experience, told me. “People will forgive other humans much more quickly than they will technologies when they fail.” At Volkswagen, Lathrop is currently working on the problem with Traffic Jam Pilot, which is expected to feature in the next Audi A8. The system can control the car and issue a warning should its cameras detect that the driver has fallen asleep at the wheel—a provision similar to Tesla’s requirement that drivers keep their hands on the wheel, even when in Autopilot. “There are three key ways to make the occupants of a self-driving car feel safe,” Lathrop said. “It must be clear when the vehicle is operating in autonomous mode. Occupants must know that the car is sensing its environment—other vehicles and pedestrians and so on. Finally, the vehicle must prime people before it makes a maneuver. There’s nothing more disconcerting for passengers than when a driver makes abrupt lane changes or swerves.” According to a paper published in 2014 in the Journal of Experimental Social Psychology, the more humanlike the car’s alert features—name, voice, gender—the more people trust it to operate competently, as a human driver would. The Model S indicates visually whether Autopilot is engaged, but some users have complained about the absence of a voice prompt.

The problem of trust faces outward as well as inward. How do pedestrians and other drivers distinguish a vehicle that is driving autonomously from one that is not? A memorable recent YouTube clip shows a group of men in suits at a car dealership testing a Volvo that they believed to be equipped with sensors to prevent the car from hitting pedestrians. The crash-test-dummy volunteer tosses a smile at the camera as the engine starts, then instinctively braces for impact as the treacherous car knocks him onto its hood. (Volvo later claimed that the vehicle in the clip was not equipped with the appropriate sensors, which cost extra.) The same day that the video, which has been viewed more than five million times, was uploaded, Google was awarded a patent for an adhesive hood, designed to stick a human to the front of a self-driving car and prevent the secondary injuries caused by tumbling into the windshield or rebounding onto the asphalt. Volkswagen’s approach to instilling pedestrian trust is rather more mundane. The company has tested an autonomous Audi A7 that features a strip of L.E.D.s facing out of the front windshield. The lights blink and follow pedestrians at a crosswalk to signal that the car sees them—the equivalent, perhaps, of a friendly wave of the hand.

While one’s first time behind the wheel of an autonomous car (or in front of it) may feel perilous, Volkswagen’s research has shown that trust between human and vehicle blossoms rapidly and, in many cases, completely. A decade ago, according to Lathrop, the company ran a series of internal studies in which it put people in the driver’s seat of a car that they were told was fully autonomous. Behind them, behind a curtain, sat a driver, who controlled the car using a camera feed of the road ahead, as if playing a video game. “We found that people get comfortable very quickly—almost too quickly, in fact—in letting the car drive itself,” Lathrop said. Unyoked from the activity of driving, most people experience what researchers refer to as passive fatigue, a state of dulled awareness that can set in after as little as ten minutes. Wearied by inactivity, a car’s occupants typically look for distractions. Frank Baressi, the sixty-two-year-old driver of the truck involved in the crash that killed Brown, claims that when he approached the wrecked Tesla he heard one of the Harry Potter films playing inside the car. Investigators found both a portable DVD player and a laptop inside.

Tesla strenuously warns consumers to pay attention while their car is in autonomous mode, but the caveat may not be strong enough. On Thursday, Laura MacCleery, the vice-president of consumer policy and mobilization for Consumer Reports, said that the very name of Tesla’s self-driving feature—Autopilot—“gives consumers a false sense of security.” Lathrop is working with his colleagues at Virginia Tech on relaying alerts through drivers’ phones or tablets while they’re at the wheel, to make the warnings harder to ignore. But whether or not Brown was distracted at the time of the collision, Lathrop said that, as autonomous systems improve and trust in them increases, the temptation for occupants to do other things will grow stronger. “We are not naïve,” he said. “But ultimately the operator of the vehicle is responsible for having some degree of situational awareness. When it comes to autonomous cars, it’s a system. It’s a machine. It’s not making decisions. It’s not aware of everything. It’s simply sensing its environment and responding as it has been trained.” This is the paradox facing auto engineers: how to design self-driving cars that feel trustworthy while simultaneously reminding their occupants that, no matter how pristine a given model’s safety record, no driver—human or artificial—is perfect. How, in other words, to free drivers from the onus of driving while burdening them with the worry that, at any moment, they will need to take back control.