Valentine's Day was a bummer in Mountain View, Calif. For the first time, one of Google's self-driving cars, a modified Lexus SUV, caused a crash. Detecting a pile of sandbags surrounding a storm drain in its path, the car moved into the center lane to avoid the hazard. Three seconds later it collided with the side of a bus. According to the accident report, the Lexus's test driver saw the bus but assumed the bus driver would slow down to allow the SUV to continue.
It was not the project's first crash, but it was the first caused in part by nonhuman error (most incidents involve the driverless cars getting rear-ended by human drivers not paying attention at traffic lights). The episode shines a light on an ever-looming gray area in our robotic future: Who is responsible, and who pays for damages, when an autonomous vehicle crashes?
The sense of urgency to find clear answers to this and other self-driving vehicle questions is growing. Automakers and policy experts have worried that a lack of consistent national regulation would make rolling out these cars across all 50 states nearly impossible. To spur progress, the Obama administration asked the Department of Transportation to propose complete national testing and safety standards by this summer. But as far as the question of accountability and liability goes, we might already be homing in on an answer, one that points to a shift in how the root cause of damage is assessed: When a computerized driver replaces a human one, experts say the companies behind the software and hardware sit in the legal liability chain—not the car owner or the person's insurance company. Eventually, and inevitably, the carmakers will have to take the blame.
Self-driving pioneers, in fact, are starting to make the switch. Last October, Volvo declared that it would pay for any injuries or property damage caused by its fully autonomous IntelliSafe Autopilot system, which is scheduled to debut in the company's cars by 2020. The thinking behind the decision, explains Erik Coelingh, Volvo's senior technical leader for safety and driver-support technologies, is that Autopilot will include so many redundant and backup systems—duplicate cameras, radars, batteries, brakes, computers, steering actuators—that a human driver will never need to intervene and thus cannot be at fault. “Whatever system fails, the car should still have the ability to bring itself to a safe stop,” he says.
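Coelingh's claim boils down to a failover rule: as long as either the primary or the backup unit of every critical subsystem is working, the car keeps driving itself; once any subsystem has failed completely, it brings itself to a controlled stop. Here is a minimal sketch of that pattern, assuming hypothetical subsystem names and a two-level redundancy scheme rather than Volvo's actual architecture:

```python
# Hypothetical illustration of the redundancy pattern Coelingh describes;
# the subsystem names and failover rule are assumptions, not Volvo's design.

PRIMARY = {"camera": True, "radar": True, "computer": True, "brakes": True}
BACKUP = {"camera": True, "radar": True, "computer": True, "brakes": True}

def subsystem_ok(name: str) -> bool:
    """A function survives as long as its primary or backup unit still works."""
    return PRIMARY[name] or BACKUP[name]

def drive_mode() -> str:
    """Keep driving autonomously unless some subsystem has failed outright."""
    if all(subsystem_ok(name) for name in PRIMARY):
        return "autonomous driving"
    return "execute safe stop"  # degrade to a minimal-risk controlled stop

PRIMARY["computer"] = False  # primary computer fails; the backup takes over
print(drive_mode())          # autonomous driving
BACKUP["computer"] = False   # both computers fail
print(drive_mode())          # execute safe stop
```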
The proliferation of vehicles already on the road with partial automation shows how quickly the scenario that Coelingh describes is coming about. A growing number of cars include crash-imminent braking systems, which rely on optics to detect potential front-end impacts and proactively apply brakes. Audi, BMW and others have developed cars that can parallel park themselves. And later this year Volvo will roll out the U.S.'s first semiautonomous highway driving feature, called Pilot Assist, on the 2017 S90 sedan. The system uses a windshield-mounted computer equipped with a camera and radar to automatically accelerate, decelerate, avoid obstacles and stay in a lane at speeds of up to 80 miles per hour.
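To make the braking logic concrete, one common way to frame such a decision is a time-to-collision check: divide the gap to the vehicle ahead by the closing speed and intervene when the result drops below a threshold. The sketch below is a minimal illustration under that assumption; the two-second threshold, function names and sensor inputs are made up for the example and are not any automaker's actual implementation.

```python
# Illustrative time-to-collision (TTC) braking check; the threshold and
# inputs are assumptions, not a production crash-imminent braking system.

BRAKE_TTC_THRESHOLD_S = 2.0  # assumed trigger: brake if impact is under 2 s away

def time_to_collision(gap_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if neither vehicle changes speed."""
    if closing_speed_mps <= 0:        # gap steady or growing: no collision course
        return float("inf")
    return gap_m / closing_speed_mps

def should_auto_brake(gap_m: float, own_speed_mps: float, lead_speed_mps: float) -> bool:
    """Trigger automatic braking when the projected impact is too close."""
    ttc = time_to_collision(gap_m, own_speed_mps - lead_speed_mps)
    return ttc < BRAKE_TTC_THRESHOLD_S

# Example: 30 m behind a car that is 18 m/s slower -> TTC of about 1.7 s, so brake.
print(should_auto_brake(gap_m=30.0, own_speed_mps=30.0, lead_speed_mps=12.0))  # True
```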
Features such as Pilot Assist exist in what tech policy expert and University of South Carolina assistant professor Bryant Walker Smith calls the “mushy middle of automation,” where carmakers still require human drivers to pay attention. “It's not always clear where the line between the human and the machine falls,” he says.
For the time being, some automakers are aiming to keep human drivers clearly on the responsible side of that line. General Motors' forthcoming Super Cruise, which will launch on a Cadillac in 2017 and is similar to Pilot Assist, comes with the caveat that the human driver must remain alert and ready to take over steering if visibility dips or the weather changes. With Pilot Assist, Volvo places a similar onus on the driver; touch sensors on the steering wheel ensure the person remains engaged.
By the time fully autonomous driving becomes a reality, however, carmakers such as Volvo, Mercedes and Google are confident that they will have these technologies—and many more—so buttoned up that they will be able to take the driver out of the operation and liability picture almost entirely. What is more, a 2014 Brookings Institution study found that current product liability law already covers the shift, so the U.S. might not need to rewrite any laws for automation to continue moving forward.
It is a relatively safe bet for driverless carmakers to say they will foot the bill for everything from fender benders to violent crashes because semiautonomy is already showing that computer drivers are likely safer than human ones. Insurance Institute for Highway Safety data, for instance, show that crash-avoidance braking can reduce rear-end collisions by 40 percent. And Volvo's Coelingh notes that a study of the European version of Pilot Assist found that the computer maintains safer following distances and brakes harshly less often than human drivers do.
In the long run, “from the manufacturer's perspective,” Smith says, “what they may be looking at is a bigger slice of what we all hope will be a much smaller [liability] pie.”