Autonomous vehicles have long been touted as a potential life-saver on our roads. Carmakers like Toyota, Volvo, Mercedes-Benz and Kia, to name a few, have long been believers in the self-driving car, optimistically championing its rightful place in our future.
Yet you’ve heard it all before, and have probably wondered about it yourself: could autonomous vehicles ever really be safer than human drivers? What if a driverless car, in an attempt to save its own passengers, decides to take a course that sends it crashing into a crowd of pedestrians?
Scary, isn’t it? That’s the precise sentiment shared by critics and sceptics of autonomous vehicles around the world. And how can you blame them? Do you truly believe that a computer is capable of making a rational and ethical decision in a split second?
At the recent Consumer Electronics Show (CES), Automotive News caught up with Gill Pratt, head of the Toyota Research Institute (TRI), to put that question to him. TRI is an organisation funded by Toyota Motor Corp. with a special focus on artificial intelligence and robotics.
Pratt says, “I think it’s important not to get too hung up on these extraordinarily contrived rare cases where we have the mistaken belief that human beings actually solved the problem well. Human beings don’t solve the trolley problem well.”
For those unfamiliar with the “trolley problem”, it is a thought experiment: an unstoppable trolley is hurtling down the tracks towards five people, who will be killed. You have a switch that can divert the trolley onto a side track where a single person stands. Which would you let die: the original five, or the one?
The TRI boss firmly believes that humans and computers are capable of making the same mistakes. In the greater scheme of things, however, Pratt says the odds favour computers to do the safer job.
Acknowledging that computers are bound to make mistakes here and there, Pratt says, “as part of this drastic reduction in fatalities and accidents (thanks to autonomous cars), there are still going to be some cases where the car had no choice, and it’s important that we as a society come to understand that.”
He continued, “I do worry a lot about the learning curve in making sure we don’t mistakenly think the machine is better than it is and certify it to operate in a set of conditions beyond what it actually can do in a safe way.”
“I think it is inevitable that there will be a learning curve in autonomy as well. It’s not a Toyota issue. It’s not any other OEM’s issue. It’s everybody’s issue.”