
Monday, December 28, 2015

Why Humans Are The Problem With Autonomous Cars


It's not that surprising. After all, humans adjust their driving style as they grow more comfortable with a particular route, road, or car. That's not to say these shortcuts are safer (seriously, why does no one use their blinkers?) or even a good idea, but that's what happens. Sometimes, as at a four-way stop, it's merely making eye contact with other drivers. The decisions we make are not textbook, but they work.

Computers, on the other hand, rely on data and pre-programmed decision-making to navigate the road. Neither approach is bad (though you can make a strong argument for following the law), but the two just don't seem to mix well when sharing the road. It's hard to argue against autonomous cars if the main problem is that they're too safe.

Google's cars, and other autonomous vehicles, are programmed to drive as perfectly as possible. But that means they're sometimes out of sync with the realities of the world, which has led to a number of accidents. Google reports the cars have been in 16 crashes since 2009, many of them cases where the Google car was rear-ended, but none where the autonomous system was at fault. In fact, only once was the Google car the cause of a crash, and a human was piloting it at the time.

(Team Autonomous: 16, Team Human: 0)

The Google car, like other ADAS systems, is designed to always choose the law-abiding, safe route, but that's not always going to get you where you need to go. For example, the Google car had issues with four-way stops. The vehicle was programmed to wait until everyone else had stopped moving before it could go. In theory, that works well. In practice, the human-driven cars were always edging forward, so the Google car kept detecting movement and never actually went. The car struggles to interpret the erratic (and technically incorrect) driving habits of human drivers.

These are problems that can be solved. The potential solution for the four-way stop was to have the car inch forward and be more assertive about entering the intersection. It's not that the car would suddenly be breaking laws; small adjustments like this make it more adaptable to real-world driving.
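To make the stalemate concrete, here's a minimal sketch in Python of the two rules. This is assumed logic with invented names (IntersectionSensor, others_moving), not Google's actual control code: the strict rule waits for perfect stillness and can wait forever, while the adjusted rule creeps forward after a bounded wait to signal intent, the software equivalent of eye contact.

```python
import time

class IntersectionSensor:
    """Hypothetical stand-in for the car's perception stack."""
    def __init__(self, movement_events):
        # movement_events: scripted readings, True while another car edges forward
        self.movement_events = list(movement_events)

    def others_moving(self):
        return self.movement_events.pop(0) if self.movement_events else False

def strict_four_way_stop(sensor):
    # Original rule: proceed only once every other car is completely still.
    # Humans who keep edging forward mean this condition rarely holds,
    # so the car can sit at the stop line indefinitely.
    while sensor.others_moving():
        time.sleep(0.01)
    return "proceed"

def assertive_four_way_stop(sensor, patience_s=0.05):
    # Adjusted rule: after a bounded wait, inch into the intersection to
    # signal intent instead of waiting for perfect stillness.
    start = time.monotonic()
    while sensor.others_moving():
        if time.monotonic() - start > patience_s:
            return "creep forward, then proceed"
        time.sleep(0.01)
    return "proceed"

# A stream of "someone is edging forward" readings stalls the strict rule,
# but the assertive one commits after its patience runs out.
print(assertive_four_way_stop(IntersectionSensor([True] * 100)))
```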

That being said, if we just let the systems do the work, we might all be better off.

Late last month, a Google car was involved in an accident in which a pedestrian was crossing the street in front of the car. The car began braking, but the human driver, in an effort to ensure the pedestrian's safety, hit the brakes a little faster than the autonomous system would have. The car was rear-ended by another vehicle that had been changing lanes to move in behind it. It's a situation most drivers have been in: you look ahead, check your mirror, switch lanes, and don't realize the car you're now behind is braking.

The weird aspect of this story is that when Google analyzes the data from crashes, it can see what would have happened if the car had been allowed to complete its own actions. In this case, the car would have braked a little more slowly, stopped a little further along, and possibly avoided the accident entirely. That's not to say the Google driver was wrong; he or she was just acting as a safety net for the technology. But it highlights that, left to its own devices, the car uses quick calculations to make decisions where humans rely on less scientific means. Obviously, the car behind might still have rear-ended the Google vehicle (it depends on how closely that driver was paying attention, and this was a minor fender-bender at between 5 and 10 mph), but it's an interesting moment to consider. Humans have a knee-jerk reaction to insist that we are smarter and better equipped to make decisions than computers, but we're really not.
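The counterfactual comes down to simple kinematics. Here's a rough sketch of the arithmetic (the deceleration values are illustrative assumptions, not Google's logged data): a gentler brake from 10 mph stops the car a few feet further along and takes roughly twice as long, which gives a trailing driver more time and space to react.

```python
def stopping_distance_ft(speed_mph, decel_g):
    # Constant-deceleration kinematics: d = v^2 / (2a)
    v = speed_mph * 1.46667      # mph -> ft/s
    a = decel_g * 32.174         # g -> ft/s^2
    return v * v / (2 * a)

def stopping_time_s(speed_mph, decel_g):
    v = speed_mph * 1.46667
    a = decel_g * 32.174
    return v / a

# A hard human stab at the pedal vs. the gentler profile the planner chose.
for label, g in [("human hard brake", 0.7), ("planned gentle brake", 0.35)]:
    print(f"{label}: {stopping_distance_ft(10, g):.1f} ft "
          f"in {stopping_time_s(10, g):.2f} s from 10 mph")
# human hard brake: 4.8 ft in 0.65 s from 10 mph
# planned gentle brake: 9.6 ft in 1.30 s from 10 mph
```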

Criticism of the technology is pretty rampant, and Google has focused on making the process as transparent as possible with monthly reports and analysis. It answers frequently asked questions in these reports, which should help with some of the hesitation. For example, what about deer? The car's sensors, which work during the day and at night, can detect deer or any other animal, and the system knows it's not a stationary object. When one moves into the road, the car slows or stops depending on the situation.
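As a toy illustration of that stationary-versus-moving distinction (assumed logic with invented names like classify_object, not Google's actual perception code), anything whose position drifts across successive sensor frames gets treated as an animal or pedestrian rather than a fixed obstacle, and the planner reacts accordingly:

```python
def classify_object(positions, threshold_m=0.2):
    # positions: (x, y) detections of the same object across sensor frames.
    # An object that drifts more than the threshold is a moving agent
    # (deer, pedestrian) rather than a fixed obstacle (sign, parked car).
    (x0, y0), (x1, y1) = positions[0], positions[-1]
    displacement = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
    return "moving" if displacement > threshold_m else "stationary"

def plan_speed(obj_class, in_path):
    # React more cautiously to anything classified as a moving agent.
    if in_path:
        return "stop"
    return "slow" if obj_class == "moving" else "maintain speed"

# A deer drifting toward the lane over three frames.
deer = [(30.0, 5.0), (29.0, 4.2), (28.1, 3.1)]
label = classify_object(deer)
print(label, "->", plan_speed(label, in_path=False))  # moving -> slow
```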

Critics also throw out a lot of curveballs about things suddenly appearing in the road. It's not a situation that happens frequently, but it's the one people always insist they would handle better than a computer. Google knows that sometimes things fall off trucks or people run out from behind cars; one time the car even had to navigate around an electric wheelchair chasing a duck in the middle of the road (they swear that one happened). The car has 360-degree visibility out to 600 feet, and it is constantly watching, which can't be said for any human driver on the road. The team has been testing the vehicle's reaction to sudden roadway events, including throwing paper and giant fake birds at it, in addition to the real-world testing.
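To put that 600-foot range in perspective, a quick back-of-the-envelope calculation (the speeds here are illustrative) shows how many seconds of warning the car gets before reaching something at the edge of its view, comfortably more than the second or two a typical human reaction allows:

```python
def lookahead_s(sensor_range_ft, speed_mph):
    # Seconds before the car reaches an object at the edge of sensing range.
    return sensor_range_ft / (speed_mph * 1.46667)

for mph in (25, 35, 65):
    print(f"at {mph} mph: {lookahead_s(600, mph):.1f} s of warning")
# at 25 mph: 16.4 s; at 35 mph: 11.7 s; at 65 mph: 6.3 s
```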

The awkward technological transition period is going to take a while, but it also involves a degree of cultural attitude change.

In the meantime, let's all dream about the day the commute can be used for something other than driving.