Tesla’s semi-autonomous Autopilot system is known in part for its ability to improve itself through over-the-air software updates. But if Tesla wants to upgrade the actual hardware, that’s a different story. To address that, Tesla is reportedly planning major camera and sensor upgrades for Autopilot 2.0, and possibly full autonomy.
If Tesla is going to achieve CEO Elon Musk’s goal of full autonomy by 2019 at the latest, it’s going to need to step up the physical tech that ships with each car.
Here’s what the next generation of Tesla’s Autopilot could look like, according to Electrek:
The new sensor suite will enable level 3 autonomous driving and potentially level 4 fully autonomous driving in a not too distant future.
[...]
Sources with knowledge of the Autopilot program told us that the new suite will keep the current front-facing radar and add more around the car, likely one in each corner. Additionally, the system will feature a new front-facing triple camera system for which we are told Tesla started installing new housing in the Model S production this week.
[...]
The front-facing triple camera system is likely based on, but not part of, Mobileye’s ‘Front-facing Trifocal Constellation’. That system features a main 50° field of view (FOV) camera for general inputs, a narrow camera with a 25° FOV for redundancy in object detection, lanes and traffic lights, and a fisheye 150° FOV wide camera for parallel lanes, lane cutting, cyclist and pedestrian detection.
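To picture how those three overlapping fields of view would divide up the road ahead, here’s a toy sketch using only the FOV numbers from the report. The camera names, the shared forward optical axis, and the function itself are our illustration, not anything from Tesla or Mobileye:

```python
# Illustrative only: which of the three reported forward cameras would
# see an object at a given bearing, using the FOVs from the Electrek
# report. Assumes all three cameras point straight ahead.

FOVS_DEG = {
    "main": 50,    # general inputs
    "narrow": 25,  # redundancy for objects, lanes, traffic lights
    "wide": 150,   # fisheye: parallel lanes, cyclists, pedestrians
}

def cameras_covering(bearing_deg):
    """Return the cameras whose field of view includes an object at
    `bearing_deg` degrees off the car's forward axis (0 = dead ahead)."""
    return [name for name, fov in FOVS_DEG.items()
            if abs(bearing_deg) <= fov / 2]

print(cameras_covering(0))   # dead ahead: all three cameras see it
print(cameras_covering(40))  # off to the side: only the wide fisheye
```

The point of the overlap, per the report, is redundancy: anything dead ahead is watched by all three cameras at once, while the fisheye alone sweeps up the periphery.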
Tesla first started getting ready for its semi-autonomous Autopilot in 2014, when it began shipping cars with hardware like a front-facing camera, radar and ultrasonic sensors with 360° coverage. The version 7.0 update of Tesla’s operating system, released in 2015, added features like Autosteer and Autopark.
Tesla’s system is constantly getting better, the company says, thanks to complex machine-learning algorithms, and more significant improvements get numbered iterations. Tesla’s operating system is on version 7.1 right now, for example.
And that’s probably about as far as Tesla can go with the current hardware, barring incremental improvements in driving ability from software. Tesla can do all the tinkering it wants with software, but when it comes to a major hardware limitation, such as in the crash that killed Joshua Brown, when his car’s cameras couldn’t see the truck in front of it, the company needs to make major changes.
And when it comes to full autonomy, the current system is clearly limited, too. While it can recognize speed limit signs, for instance, it still doesn’t know what to do with something like a traffic light.
It will be interesting to see how all of those new systems work together, and how well, if the report is indeed correct. Will there be one big leap to a fully autonomous mode, or little things first? What would the system’s limitations be? And how will it cope with a driver who’s inattentive or, worse yet, incapacitated?
We reached out to Tesla for additional comment; a spokeswoman told us, “We don’t comment on rumor and speculation.”