Tuesday, January 5, 2016

Cambridge University develops new systems for driverless cars



Two new systems for driverless cars have been developed in Cambridge, and they have the potential to replace sensors costing tens of thousands of pounds.

The separate but complementary systems have been designed by researchers at Cambridge University. The first, called SegNet, distinguishes between the various components of a road scene in real time using a regular camera or smartphone, while the second can identify a user's location and orientation in places where GPS does not function.

Although the systems cannot currently control a driverless car, Professor Roberto Cipolla, from the university's engineering department, said the ability to make a machine 'see' and accurately identify where it is and what it's looking at is a vital part of developing autonomous vehicles and robotics.

"Vision is our most powerful sense and driverless cars will also need to see," said Professor Cipolla, who led the research. "But teaching a machine to see is far more difficult than it sounds."

SegNet can take an image of a street scene it hasn't seen before and classify it, sorting objects into 12 different categories – such as roads, street signs, pedestrians, buildings and cyclists – in real time. It can deal with light, shadow and night-time environments, and the researchers say it currently labels more than 90% of pixels correctly.
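To make the idea concrete, here is a minimal sketch of how per-pixel classification and the quoted accuracy figure can be expressed. This is not the SegNet architecture itself (which is a deep neural network); it only illustrates, with hypothetical score maps and an assumed 12-category list, how a label is chosen at each pixel and how "percentage of pixels labelled correctly" is computed.

```python
import numpy as np

# Hypothetical 12 road-scene categories, loosely mirroring the article's examples.
CLASSES = ["road", "street sign", "pedestrian", "building", "cyclist",
           "pavement", "tree", "sky", "car", "fence", "pole", "lane marking"]

def label_pixels(score_maps):
    """Turn per-class score maps of shape (C, H, W) into a label map (H, W)
    by taking the highest-scoring class at each pixel."""
    return np.argmax(score_maps, axis=0)

def pixel_accuracy(predicted, ground_truth):
    """Fraction of pixels whose predicted class matches the manual label."""
    return float(np.mean(predicted == ground_truth))

# Toy example: a 2x2 "image" with random scores for each of the 12 classes.
rng = np.random.default_rng(0)
scores = rng.random((len(CLASSES), 2, 2))
labels = label_pixels(scores)
print(labels.shape)  # (2, 2): one class index per pixel
```

In a real segmentation network the score maps come from the final layer of the model; everything downstream of them, including the accuracy metric, works as sketched here.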

Previous systems using expensive laser- or radar-based sensors have not been able to reach this level of accuracy while operating in real time.

Users can visit the SegNet website and upload an image or search for any city or town in the world, and the system will label all the components of the road scene. The system has been successfully tested on both city roads and motorways.

SegNet learns by example, and had to be "trained" by a group of Cambridge undergraduate students, who manually labelled every pixel in each of 5,000 images, with each image taking about 30 minutes to complete. Once the labelling was finished, the researchers took a further two days to train the system before it was put into action.
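The scale of that labelling effort is easy to work out from the figures in the article:

```python
images = 5_000
minutes_per_image = 30

total_hours = images * minutes_per_image / 60
print(total_hours)  # 2500.0 hours of manual labelling in total
```

Roughly 2,500 person-hours of annotation, against just two days of machine training, which is why hand-labelled datasets like this one are such a valuable resource.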

"It's remarkably good at recognising things in an image, because it's had so much practice," said Alex Kendall, a PhD student in the Department of Engineering. "However, there are a million knobs that we can turn to fine-tune the system so that it keeps getting better."

The second part of the package is able to localise a user and determine their orientation from a single colour image in a busy urban scene. The system is far more accurate than GPS and works in places where GPS does not, such as indoors, in tunnels, or in cities where a reliable GPS signal is not available.

"In the short term, we're more likely to see this sort of system on a domestic robot – such as a robotic vacuum cleaner, for instance," said Cipolla.

"It will take time before drivers can fully trust an autonomous car, but the more effective and accurate we can make these technologies, the closer we are to the widespread adoption of driverless cars and other types of autonomous robotics."


