Humans help AI software to ‘detect blind spots in self-driving cars’

Blind spots in the artificial intelligence (AI) of self-driving cars could be corrected using input from humans, new research claims.

A team of researchers from the Massachusetts Institute of Technology (MIT) and tech giant Microsoft has developed a model that first puts an AI system through simulation training and then has a human work through the same scenarios in the real world, with the AI observing and learning any changes in behaviour it needs to make.
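
The research paper describes this loop more formally, but the basic shape of the idea can be sketched in a few lines of Python. The sketch below is purely illustrative: the function names (collect_blind_spot_labels, sim_policy, human_action) are hypothetical and not taken from the researchers' code.

```python
# Illustrative sketch only: compare a simulation-trained policy's choices
# with a human's choices in the real world, and record disagreements as
# candidate "blind spot" labels. All names here are hypothetical.

def collect_blind_spot_labels(real_world_states, sim_policy, human_action):
    """For each real-world state, note whether the human's action
    differed from what the simulation-trained policy would have done."""
    labels = []
    for state in real_world_states:
        intended = sim_policy(state)      # what the AI learned in simulation
        corrected = human_action(state)   # what the human actually does
        # 1 = potential blind spot (human disagreed), 0 = looks acceptable
        labels.append((state, 1 if corrected != intended else 0))
    return labels
```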

The system has so far only been tested in video games, but study author Ramya Ramakrishnan, a graduate student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), said: “The model helps autonomous systems better know what they don’t know.

“Many times, when these systems are deployed, their trained simulations don’t match the real-world setting and they could make mistakes, such as getting into accidents.

“The idea is to use humans to bridge that gap between simulation and the real world, in a safe way, so we can reduce some of those errors.”

The team gives the example of a driverless car system that cannot tell the difference between a white truck and an ambulance with its lights flashing, and only learns to move out of the way of the ambulance after receiving feedback from a human tester.

The researchers said they have also used an algorithm known as the Dawid-Skene method, which uses machine learning to make probability calculations and spot patterns in the scenario responses, helping the system determine whether a situation is truly safe or still has the potential to cause problems.
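
Dawid-Skene is a long-standing statistical technique for combining noisy labels from several sources into an estimate of the true label. The sketch below is a simplified, generic two-label version written purely for illustration; it is not taken from the MIT and Microsoft work, and the function and variable names are the author's own.

```python
import numpy as np

def dawid_skene_binary(labels, n_iter=50):
    """Simplified binary Dawid-Skene label aggregation (illustrative only).

    labels: (n_items, n_annotators) integer array with entries
            1 ("blind spot"), 0 ("safe"), or -1 (no label given).
    Returns: estimated probability that each item is truly a blind spot.
    """
    n_items, n_annotators = labels.shape
    observed = labels >= 0

    # Initialise item probabilities with a simple vote over observed labels.
    counts = np.maximum(observed.sum(axis=1), 1)
    p = np.where(observed, labels, 0).sum(axis=1) / counts
    p = np.clip(p, 1e-6, 1 - 1e-6)

    for _ in range(n_iter):
        # M-step: estimate each annotator's confusion matrix
        # theta[a, true_label, reported_label], with light smoothing.
        theta = np.full((n_annotators, 2, 2), 1e-6)
        for a in range(n_annotators):
            rep = labels[observed[:, a], a]
            pt = p[observed[:, a]]
            theta[a, 1, 1] += np.sum(pt * (rep == 1))
            theta[a, 1, 0] += np.sum(pt * (rep == 0))
            theta[a, 0, 1] += np.sum((1 - pt) * (rep == 1))
            theta[a, 0, 0] += np.sum((1 - pt) * (rep == 0))
        theta /= theta.sum(axis=2, keepdims=True)
        prior = p.mean()

        # E-step: recompute the posterior probability of each true label.
        log1 = np.full(n_items, np.log(prior))
        log0 = np.full(n_items, np.log(1 - prior))
        for a in range(n_annotators):
            mask = observed[:, a]
            rep = labels[mask, a].astype(int)
            log1[mask] += np.log(theta[a, 1, rep])
            log0[mask] += np.log(theta[a, 0, rep])
        p = np.clip(1.0 / (1.0 + np.exp(log0 - log1)), 1e-6, 1 - 1e-6)

    return p
```

In this setting the "annotators" would be the different pieces of human feedback about similar situations, and the output is a probability of a blind spot rather than a single yes-or-no verdict.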

This is to avoid the “extremely dangerous” situation of the system becoming overconfident and marking a situation as safe when it makes the correct decision only 90% of the time; instead, it stays aware of the remaining 10% and looks for any further weaknesses it may need to address.

“When the system is deployed into the real world, it can use this learned model to act more cautiously and intelligently,” Ms Ramakrishnan said.

“If the learned model predicts a state to be a blind spot with high probability, the system can query a human for the acceptable action, allowing for safer execution.”
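
In practical terms, that cautious behaviour boils down to a threshold check on the learned blind-spot model. The sketch below is again only illustrative, with a hypothetical blind_spot_probability model, ask_human helper and 0.5 cut-off standing in for whatever the real system uses.

```python
# Illustrative only: act autonomously unless the learned model says the
# current state is probably a blind spot, in which case defer to a human.

BLIND_SPOT_THRESHOLD = 0.5  # hypothetical cut-off, tuned in practice

def choose_action(state, policy, blind_spot_probability, ask_human):
    if blind_spot_probability(state) >= BLIND_SPOT_THRESHOLD:
        # The learned model flags this state as a likely blind spot,
        # so query a human for the acceptable action.
        return ask_human(state)
    # Otherwise the simulation-trained policy is trusted to act on its own.
    return policy(state)
```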

Chris Price