We reported earlier this year on MIT launching a new study of driver interaction with Tesla's Autopilot features. Lex Fridman, a postdoctoral associate at the MIT AgeLab responsible for the study, is presenting it as a way to provide big data to prove that advanced driver assist features, like those offered by Tesla's Autopilot, are safer than driving without them.
Fridman presented the ongoing study in more detail during TMC Connect earlier this summer, and this weekend the Tesla Motors Club released the presentation in full (embedded below). It's worth a watch.
The idea is quite simple: install cameras in Autopilot-equipped Tesla vehicles, pointed at the driver and the screens, and study the drivers' interactions with the technology.
As you can see from the picture above, the system can detect the driver's gaze and log any interaction with the steering wheel or center touchscreen, as well as detect when Autopilot is activated via the camera monitoring the instrument cluster.
By synchronizing all those camera feeds and running them through an image-processing system, the research group is able to log events without having to watch the drivers themselves. They have accumulated over 1,000 hours of data across more than 30,000 miles in 9 Tesla vehicles.
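To make the synchronization idea concrete, here is a minimal sketch of how several per-camera event streams could be merged into a single queryable timeline. All names, labels, and events here are hypothetical assumptions for illustration; the MIT team's actual pipeline has not been published.

```python
# Hypothetical sketch of multi-camera event logging. The Event fields,
# labels, and sample data are assumptions, not the MIT team's pipeline.
from dataclasses import dataclass
from heapq import merge

@dataclass(frozen=True)
class Event:
    timestamp: float   # seconds since the start of the drive, shared clock
    source: str        # which camera produced the event
    label: str         # e.g. "gaze_road", "hands_on_wheel", "autopilot_on"

# Each camera feed is reduced (e.g. by a frame-level classifier) to a
# chronologically sorted stream of labeled events; here we fake three streams.
driver_cam = [Event(0.0, "driver", "gaze_road"), Event(4.2, "driver", "gaze_touchscreen")]
wheel_cam = [Event(1.1, "wheel", "hands_on_wheel"), Event(5.0, "wheel", "hands_off_wheel")]
cluster_cam = [Event(2.5, "cluster", "autopilot_on")]

# Merge the per-camera streams into one ordered log, so researchers can
# query events instead of watching raw video.
timeline = list(merge(driver_cam, wheel_cam, cluster_cam, key=lambda e: e.timestamp))

# Example query: was the driver's gaze off the road while Autopilot was on?
autopilot_on = False
for event in timeline:
    if event.label == "autopilot_on":
        autopilot_on = True
    elif event.label == "autopilot_off":
        autopilot_on = False
    elif autopilot_on and event.label == "gaze_touchscreen":
        print(f"{event.timestamp:.1f}s: gaze on touchscreen during Autopilot")
```

The point of a design like this is that once every feed is stamped against a shared clock, analysis runs over compact event logs rather than hours of raw video.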
During the presentation, Fridman explained that he wants to back Tesla's claim that Autopilot is safer than manual driving with more data than Tesla currently bases that claim on:
(the presentation was in July – the data from Tesla’s Autopilot program has since grown significantly)
They are looking to expand the study and start collecting much more data. If you are interested in participating, you can sign up on the study's website. Tesla drivers can earn about $1,000 for participating in the study for a year, and of course they also contribute to a better understanding of the impact driver assist systems have on safety.
If you are wondering whether the data would be skewed by the drivers knowing they are being watched, since they would be more inclined to keep their hands on the wheel and not interact with the surrounding tech, Fridman doesn't think it's a problem. He calls it the "nose pick factor": the time it takes for subjects to forget that they are being filmed, which he says is generally less than a minute.
Fridman said that his team has been in contact with Tesla about the study, and he hopes the company will participate. He also said he expects Tesla to eventually incorporate a driver-facing camera:
“There is not a single car on the road today that has a driver facing camera – or at least from any popular automaker – and that seems to be a huge missing piece, especially with automation, the car should know what you are doing and right now Tesla doesn’t know what you are doing except with the pressure sensors on the steering wheel and that’s it. If it wants to have a better connection with you, be able to communicate more effectively with you, it needs to know what you are doing.”
I would think that a camera inside the cabin could also be useful for Tesla's upcoming 'Tesla Network' car-sharing program.
Of course, there's always the concern of privacy. We don't know how an OEM like Tesla would implement such a system, but the MIT study is already taking that into account. All collected data is kept secure on MIT servers and can be removed on request. Also, as previously mentioned, the researchers rarely have to look at the video feeds themselves, since the image-processing system logs the events.
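As a rough illustration of why that design helps with privacy, here is a hedged sketch in which only derived event records, not video frames, are stored, keyed by an anonymous participant ID, so honoring a removal request is a simple filter. The record schema and ID scheme are assumptions for illustration, not MIT's actual implementation.

```python
# Hypothetical privacy-oriented storage sketch: only derived event records
# are kept, keyed by an anonymous participant ID. The schema and IDs are
# assumptions, not MIT's actual storage design.
from typing import Dict, List

EventRecord = Dict[str, object]

store: List[EventRecord] = [
    {"participant": "P07", "timestamp": 2.5, "label": "autopilot_on"},
    {"participant": "P07", "timestamp": 4.2, "label": "gaze_touchscreen"},
    {"participant": "P12", "timestamp": 1.1, "label": "hands_on_wheel"},
]

def remove_participant(store: List[EventRecord], participant: str) -> List[EventRecord]:
    """Honor a removal request by dropping every record for one participant."""
    return [r for r in store if r["participant"] != participant]

store = remove_participant(store, "P07")
print(store)  # only P12's records remain
```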
Images shared here, like the ones above and below, are of members of the research team and not of actual subjects.
Here you can watch Fridman’s presentation in full from TMC Connect: