On the consumer-facing side, we have seen decent progress with the latest Autopilot 2.0 software updates, but Tesla is also adding capabilities in the background, running in what is known as “shadow mode.”
A recent look at the new capabilities in the background of Autopilot gives us a glimpse of what Tesla is working on.
You remember our friend ‘verygreen’? He is a “Tesla hacker” who has brought us some revealing looks at Autopilot’s debugging mode, giving insight into the backend of Tesla’s semi-autonomous system and what Autopilot 2.0 can see with its eight cameras.
Needless to say, the data that he has been able to get out of his Tesla has provided a lot of interesting insights into Tesla’s Autopilot system for the owner community.
Now he is back at it again, and his latest discovery is that Tesla’s computer vision system can now recognize increasingly difficult corner cases.
He recently shared his findings in a long but fascinating thread about Tesla’s Autopilot 2.5 capabilities (which are similar to Autopilot 2.0 in many ways):
It looks like Tesla’s neural net is able to recognize construction zones and that the automaker is now using its fleet to recognize and categorize obstacles and corner cases for Autopilot to navigate.
Other data uncovered in the thread also shows strong similarities between Tesla’s neural net and Google’s GoogLeNet, which the tech giant uses to recognize and index images.
Tesla’s new Director of AI and Autopilot Vision, Andrej Karpathy, was behind the GoogLeNet neural net when he worked at Google. One of the main differences appears to be that Tesla’s neural net is using a higher resolution than Google’s computer vision system.
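For readers curious about what that comparison means in practice, here’s a minimal sketch of a single GoogLeNet-style “Inception” block in PyTorch. The channel counts and the 1280×960 input resolution are illustrative assumptions on our part, not Tesla’s actual parameters; the point is the parallel multi-scale convolutions that define the GoogLeNet family, fed a much larger frame than the 224×224 images GoogLeNet was originally designed around.

```python
import torch
import torch.nn as nn

class InceptionBlock(nn.Module):
    """One GoogLeNet-style Inception block: parallel 1x1, 3x3 and 5x5
    convolutions plus a pooled branch, concatenated along the channel axis."""
    def __init__(self, in_ch, c1, c3_red, c3, c5_red, c5, pool_proj):
        super().__init__()
        self.branch1 = nn.Conv2d(in_ch, c1, kernel_size=1)
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, c3_red, kernel_size=1),   # 1x1 "reduce" layer
            nn.ReLU(inplace=True),
            nn.Conv2d(c3_red, c3, kernel_size=3, padding=1),
        )
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, c5_red, kernel_size=1),   # 1x1 "reduce" layer
            nn.ReLU(inplace=True),
            nn.Conv2d(c5_red, c5, kernel_size=5, padding=2),
        )
        self.branch_pool = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, pool_proj, kernel_size=1),
        )

    def forward(self, x):
        # Each branch preserves height/width, so the outputs stack cleanly.
        return torch.cat(
            [self.branch1(x), self.branch3(x),
             self.branch5(x), self.branch_pool(x)], dim=1)

# GoogLeNet was built for 224x224 inputs; the camera frames discussed in the
# thread are reportedly much larger. 1280x960 here is a hypothetical example.
frame = torch.randn(1, 3, 960, 1280)                # batch, RGB, height, width
block = InceptionBlock(3, 64, 96, 128, 16, 32, 32)  # illustrative channel counts
print(block(frame).shape)                           # torch.Size([1, 256, 960, 1280])
```

The design trick, and the reason higher-resolution input matters, is that each block looks at the same frame at several scales at once, so small, distant objects and large, nearby ones can be picked up by the same layer.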
Earlier this month, Tesla announced that “the foundation” of its vision neural net is now “right”. The company said that it would enable a “rapid rollout” of additional features:
“Now that the foundation of the Tesla vision neural net is right, which was an exceptionally difficult problem, as it must fit into far less computing power than is typically used, we expect a rapid rollout of additional functionality over the next several months and are progressing rapidly towards our goal of a coast-to-coast drive with no one touching the controls.”
There’s also anecdotal evidence that Tesla has again increased its data-gathering efforts over the last few updates in order to upload more footage from its fleet to its servers. The automaker uses this data to train its computer vision system, which could explain the recent additions of capabilities in the background.
Tesla first started uploading significant amounts of data from its Autopilot 2.0 fleet back in May, but several owners have since reported another notable uptick in uploads over the last few months.
Electrek’s Take
My interpretation of all those tidbits of information is that Tesla is strengthening its vision neural net, which acts as the backbone of the software behind Autopilot, and is now feeding it a lot more data.
In recent months, Tesla seems to have fallen behind other companies in the race to fully self-driving cars. GM has been expanding its self-driving Chevy Bolt EV program and Waymo is putting truly driverless vans on the road.
While that was happening, Tesla was still trying to rebuild its first-generation Autopilot capability using its own computer vision system on new hardware.
But Tesla’s advantage has always been its already giant fleet of more than 250,000 vehicles, most of which now have some level of Autopilot hardware. Karpathy compared Tesla’s fleet to ‘a large, distributed, mobile data center’ from which the company can crowdsource its Autopilot data.
With the Tesla vision neural net now reportedly “right” and the floodgates of data apparently open, we might be about to witness some significant improvements in Autopilot 2.0 capabilities, which Tesla claims will eventually enable full self-driving.
What do you think? Let us know in the comment section below.