Watch a Tesla drive in Paris through the eyes of Autopilot

Tesla’s approach to autonomous driving is conceptually simple – albeit hard to execute. Its vehicles have some radar and ultrasonic sensor technology, but the goal is to drive based on computer vision, with cameras feeding into a neural net system that is able to recognize what it sees – much like a human.

We are now getting our best look yet at what Autopilot can see with a drive through Paris.

Earlier this summer, we got a rare look at what Tesla Autopilot can see and interpret, thanks to our favorite Tesla hacker ‘verygreen’ and the help of TMC user DamianXVI.

They have now teamed up again for an even better look at what Autopilot can see, after buying an Autopilot Hardware 2.5 computer on eBay that happened to be a fully unlocked developer version.

It enabled them to learn more about how Tesla’s Autopilot works, but the duo still emphasized that their visualization is not perfect and only includes what they understand of what Autopilot sees:

Green started a thread on the Tesla subreddit about their latest project and wrote:

“So keep in mind our visualizations are not what Tesla devs see out of their car footage and we do not fully understand all the values either (though we have decent visibility into the system now as you can see). Since we don’t know anybody inside Tesla development, we don’t even know what sort of visual output their tools have.”

He describes how the data is visualized in the video below:

“The green fill at the bottom represents “possible driving space”, lines denote various detected lane and road boundaries (colors represent different types, actual meaning is unknown for now). Various objects detected are enumerated by type and have coordinates in 3D space and depth information (also 2D bounding box, but we have not identified enough data for a 3D one), correlated radar data (if present) and various other properties.”
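To make that description more concrete, here is a minimal sketch of how one frame of this decoded output could be structured, written in Python. Everything in it – the class names, fields, and units – is our own hypothetical framing of the categories Green lists (driving space, typed boundary lines, and objects with a 3D position, depth, a 2D bounding box, and optional correlated radar data); it is not Tesla’s actual data format, which even the hackers say they only partially understand:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class BoundaryLine:
    """A detected lane/road boundary; types are color-coded in the video,
    but what each type actually means is still unknown."""
    line_type: int
    points: list[tuple[float, float]]     # polyline in image coordinates

@dataclass
class DetectedObject:
    """One detection: enumerated type, 3D position and depth, a 2D bounding
    box (no 3D box identified yet), plus radar data when a radar return
    correlates with the vision detection."""
    obj_type: str                          # e.g. "car", "truck", "pedestrian"
    position_3d: tuple[float, float, float]
    depth_m: float
    bbox_2d: tuple[int, int, int, int]     # x, y, width, height in pixels
    radar: Optional[dict] = None           # correlated radar data, if present

@dataclass
class Frame:
    """Everything the visualization draws for a single camera frame."""
    driving_space: list[tuple[float, float]]   # the green "drivable" polygon
    boundaries: list[BoundaryLine] = field(default_factory=list)
    objects: list[DetectedObject] = field(default_factory=list)
```

In the videos, the driving-space polygon produces the green fill at the bottom of the frame, while the boundary lines and per-object boxes are drawn on top of the dashcam footage.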

Here’s the video of a Tesla being driven (by a human) in Paris, along with what Autopilot sees during the drive based on Green and Damian’s research (the vehicle is running firmware version 18.34):

Green noted a few interesting events to look out for in the video, from the perspective of what Autopilot can see:

  • 01:17 – traffic cones shape the drivable space
  • 01:31 – construction equipment recognized as a truck (shows they have quite a deep library of objects they train against?). Though it’s not perfect – we saw some common objects go undetected too, notably a pedestrian pushing a cart (not present in this video)
  • 02:23 – false positive: a container mistaken for a vehicle
  • 03:31 – a pedestrian in a red(dish?) jacket is not detected at all (note to self: don’t wear red jackets in Norway and California, where Teslas are everywhere)
  • 04:12 – one example of the lines showing a right turn even though there are no road markings for it
  • 06:52 – another false positive: a poster mistaken for a pedestrian
  • 08:10 – another, more prominent example of a left-turn lane shown with no actual road markings
  • 09:25 – a close-up of a cyclist
  • 11:44 – a roller skater
  • 14:00 – we nearly got into an accident with the car on the left – AP did not warn us
  • 19:48 – 20 pedestrians at once (not that there was a shortage of them before, of course)

They also produced a similar video for highway driving, which is still the main intended use of the driver-assist system under the current software:

A few other highlighted points by Green:

  • 3:55 – even on “highways”, the gore area is apparently considered drivable? While technically true, it’s probably not something that should be attempted.
  • 4:08 – a gore zone surrounded by bollards correctly shows up as undrivable.
  • 11:47 – you can see a bit of a hill crest with the path over it (Paris is apparently not very hilly, so this is hard to demonstrate with this particular footage)

Now here’s something even more interesting. While those videos are a cool visualization of what Autopilot can see versus what we (humans) can see, they aren’t a great way to understand the data Autopilot actually makes decisions on, since the system acts on its detections, not on the background video images.

So they removed those images in another video to truly show us what Autopilot can see:
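In terms of the hypothetical sketch above, the only difference between the two kinds of video is the canvas the detections are drawn onto – the dashcam frame, or nothing at all. A rough rendering helper (again, our own illustration using the Pillow imaging library, not anything from the hackers’ actual tools) might look like this:

```python
from PIL import Image, ImageDraw  # pip install Pillow

# Arbitrary colors for the (still unexplained) boundary-line types.
PALETTE = ["red", "blue", "orange", "white"]

def render(frame, background=None, size=(1280, 960)):
    """Draw one Frame's detections over dashcam footage (`background`),
    or over black – the latter being 'what Autopilot sees'."""
    img = background.copy() if background is not None else Image.new("RGB", size, "black")
    draw = ImageDraw.Draw(img)
    # Green fill for the "possible driving space" polygon.
    draw.polygon(frame.driving_space, fill=(0, 128, 0))
    # Lane/road boundaries, color-coded by type.
    for line in frame.boundaries:
        draw.line(line.points, fill=PALETTE[line.line_type % len(PALETTE)], width=3)
    # 2D bounding boxes with type and depth labels.
    for obj in frame.objects:
        x, y, w, h = obj.bbox_2d
        draw.rectangle([x, y, x + w, y + h], outline="yellow", width=2)
        draw.text((x, y - 12), f"{obj.obj_type} {obj.depth_m:.0f}m", fill="yellow")
    return img
```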

Electrek’s Take

As previously mentioned, it’s not a perfect look, but it’s the best we have for now.

For those unimpressed by the system, Green pointed out that this is truly the first third-party verification of an autonomous driving/driver-assist system, aside maybe from Comma.ai, which open-sourced its software.

Companies like Waymo and Cruise Automation have released similar videos, but they are curated – like Tesla’s own similar video for the launch of Autopilot 2.0, which we now know didn’t say much about the system’s actual capabilities.

Hopefully, Tesla and those other companies become more open about the development of their autonomous driving systems.

I think it would go a long way toward helping people better understand these systems and their limitations. Eventually, as the systems get better, it would also lead to people trusting them when they can actually be trusted.

What do you think? Let us know in the comment section below.
