
Here’s what Tesla’s Autopilot 2.0 can see with its 8 cameras

Yesterday, we reported on a Tesla owner hacking into Autopilot’s debugging mode – giving insights into the back-end of Tesla’s semi-autonomous system.

Now another Tesla owner used data extracted from the hack to get a good view of what Tesla’s second generation Autopilot is seeing.

With the second generation Autopilot, Tesla is betting on computer vision and basing the system on cameras.

It has 8 cameras placed all around the Model S and Model X. Three of them are front-facing: a narrow forward camera with a range of 250m, a mid-range camera at 150m, which acts as the main camera, and a wide forward camera with a shorter range of 60m.

There are also cameras on each side of the front fenders and B-pillars – and finally, there’s a rear-facing one.
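For reference, here is a minimal Python sketch (the names and structure are our own, purely illustrative) summarizing the camera layout as reported, with ranges listed only where the article states them:

```python
# Autopilot 2.0 camera layout as reported; names are illustrative, not Tesla's.
# Ranges are included only where stated in the article.
CAMERAS = {
    "narrow_forward": {"location": "front", "range_m": 250},
    "main_forward":   {"location": "front", "range_m": 150},  # acts as the main camera
    "wide_forward":   {"location": "front", "range_m": 60},
    "left_fender":    {"location": "front fender, left",  "range_m": None},  # not stated
    "right_fender":   {"location": "front fender, right", "range_m": None},  # not stated
    "left_b_pillar":  {"location": "B-pillar, left",      "range_m": None},  # not stated
    "right_b_pillar": {"location": "B-pillar, right",     "range_m": None},  # not stated
    "rear":           {"location": "rear", "range_m": None},                 # not stated
}

assert len(CAMERAS) == 8  # eight cameras in total
```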

Tesla also uses radar and GPS data, but the cameras are becoming increasingly important as Tesla activates more of them with each software update.

Using the data obtained from TMC member ‘verygreen’ as reported yesterday, another TMC member, Bjornb, overlapped the images gathered by the cameras to give us a great look at what Autopilot can see:

As you can see, the cameras are feeding black-and-white images. The cameras themselves can record in color and in high definition, but black-and-white images can be processed more quickly.
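As a rough illustration of why that matters (this is our own sketch, and the file name is a placeholder), a grayscale frame carries about a third of the raw data of its color original:

```python
import cv2

# Read a color frame (placeholder file name) and convert it to grayscale.
frame = cv2.imread("camera_frame.png")           # shape (H, W, 3), BGR color
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # shape (H, W), single channel

# One channel instead of three: roughly a third of the bytes to move and process.
print(frame.nbytes, gray.nbytes)
```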

Tesla uses NVIDIA’s Drive PX 2 onboard computer to process the images. Eventually, it could dedicate more of that computing power to its neural net and to processing color images, which would help with things like recognizing different types of traffic lights (though color is not the only differentiator). But Tesla CEO Elon Musk has also said that the onboard computer might need to be upgraded to support Level 4 or 5 autonomous driving, which would require reading traffic lights.

Tesla made the computer easily swappable for precisely that reason.

The way Bjornb arranged the footage gives us a really good idea of how the fields of view of the front-facing cameras work:

The ones in the bottom corners are from the cameras in the B-pillars. The only ones missing are those from the front fenders.

In a demonstration of its self-driving software built on the Autopilot 2.0 hardware suite, Tesla also showed the angles from those cameras:


That’s the vision part of the information that Tesla’s Autopilot uses to make driving decisions. It also uses radar readings to create point cloud maps of surrounding objects.
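As a toy illustration of that idea (our own sketch, not Tesla’s actual pipeline), radar returns expressed as range and azimuth can be projected into 2D points around the car:

```python
import math

def radar_to_points(returns):
    """Convert (range_m, azimuth_rad) radar returns into (x, y) points
    in a simple car-centered frame: x forward, y to the left."""
    points = []
    for rng, azimuth in returns:
        x = rng * math.cos(azimuth)
        y = rng * math.sin(azimuth)
        points.append((x, y))
    return points

# Example: three detections roughly ahead of the car.
print(radar_to_points([(45.0, 0.0), (30.0, 0.1), (60.0, -0.05)]))
```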

Right now, it’s limited to keeping the vehicle in its lane and watching out for front and side collisions, but it’s not hard to see how a good computer vision system could do more with this visual information.

We have to keep in mind that what it needs to beat is a set of human eyes.


