Tesla’s announcement yesterday has a ton of interesting implications for the near future of the company, and really for whole industries. We will get into those today or by the end of the week, but let’s start by looking at the “product update” itself: the addition of new hardware in all new Teslas rolling off the line in Fremont as of earlier this week.
Tesla’s new Autopilot hardware suite consists of 8 cameras, 1 radar, ultrasonic sensors and a new supercomputer to support its ‘Tesla Vision’ end-to-end image processing software and neural net, which is the real star of the show here.
What was really “unexpected by most” here is that Tesla ditched the original Autopilot 2.0 suite, which would have enabled Level 3/4 autonomy, and instead jumped directly to a suite that can eventually support Level 5 full autonomy.
The new suite still features ultrasonics and a forward-looking radar, but as we previously reported, full autonomy requires 360-degree camera coverage, which is the main addition to the new sensor suite:
One thing that carried over from the original Autopilot 2.0 suite is the triple front-facing cameras:
Main Forward Camera: Max distance 150m with 50° field of view
Narrow Forward Camera: Max distance 250m with 35° field of view
Wide Forward Camera: Max distance 60m with 150° field of view
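For a rough sense of what those fields of view mean in practice, the stated FOV and max distance can be turned into a lateral coverage width using a simple pinhole-camera approximation. This is a back-of-the-envelope sketch based only on the numbers above, not anything from Tesla’s spec:

```python
import math

# Camera specs from the announcement: (max distance in meters, field of view in degrees)
cameras = {
    "main_forward":   (150, 50),
    "narrow_forward": (250, 35),
    "wide_forward":   (60, 150),
}

def coverage_width(distance_m, fov_deg):
    """Lateral width of the view at a given distance, assuming a simple pinhole model."""
    return 2 * distance_m * math.tan(math.radians(fov_deg / 2))

for name, (dist, fov) in cameras.items():
    print(f"{name}: roughly {coverage_width(dist, fov):.0f} m wide at {dist} m")
```

The trade-off is visible right away: the narrow camera sees far but covers a thin slice of road, while the wide camera covers a huge swath close to the car.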
The front-facing cameras are housed in the rearview mirror cutout like the Autopilot camera in the first generation of the system:
Tesla worked hard for a seamless integration of the cameras around the car and it shows — or actually it doesn’t show.
The side cameras in the front fenders are actually integrated inside the Tesla badges that were already there in the previous version of the car. For the side cameras in the center of the car, Tesla made a small indentation in the center pillars between the doors.
Here are pictures of each new camera:
Snow and ice are not a problem, since the cameras are equipped with heaters.
All those cameras feed ‘Tesla Vision’, the automaker’s end-to-end image processing software with neural net. We published an exclusive report on ‘Tesla Vision’ earlier this month with more details on the system: ‘Tesla is about to increase its lead in semi-autonomous driving w/ ‘Tesla Vision’: computer vision based on NVIDIA’s parallel computing platform.’
In hindsight, we should have reported “fully autonomous” instead of “semi-autonomous”; we didn’t expect the system to have 360-degree camera coverage.
Aside from the cameras, Tesla Vision is really the main upgrade to Tesla vehicles announced yesterday. Unlike the first generation of the system in partnership with Mobileye, there’s no third-party software involved here. The vision processing system is built on a Tesla-developed neural net running on Nvidia’s CUDA parallel computing platform.
As we reported, the system was expected to run on Nvidia hardware, and while the company recently launched a few platforms built especially for self-driving cars, Tesla went with a less expensive solution: an Nvidia Titan GPU.
Tesla says the new onboard computer is over 40 times more powerful than the previous generation, and it runs on a separate channel from the computers powering Tesla’s media center unit and instrument cluster.
The biggest bummer is for current Tesla owners. The system is not retrofittable. While something like the front-facing cameras shouldn’t be too difficult to install, the side cameras would be a nightmare to retrofit and Musk said that it would likely cost more than just buying a new car.
While we are on the subject of retrofits, Musk did say that the new vehicles will eventually be able to receive upgrades to the onboard Autopilot computer itself, since access to it has been made relatively easy.
That’s pretty much it for the hardware upgrades announced yesterday. We will look at the future capabilities of the new system in upcoming articles.