It’s arguable, of course, but third-party tests have shown Tesla Autopilot outperforming other semi-autonomous or advanced driver assistance systems (ADAS) from Mercedes, Hyundai and Cadillac by a wide margin. That, and the fact that Tesla is gathering more real-world data from its vehicles than anyone else, would seem to indicate that Tesla is a leader in the field, if not the leader.
Now we learn that Tesla could be about to significantly increase its lead with ‘Tesla Vision’. Electrek has learned more details about the new program, which is an end-to-end computer vision system built with NVIDIA’s CUDA, a parallel computing platform by the graphics processing unit (GPU) maker.
Tesla first confirmed the existence of ‘Tesla Vision’ last month when responding to allegations made by Mobileye, Tesla’s former partner for the vision system of the Autopilot. At the time, we didn’t know much about the new product other than that it is meant to replace Mobileye’s contribution to the Autopilot.
Mobileye supplies Tesla with its EyeQ3 chip and image processing system, on top of which the automaker adds some of its own software. There had been a lot of speculation over how and with what Tesla would replace the system even before the ugly breakup between the two companies.
It was rumored earlier this year that Tesla could develop its own SoC, or system on a chip, after the automaker hired high-profile microprocessor engineer Jim Keller as Vice President of Autopilot Hardware Engineering, and half a dozen world-class chip architects followed him to Tesla.
Sources close to the ‘Tesla Vision’ program told Electrek that it’s actually not an SoC, but an end-to-end computer vision framework built with NVIDIA’s CUDA, a parallel computing platform. The system will be able to take raw data from camera sensors and run its own image processing to control Tesla vehicles. It will be combined with deep neural net training and work in tandem with Tesla’s recently released radar processing technology.
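To make the description above concrete, here is a rough sketch of what such an end-to-end pipeline could look like in principle: raw camera frames are processed, a vision model produces an estimate, that estimate is fused with radar, and a control decision follows. To be clear, every stage name, number, and threshold below is our own invention for illustration — nothing here reflects Tesla's actual design.

```python
import numpy as np

def image_processing(raw_frame):
    # Hypothetical first stage: normalize raw 8-bit sensor values to [0, 1].
    return raw_frame.astype(np.float32) / 255.0

def vision_network(frame):
    # Stand-in for a trained deep neural net: we fake an "obstacle
    # distance" estimate from simple pixel statistics (made-up logic).
    return float(frame.mean()) * 100.0  # meters, purely illustrative

def fuse_with_radar(vision_distance, radar_distance):
    # Toy sensor fusion: trust whichever sensor reports the nearer object.
    return min(vision_distance, radar_distance)

def control_decision(distance_m, braking_threshold_m=30.0):
    # Brake if the fused estimate says an object is too close.
    return "brake" if distance_m < braking_threshold_m else "cruise"

# Simulated mid-gray camera frame plus a radar reading of 25 m.
raw = np.full((480, 640), 128, dtype=np.uint8)
dist = fuse_with_radar(vision_network(image_processing(raw)), radar_distance=25.0)
print(control_decision(dist))  # radar reports 25 m < 30 m -> "brake"
```

The point of the sketch is the architecture, not the math: the camera path and the radar path stay independent until a late fusion step, which matches the report that Tesla Vision will "work in tandem" with the radar processing technology rather than replace it.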
Tesla put a small army of PhDs in computer vision and “hardcore” software engineers on the program, including one of the Microsoft scientists who developed the HoloLens and experts in simulating human perception.
We are told that the system is unlike anything in vehicles on the market today and that it will act as the basis on which Tesla will be able to develop increasingly advanced autonomous features – climbing the ladder of the levels of automation.
Tesla representatives didn’t immediately respond to a request to comment on this report.
CUDA is a parallel computing platform and application programming interface (API) model that allows third parties to program graphics processing units (GPUs) to tackle large, highly parallel problems – in this case, image processing.
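Why does image processing map so well onto a GPU? CUDA's model is to launch thousands of lightweight threads, each running the same small "kernel" function on its own data element — and in an image, each pixel can usually be processed independently. As a rough CPU-side analogue of that pattern (this is NumPy vectorization standing in for a real CUDA kernel, not actual CUDA code), here is a per-pixel grayscale conversion:

```python
import numpy as np

def grayscale_kernel(rgb):
    # The same weighted luminance sum is applied to every pixel at once,
    # just as a CUDA kernel would apply it to each pixel in parallel,
    # one GPU thread per pixel. Weights are the standard Rec. 601 ones.
    weights = np.array([0.299, 0.587, 0.114])
    return (rgb @ weights).astype(np.uint8)

# A pure-red 480x640 test frame.
frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[..., 0] = 255
gray = grayscale_kernel(frame)
print(gray.shape, gray[0, 0])  # (480, 640) 76
```

On a GPU, every pixel's weighted sum runs concurrently instead of being looped over, which is why camera pipelines — millions of identical, independent per-pixel operations per frame — are exactly the kind of workload CUDA was built for.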
Since it only works on CUDA-enabled GPUs, which are made by NVIDIA, everything points toward Tesla’s next-generation Autopilot hardware suite using NVIDIA hardware. Tesla already uses two NVIDIA Tegra processors in the Model S and X’s Media Control Unit (MCU) and Instrument Cluster (IC).
Last month, Elon Musk confirmed that Tesla was reaching the limit of the processing power in its vehicles after the introduction of the software update v8.0 and the new radar processing technology. As we previously reported, Tesla is expected to introduce more processing power in its vehicles with the introduction of a new suite of hardware for Autopilot 2.0.
At that time, we reported on new wiring harnesses for more sensors being installed in the vehicles going into production, but no actual sensors have been added since the introduction of the first-generation Autopilot in October 2014.
Tesla can have its pick of the numerous GPUs offered by NVIDIA, but interestingly, the company has also started offering dedicated platforms for semi-autonomous and autonomous driving solutions.
The latest platform released by NVIDIA is the Drive PX 2, which the company describes as “the world’s first AI supercomputer for self-driving cars”. Its computing power is comparable to about 150 MacBook Pros and the company estimates that one can support a level 4 self-driving system while two would be necessary for a fully self-driving level 5 vehicle.
It is liquid-cooled and requires a 250-watt power input.
Some of those platforms could be too expensive for Tesla’s application.
As a side note, NVIDIA CEO Jen-Hsun Huang is a longtime Tesla fan and owner. He owns several Tesla vehicles, including a ‘Founder Series’ Tesla Model X P90D. He also hand-delivered the world’s first AI supercomputer in a box — an NVIDIA DGX-1 — to OpenAI in San Francisco in August. Tesla CEO Elon Musk is of course a sponsor of OpenAI.
Some of these platforms have started shipping only in the past few weeks, which makes the timing particularly interesting since Elon Musk announced yesterday that Tesla is planning an event for a product unveiling on October 17. While we can’t confirm that the Autopilot 2.0/Tesla Vision will be part of the event, it looks like a real possibility.
Regardless of what hardware ‘Tesla Vision’ ends up running on, it is important to note that, just like with the introduction of the first-generation Autopilot, Tesla is expected to release the hardware in its vehicles first and then gradually enable more advanced autonomous features through over-the-air updates. The company is still releasing significant Autopilot updates on two-year-old hardware.