Tesla is working on its new Autopilot Hardware 3, a new neural net computer that it claims will be the ‘world’s most advanced computer for autonomous driving’.
The company has now leaked some info about the new self-driving computer in its latest software update.
The current Autopilot computer in Tesla’s vehicles is powered by Nvidia GPUs.
CEO Elon Musk says that it is capable of processing 200 frames per second, while Tesla’s Hardware 3 computer, which is optimized to run neural nets, will be able to handle 2,000 frames per second with redundancy.
That’s a tenfold improvement, enabled in part by Tesla designing its own system-on-a-chip (SoC) and pairing it with a custom computer.
We now learn more about the new computing architecture thanks to Tesla hacker verygreen, known for unearthing details hidden in the company’s software updates, who has uncovered a lot of interesting information in the latest release.
“We believe the new hardware is based on Samsung Exynos 7xxx SoC, based on the existence of ARM A72 cores (this would not be a super new SoC, as the Exynos SoC is about an Oct 2015 vintage). HW3 CPU cores are clocked at 1.6GHz, with a MALI GPU at 250MHz and memory speed 533MHz.”
The hardware is combined with a Tesla PCI-Ex device named “TRIP”, which they believe works as the neural net (NN) accelerator.
They think that the Hardware 3 computer is equipped with at least 2 of those TRIP devices, though it could be as many as 4.
verygreen wrote about the device:
“The “TRIP” device obviously is the most interesting one. A special firmware that encompasses binary NN (neural net) data is loaded there and then eventually queried by the car vision code. The device runs at 400MHz. Both “TRIP” devices currently load the same NNs, but possibly only a subset is executed on each?”
They managed to look at the software that Tesla is running on ‘TRIP’.
The hacker continued:
“The “TRIP” software seems to be a straight list of instructions aligned to 32 bytes (256 bits). Programs operate on two types of memory, one for input/output and one for working memory. The former is likely system DRAM and the latter internal SRAM. Memory operations include data loading, weight loading, and writing output. Program operations are pipelined with data loads and computations interleaved and weight fetching happening well upstream from the instructions that actually use those weights. Weights seem to be compressed from the observation that they get copied to an internal region that is substantially larger than the source region with decompression/unpacking happening as part of the weight loading operation. Intermediate results are kept in working memory with only final results being output to shared memory.”
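The description above sketches a simple accelerator program model: a flat stream of fixed-width instructions operating over two memory spaces. As a rough illustration of what walking a 32-byte-aligned instruction stream looks like, here is a toy parser in Python; the opcode names and the instruction layout are entirely hypothetical, since the real TRIP instruction set is not public:

```python
INSTR_SIZE = 32  # instructions aligned to 32 bytes (256 bits), per verygreen

# Hypothetical opcode names for illustration only; the real TRIP ISA is unknown.
OPCODES = {0: "LOAD_DATA", 1: "LOAD_WEIGHTS", 2: "COMPUTE", 3: "WRITE_OUTPUT"}

def parse_program(blob: bytes):
    """Walk a firmware blob as a flat list of fixed-width instructions."""
    assert len(blob) % INSTR_SIZE == 0, "program must be 32-byte aligned"
    for off in range(0, len(blob), INSTR_SIZE):
        word = blob[off:off + INSTR_SIZE]
        # Assume the first byte selects the operation; the rest would be operands.
        yield off, OPCODES.get(word[0], "UNKNOWN")

# Two memory spaces, mirroring the description: shared DRAM for input/output,
# on-chip SRAM for intermediate working data.
dram = bytearray(1024)  # input/output (system memory)
sram = bytearray(256)   # working memory (internal)

# A toy 3-instruction program: fetch weights, compute, write final results out.
demo = bytes([1] + [0] * 31) + bytes([2] + [0] * 31) + bytes([3] + [0] * 31)
for offset, op in parse_program(demo):
    print(f"{offset:04x}: {op}")
```

Note how the toy program fetches weights before the compute step that uses them, echoing the observation that weight loads happen well upstream of the instructions consuming those weights.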
DamianXVI managed to create a graphical visualization of the data flow for “some of the networks observed in the binaries”.
Interestingly, they found the visualization to be very similar to GoogLeNet, and it’s not the first time Tesla’s neural nets have shown strong similarities with Google’s GoogLeNet, which the tech giant uses to recognize and index images.
Tesla’s Director of AI and Autopilot Vision, Andrej Karpathy, was behind the GoogLeNet neural net when he worked at Google.
The automaker aims to bring the new Hardware 3 computer to production during the first half of 2019 and gradually deploy new software to those vehicles through over-the-air software updates.