
Tesla AI4 vs. NVIDIA Thor: the brutal reality of self-driving computers

The race for autonomous driving has three fronts: software, hardware, and regulation. For years, we’ve watched Tesla try to brute-force its way to “Full Self-Driving (FSD)” with its own custom hardware, while the rest of the automotive industry is increasingly lining up behind NVIDIA.

Now that we know Tesla’s new AI5 chip is delayed and won’t be in vehicles until 2027, it’s worth comparing the two most dominant “self-driving” chips today: Tesla’s latest Hardware 4 (AI4) and NVIDIA’s Drive Thor.

Here’s a table comparing the two chips with the best specs I could find. greentheonly’s teardown was particularly useful. If you spot anything you think is inaccurate, please don’t hesitate to reach out:

| Feature / Specification | Tesla AI4 (Hardware 4.0) | NVIDIA Drive Thor (AGX / Jetson) |
| --- | --- | --- |
| Developer / Architect | Tesla (in-house) | NVIDIA |
| Manufacturing Process | Samsung 7nm (7LPP class) | TSMC 4N (custom 5nm class) |
| Release Status | In production (shipping since 2023) | In production since 2025 |
| CPU Architecture | ARM Cortex-A72 (legacy) | ARM Neoverse V3AE (server-grade) |
| CPU Core Count | 20 cores (5× clusters of 4 cores) | 14 cores (Jetson T5000 configuration) |
| AI Performance (INT8) | ~100–150 TOPS (dual-SoC system) | 1,000 TOPS (per chip) |
| AI Performance (FP4) | Not supported / not disclosed | 2,000 TFLOPS (per chip) |
| Neural Processing Unit | 3× custom NPU cores per SoC | Blackwell Tensor Cores + Transformer Engine |
| Memory Type | GDDR6 | LPDDR5X |
| Memory Bus Width | 256-bit | 256-bit |
| Memory Bandwidth | ~384 GB/s | ~273 GB/s |
| Memory Capacity | ~16 GB typical system | Up to 128 GB (Jetson Thor) |
| Power Consumption | Est. 80–100 W (system) | 40 W – 130 W (configurable) |
| Camera Support | 5 MP proprietary Tesla cameras | Scalable, supports 8 MP+ and GMSL3 |
| Special Features | Dual-SoC redundancy on one board | Native Transformer Engine, NVLink-C2C |

The most striking difference right off the bat is the manufacturing process. NVIDIA is throwing everything at Drive Thor, using TSMC’s cutting-edge 4N process (a custom 5nm-class node). This allows them to pack in the new Blackwell architecture, which is essentially the same tech powering the world’s most advanced AI data centers.  


Tesla, on the other hand, pulled a move that might surprise spec-sheet warriors. Teardowns confirm that AI4 is built on Samsung’s 7nm process. This is mature, reliable, and much cheaper than TSMC’s bleeding-edge nodes.

When you look at the compute power, NVIDIA claims a staggering 2,000 TFLOPS for Thor. But there’s a catch. That number uses FP4 (4-bit floating point) precision, a new format designed specifically for the Transformer models used in generative AI.  

Tesla’s AI4 is estimated to hit around 100–150 TOPS (INT8) across its dual-SoC redundant system. On paper, it looks like a slaughter, but Tesla made a very specific engineering trade-off that tells us exactly what was bottlenecking its software: memory bandwidth.
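To put those headline numbers on a comparable footing, here’s a rough back-of-envelope comparison in Python using only the INT8 figures from the table above. Taking the midpoint of Tesla’s estimate and splitting it evenly across the two SoCs are my assumptions, not disclosed specs:

```python
# Back-of-envelope comparison at matched INT8 precision, using the table's figures.
# Real-world throughput depends on memory bandwidth, model architecture, and utilization.

thor_int8_tops_per_chip = 1000               # NVIDIA's INT8 claim for a single Thor chip
tesla_ai4_int8_tops_system = 125             # assumed midpoint of the ~100-150 TOPS estimate
tesla_ai4_int8_tops_per_soc = tesla_ai4_int8_tops_system / 2  # assumed even split across the dual SoCs

print(f"Thor vs AI4 (whole system): {thor_int8_tops_per_chip / tesla_ai4_int8_tops_system:.0f}x")
print(f"Thor vs AI4 (per SoC):      {thor_int8_tops_per_chip / tesla_ai4_int8_tops_per_soc:.0f}x")
# -> roughly 8x at the system level and ~16x per SoC, before FP4 even enters the picture
```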

Tesla switched from LPDDR4 in HW3 to GDDR6 in HW4, the same power-hungry memory you find in gaming graphics cards (GPUs). This gives AI4 a massive memory bandwidth of approximately 384 GB/s, compared to Thor’s 273 GB/s (on the single-chip Jetson config) using LPDDR5X.  
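Both bandwidth figures fall out of simple arithmetic: bus width times per-pin data rate. A minimal sketch, assuming the commonly quoted per-pin rates for these memory types (12 Gbps for GDDR6, 8,533 MT/s for LPDDR5X), reproduces both numbers:

```python
def peak_bandwidth_gb_s(bus_width_bits: int, data_rate_gbps_per_pin: float) -> float:
    """Peak bandwidth (GB/s) = (bus width in bits / 8) * per-pin data rate (Gbps)."""
    return bus_width_bits / 8 * data_rate_gbps_per_pin

# Tesla AI4: 256-bit GDDR6 at an assumed 12 Gbps per pin
print(peak_bandwidth_gb_s(256, 12.0))    # 384.0 GB/s

# NVIDIA Thor (single-chip Jetson config): 256-bit LPDDR5X at an assumed 8.533 Gbps per pin
print(peak_bandwidth_gb_s(256, 8.533))   # ~273 GB/s
```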

This suggests Tesla’s vision-only approach, which ingests massive amounts of raw video from high-res cameras, was starving for data.

Based on Elon Musk’s comments that Tesla’s AI5 chip will have 5x the memory bandwidth of AI4, it sounds like memory bandwidth might still be Tesla’s bottleneck.
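Taking that “5x” comment at face value and applying it to AI4’s ~384 GB/s gives a rough sense of where AI5 would land. This is speculative arithmetic on a public comment, not a confirmed specification:

```python
# Rough projection of AI5 memory bandwidth from Musk's "5x" comment (speculative, not a spec).

ai4_bandwidth_gb_s = 384      # AI4 estimate: 256-bit GDDR6
thor_bandwidth_gb_s = 273     # Thor, single-chip Jetson config with LPDDR5X

ai5_bandwidth_gb_s = 5 * ai4_bandwidth_gb_s
print(f"Implied AI5 bandwidth: ~{ai5_bandwidth_gb_s} GB/s "
      f"(~{ai5_bandwidth_gb_s / 1000:.1f} TB/s), "
      f"about {ai5_bandwidth_gb_s / thor_bandwidth_gb_s:.0f}x Thor's Jetson config")
# -> ~1920 GB/s (~1.9 TB/s), about 7x
```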

Here is where Tesla’s cost-cutting really shows. AI4 is still running on ARM Cortex-A72 cores, an architecture that is nearly a decade old. They bumped the core count to 20, but it’s still old tech.  

NVIDIA Thor, meanwhile, uses the ARM Neoverse V3AE, a server-grade CPU explicitly designed for the modern software-defined vehicle. This allows Thor to run not just the autonomous driving stack, but the entire infotainment system, dashboard, and potentially even an in-car AI assistant, all on one chip.

Thor has found many takers among Tesla’s EV competitors, including BYD, Zeekr, Lucid, and Xiaomi.

Electrek’s Take

There’s one thing that is not in the table: price. I would assume that Tesla wins on that front, and that was a big part of the point of the project: Tesla developed a chip that didn’t exist and that it needed.


It was an impressive feat, but it doesn’t make Tesla a clear leader in self-driving silicon.

Tesla is maxing out AI4. It now uses both SoCs for compute rather than keeping one as a backup, making it less likely to achieve the redundancy you need to deliver Level 4–5 autonomy.

Meanwhile, we don’t have a solution for HW3 yet, and AI5 is apparently not coming to save the day until 2027.

By then, there will likely be millions of vehicles on the road with NVIDIA Thor processors.
