
Understanding the fatal Tesla accident on Autopilot and the NHTSA probe


The first reported death in a Tesla Model S crash while Autopilot was activated has been shaking the Tesla and self-driving car community since yesterday. The tragic accident happened on May 7th in Florida, but we only learned about it yesterday, when Tesla revealed that the U.S. National Highway Traffic Safety Administration (NHTSA) had launched a preliminary evaluation into Tesla’s Autopilot system.

We don’t pretend to know everything about the accident, but based on the information released by the Florida Highway Patrol, Tesla, and NHTSA, here is our best understanding of the events and of the possible impact of the regulator’s probe on the Autopilot.

On May 7th at 3:40 p.m. on U.S. 27A in Williston, Florida, 45-year-old Joshua Brown was killed when his Tesla Model S went under the trailer of an 18-wheel semi and the roof of his car was torn off by the impact.

The Florida Highway Patrol described the events:

When the truck made a left turn onto NE 140th Court in front of the car, the car’s roof struck the underside of the trailer as it passed under the trailer. The car continued to travel east on U.S. 27A until it left the roadway on the south shoulder and struck a fence. The car smashed through two fences and struck a power pole. The car rotated counter-clockwise while sliding to its final resting place about 100 feet south of the highway.

Here’s our bird’s-eye visualization of what happened, based on the information released by the police:

Tesla added its own understanding of the events in a blog post:

“What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S.”

The fact that the Autopilot system didn’t detect the trailer as an obstacle and prompt emergency braking or steering is what worries a lot of people. Autopilot’s forward-facing sensors consist of a camera, a radar, and a few ultrasonic sensors.

It’s understandable that the camera couldn’t detect the trailer as an obstacle, given Tesla’s explanation of the “white side of the tractor trailer against a brightly lit sky” and the trailer’s “high ride height”, but what is less understandable is why the forward-facing radar didn’t detect it.

Tesla CEO Elon Musk offered an explanation:

Our understanding here is that the high ride height of the trailer confused the radar into thinking it was an overhead road sign. It’s obviously not ideal, and the system should be refined to better distinguish overhead road signs from obstacles in the vehicle’s path, but at the end of the day, the reason why the Autopilot didn’t stop the Model S is not really the matter at hand here.
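To make the “overhead road sign” explanation concrete, here is a purely hypothetical Python sketch of how a radar filter might discard returns it estimates to sit well above the roadway. The clearance cutoff, sensor height, and function names are our own illustrative assumptions, not Tesla’s implementation.

```python
import math

# Hypothetical sketch (not Tesla's code): one way a driver-assist stack might
# discard radar returns that it estimates to be overhead structures (signs, bridges).

OVERHEAD_CLEARANCE_M = 3.5  # assumed cutoff: returns above this height are treated as overhead
SENSOR_HEIGHT_M = 0.6       # assumed mounting height of the radar above the road

def estimated_height_m(range_m: float, elevation_deg: float) -> float:
    """Estimate a return's height above the road from its range and elevation angle."""
    return SENSOR_HEIGHT_M + range_m * math.sin(math.radians(elevation_deg))

def should_brake_for(range_m: float, elevation_deg: float) -> bool:
    """Brake only for returns that appear to sit in the vehicle's path, not overhead."""
    return estimated_height_m(range_m, elevation_deg) < OVERHEAD_CLEARANCE_M

# A return from the high side of a crossing trailer can land above the cutoff and be ignored,
# while a return from a car's bumper clearly falls below it.
print(should_brake_for(range_m=40.0, elevation_deg=5.0))   # ~4.1 m -> treated as overhead, no braking
print(should_brake_for(range_m=40.0, elevation_deg=0.5))   # ~0.9 m -> in-path obstacle, brake
```

Under these assumptions, raising the cutoff reduces false braking under bridges and signs but also raises the odds of ignoring a tall, high-riding obstacle, which is the trade-off Musk’s explanation alludes to.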

NHTSA’s preliminary evaluation into Tesla’s Autopilot is meant to determine whether the system worked as expected, but the Autopilot is not expected to avoid these types of accidents. It is a semi-autonomous system meant to reduce the driver’s workload, and the driver remains responsible, just as with regular cruise control:

When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.”

One thing NHTSA could take issue with is whether Tesla and the Autopilot created a false sense of security that led owners to overestimate the system’s capabilities.

Tesla has been criticized in the past for this alleged “false sense of safety”. Google Brain founder Andrew Ng called Tesla’s Autopilot ‘irresponsible’ after an accident in Switzerland in which a Model S driver hit a van while expecting the Autopilot to stop.

Was this potential false sense of safety a factor in the fatal accident? It’s not clear at the moment and we will not speculate, but here are the facts so far. A month prior to his accident, Joshua Brown credited the Autopilot system with saving him in a near miss caught on video.

The truck driver involved in the crash claims that Brown was watching a movie at the time of the accident, and the police confirmed that they found a portable DVD player in the car, but they didn’t say whether it was in use during the crash. It’s also worth noting that while the Autopilot didn’t apply the brake, neither did Brown.

Regardless of whether Brown was distracted during the accident, which we won’t know until all the information becomes public, one thing we know for sure is that he was very aware of the system’s limits and of the danger of becoming too comfortable with it. He wrote in a YouTube comment following the posting of one of his videos:

“A bigger danger at this stage of the development is getting someone too comfortable. You really do need to be paying attention at this point. This is early in the development and the human should be ready to intervene if [the Autopilot] can’t do something. I talked in one of the other comments about the blind spots of the current hardware. There are some situations it doesn’t do well in which is okay. It’s not an autonomous car and they are learning HUGE amounts of data about the car doing the driving. I’m happy [to] help train it. I’m VERY curious what version 2 of the hardware will be like and what [it] will enable.”

It looks like Brown didn’t agree with Ng’s suggestion that Tesla shouldn’t have shipped the Autopilot because of the potential false sense of security. It’s a hard call whether to stop tens of thousands of Tesla owners from using a system that is both useful and can increase safety when properly used, just because a few people could potentially abuse it.

As Tesla pointed out in its blog post about the tragic accident, this is “the first known fatality in just over 130 million miles where Autopilot was activated”, compared to a fatality every 94 million miles in the US and one every 60 million miles worldwide. By that measure, the Autopilot is statistically safer than the average vehicle.
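For readers who want that comparison in a common unit, here is a short Python sketch that converts the mileage figures cited above into fatalities per billion miles; it is the same simple comparison Tesla made, using the article’s numbers.

```python
# Quick comparison of the mileage figures cited above, converted to a common unit.
AUTOPILOT_MILES_PER_FATALITY = 130e6   # Tesla: first known fatality in ~130 million Autopilot miles
US_MILES_PER_FATALITY = 94e6           # US average: one fatality every ~94 million miles
WORLD_MILES_PER_FATALITY = 60e6        # worldwide average: one every ~60 million miles

def fatalities_per_billion_miles(miles_per_fatality: float) -> float:
    """Convert 'miles per fatality' into 'fatalities per billion miles'."""
    return 1e9 / miles_per_fatality

for label, miles in [("Autopilot", AUTOPILOT_MILES_PER_FATALITY),
                     ("US average", US_MILES_PER_FATALITY),
                     ("World average", WORLD_MILES_PER_FATALITY)]:
    print(f"{label}: {fatalities_per_billion_miles(miles):.1f} fatalities per billion miles")
```

On these figures, Autopilot miles work out to roughly 7.7 fatalities per billion miles, versus about 10.6 for the US average and 16.7 worldwide.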

The NHTSA probe is still only at the preliminary evaluation stage, the first step before a full investigation, which in turn is what can lead to a recall. A preliminary evaluation doesn’t necessarily mean that an investigation will follow.
