Tesla filed new comments with the California Public Utilities Commission that amount to a quiet admission: its “Robotaxi” service still relies on both in-car human drivers and domestic remote operators to function. Rather than downplaying these dependencies, Tesla leans into them — arguing that its multi-layered human supervision model is more reliable than Waymo’s fully driverless system, pointing to the December 2025 San Francisco blackout as proof.
The filing, submitted February 13 in CPUC Rulemaking 25-08-013, reveals the massive operational gap between what Tesla calls a “Robotaxi” and what Waymo actually operates as one.
Tesla’s system: drivers in the car, operators on standby
The filing makes clear just how many layers of human involvement Tesla’s ride-hailing service still requires. Tesla operates its service using TCP (charter-party carrier) vehicles equipped with FSD (Supervised), a Level 2 ADAS that, by definition, requires a licensed human driver behind the wheel at all times, actively monitoring and ready to intervene.
On top of that in-car driver, Tesla describes a parallel layer of remote operators. The company states it employs domestically located remote operators in both Austin and the Bay Area, and that these operators are subject to DMV-mandated U.S. driver’s licenses, “extensive background checks and drug and alcohol testing,” and mandatory training. Tesla frames this as a redundancy system: remote operators in two cities backing up the in-car drivers.
That’s two layers of human supervision for a service Tesla markets as a “Robotaxi.”
Compare that to Waymo. Waymo’s vehicles have no driver in the car. Waymo uses remote assistance operators who can provide guidance to vehicles in ambiguous situations, but the vehicle drives itself. Waymo’s remote operators don’t control the car; they confirm whether it’s safe to proceed in edge cases like construction zones or unusual road conditions.
The difference is fundamental. Tesla’s system requires a human to drive the car and has remote operators as backup. Waymo’s system drives itself and has remote operators as backup. Tesla is essentially describing a staffing-intensive taxi service with driver-assist software. Waymo is describing an autonomous transportation network.
The blackout: Tesla’s strongest — and most misleading — argument
Tesla opens its filing by pointing to the December 20, 2025, San Francisco power outage. During the blackout, Waymo’s AVs began requesting confirmation from remote assistance operators to verify it was safe to proceed through darkened intersections. The volume of requests overwhelmed Waymo’s remote assistance team, and vehicles stopped in traffic lanes and intersections.
Tesla states its own ADAS-equipped TCP vehicles “were not impacted by the outage and completed all rides that day without interruption.” The reason is obvious: Tesla had human drivers behind the wheel who could navigate darkened intersections the way any human driver would.
This is technically true, and it’s also exactly the point. Tesla’s vehicles weren’t affected because they aren’t autonomous. A human was driving. Tesla referred to the humans behind the wheels of its “Robotaxi” service in the Bay Area as “drivers” half a dozen times in the new CPUC filing.
That’s not a technological advantage; it’s a concession that Tesla’s system can’t really be compared to Waymo’s.
Waymo acknowledged the December 20 incident was a resourcing failure: its remote assistance team couldn’t keep up with the surge of requests. It was a bad day for Waymo’s operations, no question. But Waymo’s fleet handles 450,000 fully driverless rides per week across six cities. Tesla’s Austin operation, eight months in, runs roughly 42 vehicles with below 20% availability and is still almost entirely human supervised.
In SAE automated-driving terms, Waymo operates a Level 4 system, while Tesla itself reiterates that it is using a Level 2 system:
In the absence of such a request, however, Tesla does not support requiring drivers, TNCs, or TCPs to obtain or reaffirm affirmative consent from a passenger prior to engaging a Level 2 technology while in passenger service. Under the SAE Level 2 taxonomy, the driver’s role specifically includes “[d]etermin[ing] whether/when engagement and disengagement of the driving automation system is appropriate” in a given situation. Requiring drivers to request and receive or reobtain affirmative passenger consent prior to engaging an ADAS would be antithetical to the functionality of a Level 2 system. The decision to engage a Level 2 system should remain within the province of the driver and should not be contingent upon obtaining the passenger’s consent.
Much of the filing revolves around Tesla pushing back against Waymo’s suggestions that Tesla shouldn’t refer to its system as “self-driving” or “robotaxi” and that drivers should clearly communicate the control conditions to customers.
The marketing contradiction Tesla can’t escape
The filing also reveals the bind Tesla has put itself in on marketing. Tesla argues forcefully that its Level 2 ADAS vehicles should remain outside the scope of this AV rulemaking entirely, agreeing with Lyft that they aren’t “autonomous vehicles” under California law.
At the same time, Tesla is fighting Waymo’s proposal to prohibit Level 2 services from using terms like “driverless,” “self-driving,” or “robotaxi.” Tesla calls this proposal “wholly unnecessary,” arguing that existing California advertising laws already cover misleading marketing.
But read those two positions together: Tesla is telling regulators its vehicles are not autonomous and require human drivers, while simultaneously fighting for the right to keep calling the service a “Robotaxi.” Tesla wants the legal protections of being classified as a supervised Level 2 system and the marketing benefits of sounding like a fully autonomous one.
A California judge already ruled in December 2025 that Tesla’s marketing of “Autopilot” and “Full Self-Driving” violated the state’s false advertising laws.
Tesla also pushes back on Waymo’s proposal to require per-ride rider consent for Level 2 ADAS trips, arguing customers already consent when they sign up for the app and that requiring per-trip reaffirmation would be “antithetical to the functionality of a Level 2 system.” Under SAE definitions, the driver — not the passenger — decides when to engage ADAS.
Tesla’s safety claims remain unverified
In the filing, Tesla claims that when FSD (Supervised) is engaged, a driver is “seven times less likely to be involved in an accident,” citing its own Vehicle Safety Report.
Tesla’s self-reported safety data has been widely criticized for comparing primarily highway ADAS miles against the national average across all road types, and for only counting crashes that trigger airbags or seatbelt pretensioners. Tesla has never released comprehensive disengagement data for FSD, the kind of data Waymo publishes regularly and that the DMV requires of autonomous vehicle permit holders.
That’s another consequence of Tesla’s Level 2 classification: because it’s not an “autonomous vehicle” under California law, Tesla isn’t subject to the same reporting requirements as Waymo. It gets to make safety claims without submitting the data that would verify them.
Electrek’s Take
This regulatory spat between Tesla and Waymo is proving to be quite revealing.
This filing is the clearest illustration yet of the gap between Tesla’s “Robotaxi” marketing and its actual operations. In its own words, Tesla describes a service that requires a trained human driver in every vehicle, backed by teams of remote operators, using a Level 2 driver-assist system that legally requires constant human supervision. Then it calls it a Robotaxi.
Waymo operates actual driverless vehicles at a scale of hundreds of thousands of rides per week. Tesla operates a supervised ride-hailing service with about 42 cars in Austin and runs a ride-hailing service with drivers using FSD (Level 2) in the Bay Area. These are not competing approaches to the same problem; they are fundamentally different products at fundamentally different stages of development.
The blackout argument is revealing because of what Tesla isn’t saying. Yes, having a human driver makes you more resilient to power outages. It also means you haven’t solved autonomous driving. Tesla is framing a limitation as an advantage, and the CPUC filing makes that framing explicit in a way that Tesla’s marketing carefully avoids.
We’ll be watching to see how the Commission handles the marketing question. Tesla wants to be regulated as Level 2 and marketed as autonomous. At some point, it has to pick one.