A Waymo vehicle was spotted blocking ambulance traffic in downtown Austin.

On Sunday, March 1, 2026, a vehicle operated by autonomous taxi company Waymo was filmed blocking an ambulance in downtown Austin following the tragic 6th Street shooting. The vehicle blocked emergency traffic for one to two minutes, and law enforcement told local news that the delay had no impact on treatment.

But it easily could have.

Strokes occur when a blood clot stops blood from reaching the brain. Blood carries oxygen, and without it, vital tissues, including the brain, begin to die. A 10-minute delay in treatment for a stroke could result in the death of roughly 19 million neurons. This loss of brain matter is why many stroke victims face facial sagging from partial paralysis or declines in cognitive ability.

In even more catastrophic scenarios, a slight delay in medical response can mean the difference between a close call and death.

On Sunday, there was no stroke victim, and police were fortunately able to override Waymo’s systems and steer the vehicle out of the way. But this incident is just one in a line of incidents demonstrating Waymo’s inability to operate safely and consistently on city streets. In February, one of the company’s cars was filmed passing school buses that had their stop signs extended. Combined with the recent revelation that Waymo uses remote workers in the Philippines to “help guide” its driverless cars, these incidents showcase serious problems not just with Waymo but with autonomous vehicles in general, and they raise major questions about liability.

Simple Human Reasoning

Since its debut in 2019, Waymo has received criticism for not answering what many consider the ultimate self-driving vehicle quandary: “Whose life is more important, the driver or the pedestrian?”

Waymo has thus far avoided answering this question (for many obvious reasons). On Sunday, the vehicle froze as it attempted a U-turn while trying to avoid harming pedestrians, damaging property, or violating traffic laws. While it is true that some drivers freeze when confronted with the unknown, most humans in this situation would at the very least have understood the police’s instructions to move, or completed a technically illegal maneuver to get out of the way (like pulling onto the sidewalk). Because of the way Waymo’s systems operate, however, this “simple deduction” is simply not available. Artificial intelligence systems weigh competing priorities against one another, and on Sunday, the car’s system got stuck in the middle.

Unfortunately, when multi-ton chunks of metal get “stuck in the middle,” it’s usually a human that pays the price.

Negligence, Product Liability and Owning Mistakes

In a typical car wreck, the process is straightforward: facts first, fault second, money third. The facts are established using available evidence, legal fault is determined by reconstructing what happened, and compensation is awarded to the injured party based on Texas’ modified comparative fault system.

With autonomous vehicles, however, the discussion shifts to product liability and “operational negligence.” In other words, no individual human is usually considered responsible for what can be the loss of human life; instead, blame is assigned to software decision-making, hardware limitations, fleet monitoring, and other such issues. There is no single federal liability standard for AV crashes, and this contrast matters because it highlights just how difficult determining fault for an AV crash can be.


Ultimately, autonomous vehicles cause wrecks, hit pedestrians, fail real-world tests, violate traffic laws, and have the potential to impact emergency responses, but they aren’t forcibly taken off the road while the issue is resolved or further testing is completed. Waymo’s software recall in December was voluntary and covered only roughly 3,000 vehicles. Waymo maintains that its vehicles have 91% fewer crashes with serious injuries and 92% fewer crashes with pedestrian injuries, according to NPR.

To be clear: Waymo’s vehicles are an absolute miracle of engineering. The systems for ingesting, digesting, and using data embedded in each Waymo vehicle are nothing short of amazing. But there is a serious problem with the tech industry at large, which appears content to use the general public for testing purposes. Tech companies do not seem willing to recall their products and shut down operations, even when it becomes clear that those products endanger the public.

OpenAI and Waymo: Two Sides of the Same Coin

OpenAI has been in litigation since August 2025 over its possible role in the suicides of multiple teenagers. The company’s chatbot has also been linked to heightened depression, withdrawal, and a new condition called “AI psychosis.” Just like Waymo, OpenAI released a product into the world without fully understanding the impact it would have. Just like Waymo, OpenAI used the public to test its new product.

However, when those injured seek compensation, the companies involved obscure blame by pointing to traditional product liability cases and skirt responsibility by blaming the end user. Waymo and OpenAI raise many of the same questions when it comes to personal injury cases:

  • What happens if the bot tells me to do something harmful?
  • What happens if the car brakes suddenly and causes an accident?
  • What happens if the bot provides the wrong medical advice, resulting in a worsening injury?
  • What happens if the car blocks EMS vehicles, worsening the caller’s injury?

The prevailing big tech business model for over a decade has been to “move fast and break things.” In theory, that means developing and iterating quickly to stay ahead of the competition. But when the product in question can directly affect a person’s safety, that motto becomes less a sign of innovation and more a sign of negligence and a willingness to hurt customers. Waymo has moved fast indeed, and now spans more than 10 cities, including in Texas, Florida, Georgia, and Arizona. But the ride along the way has not been smooth, and just like OpenAI, Waymo has shown a willingness to let its product damage the public’s property and safety, even if it isn’t by design.

Conclusion

The issue here is not that autonomous vehicles make mistakes. Humans are not perfect drivers, and no product is 100% reliable. The issue is that human lives are put directly at risk when vehicles from Waymo, Tesla, and other companies offering Full Self-Driving (FSD) systems are tested on public streets. And when these vehicles cause crashes or delay traffic (including emergency traffic), the response is usually just a slap on the wrist for the company. Human drivers (who have not consented) are used as a test group for autonomous vehicles and are regularly injured as a result.

Personal injury cases are already complicated. They become even more complicated when we have to track down the responsible company and then litigate partial fault between hardware limitations and software bugs. Full compensation is also harder to reach in AV cases, given that law firms now have to hire digital forensics experts, programming experts, and others knowledgeable enough to speak on autonomous vehicles.

If you have been injured in an autonomous vehicle accident involving Waymo, Tesla, Nuro, Wayve, or any other company offering FSD or AV taxi services, you need to contact an experienced personal injury attorney today. Hilda Sibrian has served clients in Houston, Texas, for over 22 years in cases involving auto accidents, semi-truck accidents, and other injuries suffered as a pedestrian or on the road. Hilda Sibrian serves the Houston metropolitan area, including Sugar Land, Missouri City, La Porte, Beaumont, Pasadena, The Woodlands, The Heights, Bellaire, Kingwood, Baytown and, of course, Houston proper.

Call our office today or fill out our online contact form for a free consultation.