Source: Wired
This incident exposes a critical gap in autonomous vehicle deployment: the difference between solving technical problems in controlled environments and adapting to real-world legal and safety requirements that humans take for granted. The months-long failure to implement a basic traffic law reveals that AI systems don't naturally "understand" context or the hierarchy of safety rules; they require explicit, painstaking retraining for each edge case, which suggests self-driving cars may need far more human oversight during deployment than the industry has acknowledged. This pattern will likely repeat across jurisdictions and scenarios until the industry fundamentally rethinks how it validates safety-critical behaviors before public launch, not after.