Austin Rose uses a National Review Online column to urge regulators to avoid an overreaction to the first major setback for self-driving cars.
Tesla’s much-lauded “autopilot” feature is considered by many to be a vision of the future: a properly equipped car that can manage its own speed in traffic, change lanes, steer, and even park without human guidance. The technology has been hailed as a boon to the environment, worker productivity, and traffic congestion, among other things.
Now, it faces its first big test, after a fatal crash in Florida. …
… It is perfectly natural to want to prevent such unfortunate fatalities, but the instinct to do so ought not to be acted on without careful consideration.
There seem to be two types of attempts to intervene in the market for automated cars. The first is a broad rejection of the technology: “That man would not have died if he hadn’t been lulled into a false sense of security by an immature technology; we ought to clamp down on the industry until we can be assured of safety for all.” Though this may sound like an obvious straw man for a more nuanced argument, it unfortunately is not.
The simple response to the Luddite position is that a flawed autopilot system on the road is safer than a world with no self-driving vehicles. Autopilot has produced one death in 130 million miles on the road, for a fatality rate substantially lower than the national one and less than half the global one. Elon Musk, CEO of Tesla, has estimated that universal application of autopilot technology would have saved more than half a million lives last year. If the underlying motive for those who would curtail the autopilot feature is a belief in the value of human life, why aren’t they agitating for the fastest possible expansion of the technology, bugs and all?