Quote:
Originally Posted by UNKNOWN_370
Autopilot tech is not one I embrace so readily either... here's why:
1. In keeping with your posted links, yes, they are responsible for killing people.
2. Even if they were able to, ya know, not plow into things and murder the person in the car, eventually whatever passes for its AI would have to make a decision on who or what gets hit in an unavoidable collision.
Soooo... say you blow a tire, and coming to a completely safe stop is physically impossible. Now the ECU has to decide whether to plow into a tree, a ditch, another person, etc.
How is that problem to be worked out? How is that decision to be made? What algorithm can be held accountable (or forgivable) for that decision?
This situation would make the "unintended acceleration" problem look like a bad joke.
Whether or not a human driver makes a good vs. bad (or lucky vs. unlucky) maneuver, a human response to an emergency is still likely to be quite different from (if not "better" than) even a highly sophisticated AI program -- and we aren't quite at that level of tech yet...
By making the emergency response an AI problem, you would need some very clear, consensus-backed ethical guidelines programmed in at minimum, but ultimately it still means a "robot" would have to decide who lives or dies.
Even Rick Deckard wouldn't be able to reconcile that one very easily... he'd have to retire the car.
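To make that concrete, here is a minimal sketch (assuming a naive weighted-harm scheme; every name, number, and weight is hypothetical and reflects no real autopilot) of what "programming in" that decision would literally look like. The code itself is trivial; the contested part is who picks the weights.

# Hypothetical sketch only: made-up names and weights, not any real autopilot logic.
from dataclasses import dataclass

@dataclass
class Outcome:
    label: str                 # what the car would hit ("tree", "ditch", "pedestrian", ...)
    harm_to_occupant: float    # estimated 0..1 harm to the person in the car
    harm_to_others: float      # estimated 0..1 harm to people outside the car

# The ethical guideline lives entirely in these two numbers: someone must decide,
# in advance, whose harm counts for how much.
OCCUPANT_WEIGHT = 1.0
OTHERS_WEIGHT = 1.0

def choose_outcome(options: list[Outcome]) -> Outcome:
    """Pick the option with the lowest weighted total harm."""
    return min(
        options,
        key=lambda o: OCCUPANT_WEIGHT * o.harm_to_occupant
                      + OTHERS_WEIGHT * o.harm_to_others,
    )

if __name__ == "__main__":
    # Blown-tire scenario from above: no option is safe, yet the function must return one.
    options = [
        Outcome("tree", harm_to_occupant=0.8, harm_to_others=0.0),
        Outcome("ditch", harm_to_occupant=0.5, harm_to_others=0.0),
        Outcome("pedestrian", harm_to_occupant=0.1, harm_to_others=0.9),
    ]
    print(choose_outcome(options).label)  # -> "ditch" with these made-up numbers

Change either weight and the same function picks a different victim, which is exactly the accountability problem: the "algorithm" is just arithmetic, and the moral choice was made by whoever set the constants.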
If they can at least get robot cars to stop crashing into things that are otherwise easily avoidable, I'd be more open to the idea... it could prevent a lot of accidents by sleepy/intoxicated/distracted/etc drivers, but first we need to see evidence that such cars won't go out of their way to cause accidents instead of avoiding them.