That's an entirely different topic worthy of discussion on its own. So far, no robot has been given the ability to pull the trigger on its own; it's always a human handler making that decision. As the tech evolves, where we draw that line will remain an open question. Some have argued, and I tend to agree, that humans are unwilling to hand the decision to fire over to a robot. Besides the moral reasons, there's the liability reason.

We're already seeing liability questions come up with Tesla's Autopilot and similar systems, where using autonomous driving features has resulted in human deaths. Who is at fault? The driver, who didn't intervene to stop the car? The programmer, for failing to make the car see the problem and respond accordingly? The manufacturer, for not giving the software the situational awareness it needs to judge how to react? The company itself, for expecting customers to trust the autopilot system right up until it results in fatalities? I haven't really looked at the case law, but as autopilot systems get advanced enough that cars don't even have steering wheels anymore, this issue only becomes harder to address.