“A computer can never be held accountable, therefore a computer must never make a management decision.”
– IBM Training Manual, 1979
We’re going so backwards…
The thing about taking responsibility is that it isn’t actually about punishing a potential wrongdoer.
It’s about ensuring a safe outcome, as far as realistically possible. If a fireproof door automatically seals itself airtight during a fire and stops the fire that way, that door is taking responsibility too, even though it doesn’t have a single living cell in it.
A computer’s inability to be held accountable is a key feature for anyone wishing to use AI for nefarious purposes: the harm happens, but there is no one left to answer for it.