We don’t jail the gun for murder.
I feel like you’re missing the point.
They’re not saying to jail computers, they’re saying beware of political leaders using computers to abdicate responsibility.
We shut down companies for it though, and what AI vendors are doing is basically selling the ability to turn job roles into “accountability sinks”, where your true value is in taking the fall for AI when it gets it wrong (…enough that someone successfully sues).
If you want to put it in gun terms: The AI vendors are selling a gun that automatically shoots at some targets but not others. The targets it recommends are almost always profitable in the short term, but not always legal. You must hire a person to sit next to the gun and stop it from shooting illegal targets. It can shoot 1000 targets per minute.
Sounds like a fun job if the acceptable failure rate is like, 50%
We also don’t give the murderer a free pass because they used a gun.
A tool is a tool, and the person who designed it or used it is responsible depending on why it caused a negative outcome. I know you clarified it later but it is so stupidly obvious I wanted to add to it.
deleted by creator
No, I agreed with you in a slightly different way.
Great reading comprehension.
deleted by creator
The gun isn’t running software in the background when humans are away either. See my other comment, when shit goes sideways, blame the programmers, engineers, and now the CEOs that decided to jam screwy AI up our collective asses…
We don’t jail gun manufacturers either.
When a tool is used to kill a human, the user of the tool is guilty.
So when a kid commits suicide because the Generative AI LLM agreed with him in a harmful way?
Edit: In before someone says something about how the gun manufacturers still shouldn’t be held accountable.
Gen AI LLMs in this instance are products working as intended/designed, and are being used in a way that the manufacturer knows is harmful and admits is damaging. They also admit that there are no laws to safeguard persons against how the AI is designed, implemented, etc., and these things don’t even have warning labels.
Guns by contrast have lots of laws involving how and where they can be sold and accessed, as well as by whom, and with respect to informing the user of the dangers. You don’t sign a EULA or a TOS when you buy a gun, waiving your rights to sue. You don’t agree to only arbitration.
If a child shot themself, I’d blame the parents.
Keep 'em coming. I can do this all day.
I’ll agree with you there, I shot myself in the arm when I was only 3 with a pellet gun. My dad realized his mistake and kept all guns away from me until age 10, when he took me out to shoot some bottles and cans and taught me proper gun safety.
Yes, that might have been an earlier childhood lesson than many parents would agree to, but he was proper about what and when he taught me. Like, aside from the obvious rules (keep the gun on safety, and never point it at anyone or anything unless you intend to use it), who thinks of things like not leaning on a rifle with the barrel in the dirt? The dirt can and will clog the barrel and cause the gun to explode!
Anyways, back on point of AI…
Most parents aren’t just up and giving their kids guns, but major corporations are shoving this AI shit up everyone’s asses, as much as they can anyways, knowing full well that one AI model says 1+2+3=15 and another AI model is suggesting that people suffering from pain use heroin…
So what’s the answer, avoid AI? Well fuck Google then…
They knew what I meant and chose to ignore the meaning to avoid the question.
Why can guns be something that can be responsibly used but AI cannot?
One can simply decide to never use a gun.
You don’t get that option with AI these days if you simply want to Google something; they force you to use it. Google, Alexa, Siri, hell, our own US government is now using Grok!
Is the gun in the metaphor held to your head? Google… Or else?
deleted by creator
You are so close to getting it…
Okay, I’ll bite.
What am I missing here NewPerspective?
When a human dies because a tool was designed with needless danger, the manufacturer is often prosecuted.
But again, I think you’re missing the point.
Be a lot cooler if you did
When guns have no legal uses, this is a direction we should go. Until then, this holds people accountable for other people misusing their product.
In a chain of responsibility, the last person who’s accountable should be punished, not the first.
deleted by creator
Blame the programmers? Yeah, no. The software is owned by the company, blame them.
So, I think that the open source developers should file a class action lawsuit for stealing their code.
Go ahead, ask Linus Torvalds, I bet he’s not exactly happy with the current trajectory…