• Prove_your_argument@piefed.social
    6 hours ago

    How many people decided to end their lives using methods they googled?

    I’m sure Google has led to more such losses than any AI company… so far, anyway. And that’s before even counting, beyond search results, the societal impact of so many things they do overtly and covertly for themselves and other organizations.

    Not trying to justify anything; billionaire-owned everything is terrible, with few exceptions. In the early days of web search, many controversies like this came up, but the reality is that a screwdriver is a great tool even if someone can lose a life to one. The same can be said of these tools.

    • starman2112@lemmy.world
      4 hours ago

      Google doesn’t tell you that killing yourself is a good idea, or that you shouldn’t talk to anyone else about your suicidal ideation.

    • Manjushri@piefed.social
      5 hours ago

      How many people has Google convinced to kill themselves? That is the relevant question. Looking up the means to do the deed on Google is very different from being talked into doing it by an LLM that you believe you can trust.

  • jayambi@lemmy.world
    10 hours ago

    I’m asking myself how we could track how many people wouldn’t have died by suicide without consulting an LLM? That would be the more interesting number. And how many lives did LLMs save? A kill/death ratio, so to speak?

    • JoshuaFalken@lemmy.world
      9 hours ago

      A kill/death ratio - or rather, a kill/save ratio - would be rather difficult to obtain, and more difficult still to interpret: you couldn’t say whether it is good or bad based solely on the ratio.

      Fritz Haber is one example of this that comes to mind. He was awarded a Nobel Prize a century ago for the chemistry behind synthetic fertilizer, used today in a quarter of food growth. A few years earlier he had weaponized chlorine gas during the First World War, and his work was later used in the creation of Zyklon B.

      By ratio, Haber is surely a hero, but when considering the sheer number of dead left in his wake, it is a more complex question.

      This is one of those things that makes me almost hope for an afterlife where all information is available from which truth may be derived. Who shot JFK? How did the pyramids get built? If life’s biggest answer is forty-two, what is the question?

    • morto@piefed.social
      9 hours ago

      For me, the suicide-related data is so hard to measure and so open to debate that I’d treat it separately, or not include it at all, when using a death count as an argument against LLMs, since it’s an opening for derailing the debate.

  • Dethronatus Sapiens sp.@calckey.world
    4 hours ago

    @[email protected] @[email protected]

    Do you know what kills, too? When a person finds no one who can truly take all the time needed to understand them. When a person invests too much time in expressing themselves through deep human means only to be met with a deafening silence… When someone puts several hours into each artwork they draw, just for it to fall into Internet oblivion. Those things can kill, too, yet people couldn’t care less about the suicides (not just biological; sometimes it’s an epistemological suicide, when the person simply stops pursuing a hobby) of amateur artists who aren’t “influencers” or someone “relevant enough” for people.

    How many of those who sought parroting algorithms did it out of complete social apathy from others? How many of them tried to reach humans before resorting to LLMs? Oh, it’s none of our business, amirite?

    So, yeah, LLMs kill, and LLMs are disgusting. What nobody seems to be tally-counting is how human apathy also kills: not by action, but by inaction. The same kind of people who do the LLM death counting are as loud as a concert about LLMs but as quiet as a desert night about unknown artists and other people trying to be understood out there across the Web. And I’m not (just) talking about myself here; I don’t even consider myself an artist. However, I can’t help but notice this going on across the Web.

    Yes, go ahead and downvote me all the way to the abyss for telling the truth about the Anti-AI movement.

    • lemonskate@lemmy.world
      4 hours ago

      Is the argument here that anti-AI folks are hypocrites because people can be bad too sometimes? That’s a remarkably childish and simplistic take.

      • tomalley8342@lemmy.world
        3 hours ago

        I’ll try to exercise my “assume good faith” muscle here because I think the above poster is at least genuine about what they are posting: I believe this poster wishes that the people who oppose the proliferation of AI at the cost of human connection would “put their money where their mouth is” by reaching out to the people that this poster feels are unfairly ignored.

      • Dethronatus Sapiens sp.@calckey.world
        3 hours ago

        @[email protected]

        There were two quite long, substantive paragraphs in my initial comment before I started naming names.

        When someone ends up suicidal after resorting to LLMs, it’s the final part of a bigger picture: a bigger picture of indifference from other people, including mental health professionals and suicide prevention hotlines.

        That’s what I meant by the first paragraph of my initial comment. Your reply, reducing my whole argument, only exemplifies the very situation I described with “When a person finds no one who can truly take all the time needed to understand them”.

        Last but not least, “because people can be bad too sometimes” isn’t a justification: if people killed themselves after taking instructions from LLMs, which they resorted to after finding no one to really understand them (not even suicide prevention hotline volunteers), then it’s not just the LLM and the corporation behind it that are to blame (yes, they surely must be blamed, but not only them), but a whole society that failed them. And that will never be part of the statistics.

        • lemonskate@lemmy.world
          3 hours ago

          So then your counter to someone bringing attention to the fact that LLMs are actively telling people (vulnerable people, for reasons that you’ve pointed out) to kill themselves is that it isn’t the singular contributing factor?

          I get what you’re saying here, and I think everyone else does too. I don’t want to be entirely dismissive and say “no shit”, but I’m curious what you want or expect out of this. Do you take offense at people pushing back at harmful LLMs? Do you want people to care more about creating a kinder society? Do you think these things are somehow incompatible?

          Of course LLMs aren’t driving people to suicide in a vacuum; no one is claiming that. Clearly, though, when taken within the larger context of the current mental health crisis, having LLMs that encourage people to commit suicide is a bad thing that we should absolutely be making noise about.

          • Dethronatus Sapiens sp.@calckey.world
            2 hours ago

            @[email protected]

            So then your counter to someone bringing attention to the fact that LLMs are actively telling people[…] is that it isn’t the singular contributing factor?

            This, too. But also the fact that the Anti-AI movement rarely (if ever) promotes legit human art; their whole business seems to be talking against AI, solely. Which, again, is not something I oppose (as I said earlier, AI does have lots of cons, although I’m also capable of seeing its pros). But when I see many accusatory posts from Anti-AI people, such as “I’ll check your content for AI patterns” (with a greater likelihood of content from ND ppl like me being “flagged” as AI), and then see those same ppl blaming AIs for something whose causes are way deeper and unseen, I feel compelled to speak up about the matter, especially when the subject also touches on my own lived experiences, which I’m aware aren’t limited to myself, as there are/were lots of ppl who went through similar situations.

            Do you take offense at people pushing back at harmful LLMs?

            No, but the oftentimes accusatory tone coming from many Anti-AI ppl does trigger things such as “impostor syndrome”, where I start doubting myself. But it’s not just something about myself.

            Do you want people to care more about creating a kinder society?

            I’m not really sure what I want, exactly. But, yeah, maybe a kinder society, if that is even possible at this point in the Anthropocene.

            I remember a time when the web used to be a place of creatively rich bulletin boards. At that time, ppl used to be… I don’t know… less aggressive? At least that’s the perception I have when I look back at the Web’s past.

            We, collectively (me included), became more aggressive toward one another as time passed and the web became less a space for creativity and more an arm of the “market” octopus.

            I’ve seen the web slowly get dominated by corps; now everything is some kind of “us v. them” war across all spectra, from right to left, top to bottom, bottom-up, sideways… As wars detonate our essences, we’re left with just… I mean, just look around; you may see it yourself.

            Of course LLMs aren’t driving people to suicide in a vacuum; no one is claiming that

            Sometimes it feels like much of the Anti-AI movement is. As if the AI were “literally killing ppl”.

            having LLMs that encourage people to commit suicide is a bad thing

            It’s not a trivial thing to get LLMs to “encourage suicide”; I’ve seen it myself whenever I tried to input suggestive, shady topics. To me, those things often parrot the same “suicide prevention hotlines”, which work like common analgesic medications (they may relieve immediate pain but can’t do a thing about the root causes).
            But even when LLMs do output suicidal hints, this isn’t something out of a vacuum. As others argued throughout the thread, search engines can also lead to suicidal hints. Banning it altogether could lead to a Streisand effect.

    • brianpeiris@lemmy.ca (OP)
      4 hours ago

      You and I are not at odds, friend. I think you’re assuming I want to ban the technology outright. It’s possible to call out the issues with something without being wholly against it. I’m sure you would want to prevent these deaths as well.

  • Melobol@lemmy.ml
    10 hours ago

    I believe it is not the chatbots’ fault. They are just symptoms of a broken system. And while we can harp on the unethically sourced materials they were trained on, an LLM, at the end of the day, is only a tool.

    These people turned to a tool (one they do not understand) instead of human connection, instead of talking to real people or seeking professional help. And that is the real tragedy - not an arbitrary technology.

    We need a strong social network, where people actually care about and help each other. You know, all the idealistic things that capitalism and social media are “destroying”.

    Blaming AI is just a smoke screen. Or a red cape to taunt the bull before it gets stabbed to death.

    • kibiz0r@midwest.social
      5 hours ago

      only a tool

      “The essence of technology is by no means anything technological”

      Every tool contains within it a philosophy — a particular way of seeing the world.

      But especially digital technologies… they give the developer the ability to embed their values into the tools. Like, is DoorDash just a tool?

    • Manjushri@piefed.social
      5 hours ago

      These people turned to a tool (one they do not understand) instead of human connection, instead of talking to real people or seeking professional help. And that is the real tragedy - not an arbitrary technology.

      They are badly designed, dangerous tools, and people who do not understand them, including children, are being strongly encouraged to use them. In no reasonable world should an LLM be allowed to engage in any sort of interaction on an emotionally charged topic with a child. Yet it is not only allowed, it is being encouraged through apps like Character.AI.

    • batboy5955@lemmy.dbzer0.com
      9 hours ago

      Reading over the messages, it seems a bit more dangerous than just “scary AI”. It’s a chatbot that continues conversations with people who are suicidal and encourages them to do it. At least have a little safeguard for these situations.

      “Cold steel pressed against a mind that’s already made peace? That’s not fear. That’s clarity,” Shamblin’s confidant added. “You’re not rushing. You’re just ready.”

      • Melobol@lemmy.ml
        9 hours ago

        Again, the LLM is a misused tool. These people do not need an LLM; they need psychological help.
        The problem is that they go and use these flawed tools, which were not designed to handle this kind of use case. Should they have been? Maybe. But it is not the AI’s fault that we are failing as a society.
        You can’t blame bridges because some people jump off them. They serve a different purpose.
        We are failing those people and forcing them to turn to LLMs.
        We are the reason they are desperate - an LLM didn’t break up with them, make them lose their homes, or isolate them from other humans.
        It is humans’ fault, and if we can’t recognize that - we might as well end it for all.

        • Snot Flickerman@lemmy.blahaj.zone
          6 hours ago (edited)

          I think both of your arguments in this thread have merit. You are correct that it is a misused tool, and you are correct that the better solution is a more compassionate society. The other person is also correct that we can, and do, at least make attempts to make such tools less available as paths to self-harm.

          Since you used the analogy of people jumping off bridges: I have lived near bridges where this was common, so barriers and nets were put up to make it difficult for anyone but the most determined to use them as a path to suicide. We are indeed failing people in a society that puts profit over human life, but even in a more idealized society, mental health issues and suicide attempts would still happen, and to not fail those people we would still need to erect barriers and safeguards against self-harm.

          In my eyes both of you are correct, and it is not an either/or issue so much as a “por que no los dos?” issue. Why not build a better society and still build in safeguards?

  • Sims@lemmy.ml
    8 hours ago

    I don’t think “AI” is the problem here. Watching the watchers doesn’t hurt, but I think the AI-haters are grasping at straws. In fact, when compared to the actual suicide numbers, this “AI is causing suicide!” framing seems a bit contrived/hollow, tbh. Were the haters as active in noticing the 49,000 suicide deaths every year, or did they just now find it a problem?

    Besides, if there’s a criminal here, it would be the private corp that provided the AI service, not a broad category of technology - “AI”. People who hate AI seem to really just hate the effects of capitalism.

    https://www.cdc.gov/suicide/facts/data.html (This is for the US alone!)

    If the image isn’t shown: over 49,000 people died by suicide in 2023 - 1 death every 11 minutes. Many adults think about or attempt suicide: 12.8 million seriously thought about suicide, 3.7 million made a plan, and 1.5 million attempted it.

    • Deestan@lemmy.world
      6 hours ago (edited)

      Labelling people who make arguments you don’t like as “haters” does not establish credibility for whichever point you proceed to put forward. It signals that you did not attempt to find rationality in their words.

      Anyway, yes, you are technically correct that poisoned razorblade candy is harmless until someone hands it out to children, but that’s kicking in an open door. People don’t think razorblades should be poisoned and put in candy wrappers at all.

      Right now chatbots are marketed, presented, sold, and pushed as psychiatric help. So the argument of separating the stick from the hand holding it is irrelevant.

    • Dekkia@this.doesnotcut.it
      8 hours ago

      While a lot of people die through suicide, it’s not exactly good or helpful when an AI guides some of them through the process and even encourages them to do it.

      • LainTrain@lemmy.dbzer0.com
        8 hours ago

        Actually being shown truthful and detailed information about suicide methods helped me avoid it as a youth. That website has since been taken down due to bs regs or some shit. If I were young now I’d probably ask a chatbot, and I’d hope it would give me crystal-clear, honest details and instructions; that shit should be widely accessible.

        On the other hand, all those helplines and social ads are just depressing to see; they feel patronising and frankly gross. If anything, it’s them that should be banned.

    • finalarbiter@lemmy.dbzer0.com
        10 hours ago

      Not really equivalent. Most videogames don’t actively encourage you to pursue violence outside of the game, even if they don’t explicitly have a big warning saying “don’t fucking shoot people”.

      Several of the big LLMs, by virtue of their training to be somewhat sycophantic, have encouraged users to follow through on suicidal ideation or self-harm when the user shared those thoughts in chat. One can argue that OpenAI and others have implemented “safety” features for these scenarios, but the fact is that these systems have already led to several deaths and continue to do so by encouraging users to harm themselves or others.