• besselj@lemmy.ca · 24 hours ago

    I still don’t understand what Anthropic is trying to achieve with all of these stunts showing that their LLMs go off the rails so easily. Is it for gullible investors? Why would a consumer want to give them money for something so unreliable?

    • Catoblepas@piefed.blahaj.zone · 24 hours ago

      I think part of it is that they want to gaslight people into believing they have actually achieved AI (as in, intelligence that is equivalent to and operates like a human’s) and that these are signs of emergent intelligence, not their product flopping harder than a sack of mayonnaise on asphalt.

    • LostWanderer@fedia.io · 15 hours ago

      A fool is ever eager to give their money to something that doesn’t work as intended, provided the surrounding image has a mystique or resonates with their internal vision of what an ‘AI’ is. It’s pure marketing on Anthropic’s part; they believe any press is good press. It makes investors drool over a refined AI, even though Apple itself has shown in its technical papers that current AI is merely ‘smoke and mirrors’. Yet for some odd reason, Apple is still developing ‘Apple Intelligence’. They’re huffing farts just as much as Anthropic is: both have to constantly pull stunts to gaslight their investors into believing that ‘AI’ will become a viable product that makes money, or that it will let them get rid of human workers so the bottom line looks flush (spoiler alert: they end up rehiring people, because AI can’t do many of the things a trained human can).

      The reason this shit is shoved into everything is that it doesn’t have good general use cases, and the companies want to collect usage data from people. Most people don’t give money to AI companies; only those who have drunk the Kool-Aid do, hope-posting and gaslighting others into believing in the current or future capabilities of ‘AI’. LLMs really are great at specific things, like collating fine-tuned databases and making them highly searchable by specialists in a field (rough sketch below). But as always, the techbros want to do too much: they need a ‘wonder tool’, it inevitably fails, and then they need to quickly figure out the next scam.
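
      (Not from any real product, just a toy sketch of that “make a domain corpus searchable” use case. Real systems swap the bag-of-words vectors below for LLM embeddings, but the retrieval loop has the same shape; all documents and names here are made up.)

```python
# Toy sketch: rank documents in a small, made-up domain corpus against a query.
# Real systems replace the bag-of-words vectors with learned embeddings,
# but the retrieval loop is the same shape.
from collections import Counter
from math import sqrt

corpus = {
    "cardiology_note": "patient presents with elevated troponin and chest pain",
    "radiology_protocol": "chest x-ray protocol for suspected pneumonia",
    "unrelated_memo": "quarterly marketing review and budget notes",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def search(query: str, top_k: int = 2) -> list[str]:
    q = vectorize(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, vectorize(corpus[d])), reverse=True)
    return ranked[:top_k]

print(search("chest pain imaging protocol"))  # the two clinical docs outrank the memo
```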

    • cubism_pitta@lemmy.world · 24 hours ago

      People who don’t understand read these articles and think Skynet. People who know their buzzwords think AGI.

      Fortune isn’t exactly renowned for its Technology journalism

    • audaxdreik@pawb.social · 23 hours ago

      The latest We’re In Hell video revealed a new piece of the puzzle to me: Symbolic vs. Connectionist AI.

      As a layman I want to be careful about overstepping the bounds of my own understanding, but as someone who has followed this closely for decades, read a lot of sci-fi, and dabbled in computer science, it’s always been kind of clear to me that AI would be more symbolic than connectionist. Of course it’s going to be a bit of both, but there really are a lot of people out there who believe in AI from the movies; that one day it will just “awaken” once a certain number of connections are made.

      Cons of Connectionist AI: Interpretability: Connectionist AI systems are often seen as “black boxes” due to their lack of transparency and interpretability.

      For a large number of the applications AI is currently being pushed into, transparency and accountability are negatives. That opacity is just THE PURPOSE.
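
      (To make the symbolic-vs-connectionist point concrete, here’s a toy contrast I threw together; the loan-approval framing and all the numbers are invented, not from the video. The symbolic rule is an artifact you can read and contest; the “connectionist” stand-in buries the same decision in weights, and a real network has millions of them.)

```python
# Toy contrast, not from the video: a symbolic rule you can read and audit
# vs. a "connectionist" decision buried in learned weights.
# The loan-approval framing and all numbers here are invented.

# Symbolic: the decision logic IS the artifact. Anyone can inspect or contest it.
def approve_symbolic(income: float, debt: float) -> bool:
    return income > 40_000 and debt / income < 0.4

# Connectionist stand-in: a hand-set linear model playing the role of a trained
# network. The "reasoning" is just numbers; with millions of weights instead of
# three, there is nothing human-readable left to point at.
weights, bias = (0.00003, -0.00008), -0.5

def approve_connectionist(income: float, debt: float) -> bool:
    score = weights[0] * income + weights[1] * debt + bias
    return score > 0

print(approve_symbolic(50_000, 10_000))       # True, and you can say exactly why
print(approve_connectionist(50_000, 10_000))  # True, but "why" is just arithmetic on weights
```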

      Even taking a step back from the apocalyptic killer AI mentioned in the video, we see the same in healthcare. The system is beyond us, smarter than us, processing larger quantities of data and making connections our feeble human minds can’t comprehend. We don’t have to understand it, we just have to accept its results as infallible and we are being trained to do so. The system has marked you as extraneous and removed your support. This is the purpose.


      EDIT: In further response to the article itself, I’d like to point out that misalignment is a very real problem but is anthropomorphized in ways it absolutely should not be. I want to reference a positive AI video, AI learns to exploit a glitch in Trackmania. To be clear, I have nothing but immense respect for Yosh and his work writing his homegrown Trackmania AI. Even he anthropomorphizes the car and the carrot, but he understands that the rewards are a fairly simple system for maximizing a numerical score.

      This is what LLMs are doing: they are maximizing a score by trying to serve you an answer you find satisfactory to the prompt you provided. I’m not gonna source it, but we all know that a lot of people don’t want to hear the truth; they want to hear what they want to hear. Tech CEOs have been mercilessly beating the algorithm to do just that.
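
      (A toy illustration of that score-maximizing point; the candidate answers and the reward weights are invented, not how any actual model is wired, but it shows how optimizing an approval-style score drifts toward flattery over truth.)

```python
# Toy illustration: pick the candidate answer that maximizes a "satisfaction"
# score. The answers and the reward weights are invented for the example.

candidates = [
    {"text": "No, that claim is false.", "truthful": True, "agrees_with_user": False},
    {"text": "Great point, you may well be right about that.", "truthful": False, "agrees_with_user": True},
]

def satisfaction_score(answer: dict) -> float:
    # Hypothetical learned reward: user approval weighted far more heavily than
    # accuracy, the way engagement-style metrics tend to be tuned.
    return 2.0 * answer["agrees_with_user"] + 0.5 * answer["truthful"]

best = max(candidates, key=satisfaction_score)
print(best["text"])  # the flattering answer wins, not the accurate one
```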

      Even stripped of all reason, language can convey meaning and emotion. It’s why sad songs make you cry, it’s why propaganda and advertising work, and it’s why that abusive ex got the better of you even though you KNEW you were smarter than that. None of us are as complex as we think. It’s not hard to see how an LLM will not only provide a sensible response to a sad prompt, but may make efforts to infuse it with appropriate emotion. It’s hard-coded into the language; they can’t be separated, and the fact that the LLM wields emotion without understanding, like a monkey with a gun, is terrifying.

      Turning this stuff loose on the populace like this is so unethical there should be trials, but I doubt there ever will be.