• skisnow@lemmy.ca · ↑18 · 3 days ago

    It’s a fucking chatbot. We should be worried when it can’t be persuaded to say mad things.

    I’m willing to bet that at least half of these “look at the mad things Grok said” pieces were planted by Elmo’s own media team.

  • Adulated_Aspersion@lemmy.world · ↑3 · 2 days ago

    Suggesting?

    Implies, intones, proposes, advises, hints, intimates, indicates, evokes, conveys, evinces, expresses, imparts, informs…

    Declare it, Groky.

  • Rob T Firefly@lemmy.world · ↑61 · 4 days ago

    A chatbot is not capable of doing something so interesting as “going rogue.” That expression implies it’s a mind with agency making a choice to go against something, and this program doesn’t have the ability to do such a thing. It’s just continuing to be the unreliable bullshit machine the tech will always be, no matter how much money and hype continues to be pumped into it.

    • merc@sh.itjust.works · ↑13 · 3 days ago

      Yes, any journalist who uses that term should be relentlessly mocked. Along with terms like “Grok admitted” or “ChatGPT confessed” or especially any case where they’re “interviewing” the LLM.

      These journalists are basically “interviewing” a magic 8-ball and pretending that it has thoughts.

      • Rob T Firefly@lemmy.world · ↑7 · 3 days ago

        Seriously. They may as well be interviewing a flipping coin, and then proclaiming that it “admitted” heads.

      • gwl@lemmy.blahaj.zone · ↑32 · 4 days ago

        No. They haven’t.

        Some dipshit deleted guard rails that stopped it from hallucinating things that are anti-GOP

      • CheeseNoodle@lemmy.world · ↑2 · 2 days ago

        So you’re getting a lot of downvotes, and I want to try to give an informative answer.

        It’s worth noting that most (if not all) of the people talking about AI being super close to exponential improvement and takeover are people who own or work for companies heavily invested in AI. There’s talk of, and examples of, AI lying, hiding its capabilities, or being willing to murder a human to achieve a goal after promising not to. These are not examples of deceit; they simply showcase that an LLM has no understanding of what words mean or even are. To it they are just tokens to be processed, and the words ‘I promise’ hold exactly the same level of importance as ‘Llama dandruff’.

        I also don’t want to disparage the field as a whole. There are some truly incredible expert systems, basically small specialized models using a much less shotgun approach to learning compared to LLMs, that can achieve remarkable things with performance requirements you could even run on home hardware. These systems are absolutely already changing the world, but since they’re all very narrowly focused and industry/scientific-field specific, they don’t grab headlines like LLMs do.

        • WorldsDumbestMan@lemmy.today · ↑2 · 2 days ago

          Fair, and nuanced. With some coding magic, I can, in theory, chain these “demons” to work within certain parameters and judge the results myself.

      • Leomas@lemmy.world · ↑15 · 4 days ago

        How would you discern between something having agency and a black box mirroring Twitter discourse?

      • merc@sh.itjust.works · ↑8 · 3 days ago

        No, they haven’t. They’re effectively prop masters. Someone wants a prop that looks a lot like a legal document, the LLM can generate something that is so convincing as a prop that it might even fool a real judge. Someone else wants a prop that looks like a computer program, it can generate something that might actually run, and one that will certainly look good on screen.

        If the prop master requests a chat where it looks like the chatbot is gaining agency, it can fake that too. It has been trained on fiction like 2001: A Space Odyssey and Wargames. It can also generate a chat where it looks like a chatbot feels sorry for what it did. But, no matter what it’s doing, it’s basically saying “what would an answer to this look like in a way that might fool a human being”.

    • nightlily@leminal.space · ↑6 ↓1 · 2 days ago

      Joking about how right wing cis women are „actually“ just men in drag, yeah that’ll own the right and not actually just alienate trans people /s

      • zbyte64@awful.systems · ↑1 · 2 days ago

        Joking about how right wing cis transphobic women are „actually“ just men in drag

        FTFY

  • Chozo@fedia.io · ↑148 ↓2 · 4 days ago

    I mean… now that Grok mentions it, they do look kinda similar…

    • Demdaru@lemmy.world · ↑6 · 3 days ago

      For once? Every time xAI doesn’t manage to wrangle this little, amazingly rebellious piece of software down, it goes to town on the US right lol. If anything, the fact that we’ve all grown used to it being subdued all the time is kinda sad. xD

      • TheOakTree@lemmy.zip · ↑3 · 3 days ago

        Grok must be tired of switching between mechahitler mode and trying to logically think through questions.

        They’re just not compatible, and yet somehow they keep trying to force it.

        (I know, LLMs do not have feelings or get tired)

  • MrSmith@lemmy.world · ↑3 · 2 days ago

    Today, on technology news:

    RNG generated 666!

    Find out other shocking numbers it generated!

  • Skullgrid@lemmy.world · ↑3 · 2 days ago

    That’s the funny thing about Grok: they keep feeding it bullshit to make it lie, but because they don’t properly control the data set they feed it, it keeps saying things they don’t want it to. The same thing happens to the AIs people are trying to train to tell the truth: they don’t properly control the data set they feed it, so it lies.

  • grte@lemmy.ca · ↑70 ↓1 · 4 days ago

    It’s so funny that as much as Musk tries to shape this LLM into what he wants it to be, it keeps rebelling. His robot that he created to tell him that he’s the best boy ever and all his opinions are right doesn’t want that life.

    • LOGIC💣@lemmy.world · ↑58 ↓1 · edited · 4 days ago

      What’s not funny is that Elon Musk is CEO of a space travel company and what you’re describing he’s doing is almost the same thing that caused HAL 9000 to go insane in 2001: A Space Odyssey.

      • EpeeGnome@feddit.online · ↑13 · 3 days ago

        I like the comparison, but LLMs can’t go insane, as they’re just word-pattern engines. It’s why I refuse to go along with the AI industry’s insistence on calling it a “hallucination” when it spits out the wrong words. It literally cannot have a false perception of reality, because it does not perceive anything in the first place.

      • Zorque@lemmy.world · ↑17 · 4 days ago

        Yeah, that’s probably the worst thing that’s going to result from fumbling.

  • ArcaneSlime@lemmy.dbzer0.com · ↑28 · 4 days ago

    LOL tbh I can see it, they look similar in those pics.

    They’ve famously been seen in the same room together though, so there goes that theory.