My short response. Yes.

  • AmbiguousProps@lemmy.today · 7 days ago

    Just because they want that doesn’t mean they understand how it works. It’s not currently feasible from a technical and resource perspective.

    • Lunatique Princess@lemmy.ml (OP) · 6 days ago

      Their not understanding how it works makes it worse. The fact is they’re still attempting to create it, and people are just tolerating it.

          • AmbiguousProps@lemmy.today · 6 days ago

            “What they are going to do” is exactly that: marketing. It’s not technically feasible; they just want to ride the bubble a little longer and convince people otherwise (like they’ve convinced you).

            • Lunatique Princess@lemmy.ml (OP) · 6 days ago

              Do you have any technological understanding of AI, or are you just a layman? Do you read white papers? Are you familiar with alignment, quantization, parameters, RLHF? Have you ever used a local LLM and had it argue with you, refusing your request? Or are you just some simplified shell of a human who thinks this is purely a capitalist endeavor, while simultaneously not even understanding how it works at all?

              • AmbiguousProps@lemmy.today · 6 days ago (edited)

                Holy shit, you’re too far gone. Yes, I’ve operated my own local LLM front end for personal use, and I’ve frequently run models at a low level with no front end, messing with the parameters directly. I’ve modified models as well, and I have a server designed to run them. It’s insane that you think those terms apply to this topic (billionaires lying to you). You’re just throwing random terminology around to make yourself seem smart and to reinforce your stance, but all that does is make you seem insecure about your knowledge. It does quite the opposite of what you intend.
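
                For what it’s worth, “no front end” just means calling the model directly from code and setting the sampling knobs yourself. A minimal sketch using llama-cpp-python, one common way to do this; the model path and the sampling values here are placeholders, not anything specific:

                ```python
                # Load a local GGUF model and call it directly, with no UI in between.
                from llama_cpp import Llama

                # Placeholder path to a locally downloaded GGUF model file.
                llm = Llama(model_path="./models/some-model.gguf", n_ctx=2048)

                out = llm(
                    "Explain what temperature does during sampling.",
                    max_tokens=128,
                    temperature=0.7,   # higher = more random token choices
                    top_p=0.9,         # nucleus sampling cutoff
                    repeat_penalty=1.1,
                )
                print(out["choices"][0]["text"])
                ```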

                All LLMs do is hallucinate, and sometimes, by pure coincidence, get things correct. This is why it’s impossible to get rid of hallucinations. They do not think, they do not have their own goals (refusing a request can be baked in, but that’s no different than programming something with guardrails), and they certainly will not suddenly become sentient. LLMs cannot do that, by design. Perhaps something else could, but LLMs are not that.
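
                To make the guardrail point concrete: a “refusal” can literally be a few lines of ordinary code wrapped around the model call, with no goals or intent involved. A hypothetical sketch, where the blocklist and the generate callable are stand-ins rather than any particular product’s safety system:

                ```python
                # A refusal implemented as a plain guardrail: an ordinary string check
                # that runs before the model is ever invoked. Nothing here "decides".
                BLOCKED_TOPICS = ["topic_a", "topic_b"]  # placeholder blocklist

                def guarded_generate(prompt: str, generate) -> str:
                    """Return generate(prompt) unless the prompt trips a simple rule."""
                    lowered = prompt.lower()
                    if any(topic in lowered for topic in BLOCKED_TOPICS):
                        return "I can't help with that request."  # hard-coded refusal
                    return generate(prompt)

                # Usage with any callable that maps a prompt string to a completion:
                # reply = guarded_generate("Tell me about topic_a", my_local_llm)
                ```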

                You really had to go to insults for this? I think you need to touch grass and stop believing billionaire marketing. LLMs are not technologically capable of doing what you have been brainwashed to believe. They will crash the economy when the bubble bursts, because MBAs and billionaires have convinced you and rich VCs that they can do more than they actually can.