• silasmariner@programming.dev
    16 hours ago

    Honestly I agree with gbzm here. ‘I can’t see why it shouldn’t be possible’ is a far cry from ‘it’s inevitable’… And I’d hardly say we’re sprinting towards it, either. There are, in my view, dozens of absurdly difficult problems, any one of which may be insoluble, that stand between us and AGI. Anyone telling you otherwise is selling something or already bought in ;)

    People definitely are selling natural language interfaces as if they’re intelligent. It’s convincing, I guess, to some. It’s an illusion, though.

    • Perspectivist@feddit.uk
      15 hours ago

      This discussion isn’t about LLMs per se.

      However, I hope you’re right. Unfortunately, I’ve yet to meet anyone able to convince me that I’m wrong.

      • silasmariner@programming.dev
        14 hours ago

        We can’t know whether I’m wrong or you’re wrong, I guess. I am aware of the context of the discussion and mention LLMs as a reason the hype has picked back up. The processing requirements for true intelligence appear, to me, to be far outside the confines of what silicon chips are even theoretically capable of. Seems odd to me that you’d ever have a full AGI before, say, cyborgs (y’know, semi-biological hybrids). We shall see how things develop over the next half a century or so, and perhaps more light shall be shed.

        • Perspectivist@feddit.uk
          14 hours ago

          I’ve been worried about this since around 2016 - long before I’d ever heard of LLMs or Sam Altman. The way I see it, intelligence is just information processing done in a certain way. We already have narrowly intelligent AI systems performing tasks we used to consider uniquely human - playing chess, driving cars, generating natural-sounding language. What we don’t yet have is a system that can do all of those things.

          And the thing is, the system I’m worried about wouldn’t even need to be vastly more intelligent than us. A “human-level” AGI would already be able to process information so much faster than we can that it would effectively be superintelligent. I think that at the very least, even if someone doubts the feasibility of developing such a system, they should still be able to see how dangerous it would be if we actually did stumble upon it - however unlikely that might seem. That’s what I’m worried about.

          • silasmariner@programming.dev
            14 hours ago

            Yeah see I don’t agree with that base premise, that it’s as simple as information processing. I think sentience - and, therefore, intelligence - is a more holistic process that requires many more tightly-coupled external feedback loops and an embedding of the processes in a way that makes the processing analogous to the world as modelled. But who can say, eh?

            • Perspectivist@feddit.uk
              14 hours ago

              It’s not obvious to me that sentience has to come along for the ride. It’s perfectly conceivable that there’s nothing it’s like to be a superintelligent AGI system. What I’ve been talking about this whole time is intelligence — not sentience, or what I’d call consciousness.