• justOnePersistentKbinPlease@fedia.io · 1 day ago

    LLMs are a dead end to AGI. They do not reason or understand in any way. They only mimic it.

    It is fundamentally the same technology as the first chatbots 20 years ago; the difference is that LLMs now have models approaching a trillion parameters instead of a few thousand.

      • justOnePersistentKbinPlease@fedia.io · 1 day ago

        They are the closest thing to AI that we have. The so-called LRMs fake their reasoning.

        They do not think or reason. We are, at the very best, decades away from anything resembling an AI.

        The best LLMs can do is a Mass Effect (1)-style VI, and even that is still more than a decade away.

        • Perspectivist@feddit.uk · 22 hours ago

          The chess opponent on the Atari is AI; we’ve had AI systems for decades.

          An asteroid impact being decades away doesn’t make it any less concerning. My worries about AGI aren’t about the timescale, but about its inevitability.

          • Sconrad122@lemmy.world · 11 hours ago

            Decades is plenty of time for society to experience a collapse or major setback that prevents AGI from being discovered within the lifetime of anyone alive today. Whether that comes from war, famine, or natural phenomena induced by man-made climate change, we have plenty of opportunities as a species to take the offramp and never “discover” AGI. This comment is brought to you by optimistic existentialism.

    • m532@lemmygrad.ml · 1 day ago

      No, the first chatbots didn’t have neural networks inside. They didn’t have intelligence.

      • booty [he/him]@hexbear.net · edited · 1 day ago

        LLMs aren’t intelligence. We’ve had similar technology in more primitive forms for a long time, like Markov chains. LLMs are hyper-specialized at passing a Turing test but are not good at basically anything else.
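
        For context on that comparison, here is a minimal sketch of a word-level Markov chain text generator; the toy corpus and the order-1 (bigram) design are illustrative assumptions, not anything from the comment above. Unlike an LLM, it predicts each next word from the current word alone.

            import random
            from collections import defaultdict

            # Toy order-1 (bigram) Markov chain text generator.
            # The training text is a made-up placeholder; any corpus would do.
            corpus = "the model predicts the next word from the current word only".split()

            # Transition table: word -> list of words observed to follow it.
            transitions = defaultdict(list)
            for current, nxt in zip(corpus, corpus[1:]):
                transitions[current].append(nxt)

            def generate(start, length=8):
                """Walk the chain, picking each next word at random from observed successors."""
                word, output = start, [start]
                for _ in range(length):
                    followers = transitions.get(word)
                    if not followers:
                        break
                    word = random.choice(followers)
                    output.append(word)
                return " ".join(output)

            print(generate("the"))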