• just_another_person@lemmy.world · 4 hours ago

    LLMs are just fast sorting and probability, they have no way to ever develop novel ideas or comprehension.

    The system he’s talking about is more about using NNL, which builds new relationships between things and persists them. It’s deferential relationship learning and data-path building. It doesn’t exist yet, so if he has some ideas, it may be interesting. It’s also more likely to be the thing that kills all humans.

    • nymnympseudonym@piefed.social · 2 hours ago

      LLMs are just fast sorting and probability, they have no way to ever develop novel ideas or comprehension

      And how do you think animal brains develop comprehension…?

      • just_another_person@lemmy.world · 2 hours ago

        Animal brains have plastic networks of neurons and synapses that build and persist new relationships between things. LLMs do not. This is why they can’t have novel or spontaneous ideation. They don’t “learn” anything, no matter what Sam Altman is pitching you.
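The frozen-weights point above can be illustrated with a toy model. At inference time a trained model’s parameters are read-only: generating text reads them but never updates them. A minimal sketch with a hypothetical bigram table (not any real LLM):

```python
import random

# Toy stand-in for a trained model: a frozen bigram table mapping each
# token to next-token probabilities. Generation only *reads* this table.
BIGRAMS = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 1.0},
    "dog": {"ran": 1.0},
}

def generate(start, steps, rng):
    """Sample a short continuation; nothing here writes to BIGRAMS."""
    token, out = start, [start]
    for _ in range(steps):
        choices = BIGRAMS.get(token, {"<eos>": 1.0})
        token = rng.choices(list(choices), weights=list(choices.values()))[0]
        out.append(token)
    return out

snapshot = {k: dict(v) for k, v in BIGRAMS.items()}
print(generate("the", 2, random.Random(0)))
assert BIGRAMS == snapshot  # the "weights" are unchanged after generation
```

Training is the step that would write new values into those weights; that step simply never runs during a chat session, which is the sense in which deployed models don’t learn.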

        Now…if someone develops this ability, then they might be able to move more towards that…which is the point of this article and why the guy is leaving to start his own project doing this thing.

        So you sort of sarcastically answered your own stupid question 🤌

      • just_another_person@lemmy.world · 11 minutes ago

        Lol 🤣 I’m SO EMBARRASSED. You’re totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.

        I’ll never speak to this topic again since I’ve clearly been bested with your knowledge from a Google Blog.

        • Communist@lemmy.frozeninferno.xyz · 1 hour ago

          yes, Google reported that their AI discovered a novel cancer treatment; of course they did.

          now tell me how it isn’t true. Do you have anything of substance to discredit this?

          this reeks of confirmation bias. Did you even try to invalidate your preconceived notions?

          • just_another_person@lemmy.world · 1 hour ago

            I sure do. Knowledge, and being in the space for a decade.

            Here’s a fun one: go ask your LLM why it can’t create novel ideas, it’ll tell you right away 🤣🤣🤣🤣

            LLMs have ZERO intentional logic that allows them to even comprehend an idea, let alone craft a new one and create relationships between others.

            I can already tell from your tone that you’re mostly driven by bullshit PR hype from people like Sam Altman, and are an “AI” fanboy, so I won’t waste my time arguing with you. You’re in love with human-made logic loops and datasets, bruh. There is not now, nor was there ever, a way for any of it to become some supreme being of ideas and knowledge as you’ve been pitched. It’s super-fast sorting over static data. That’s it.

            You’re drunk on Kool-Aid, kiddo.

            • Communist@lemmy.frozeninferno.xyz · 1 hour ago

              You sound drunk on Kool-Aid; this is a validated scientific report from Yale. Tell me a problem with the methodology, or anything of substance.

              so what if that’s how it works? It clearly is capable of novel things.

              • just_another_person@lemmy.world · 13 minutes ago

                🤦🤦🤦 No…it really isn’t:

                Teams at Yale are now exploring the mechanism uncovered here and testing additional AI-generated predictions in other immune contexts.

                Not only is there no validation, they have only begun even looking at it.

                Again: LLMs can’t make novel ideas. This is PR, and because you’re unfamiliar with how any of it works, you assume MAGIC.

                Like every other bullshit PR release of its kind, this is simply a model being fed a ton of data and running through thousands of millions of iterative segments, testing outcomes of various combinations of things that would take humans years to do. It’s not that it is intelligent or making “discoveries”; it’s just moving really fast.

                You feed it 102 combinations of amino acids, and it’s eventually going to find new chains needed for protein folding. The thing you’re missing there is:

                1. all the logic programmed by humans
                2. The data collected and sanitized by humans
                3. The task groups set by humans
                4. The output validated by humans

                It’s a tool for moving fast through data, a.k.a. A REALLY FAST SORTING MECHANISM.

                Nothing at any stage of development is novel output, or validated by any of the models, because…they can’t do that.
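The enumerate-and-score pattern the numbered list above describes can be sketched in a few lines. Everything here is hypothetical toy code, not the pipeline from the article: humans supply the alphabet, the objective, and the validation; the machine only enumerates candidates quickly.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard residues (human-chosen alphabet)

def score(peptide: str, target: str = "ACE") -> int:
    # Human-written objective: count positions matching a known motif.
    return sum(a == b for a, b in zip(peptide, target))

# The "fast" part: exhaustively enumerate all 20**3 = 8000 tripeptides
# and keep the best one under the human-defined score.
best = max(("".join(p) for p in product(AMINO_ACIDS, repeat=3)), key=score)
print(best)  # → "ACE": the search can only rediscover what the objective encodes
```

The search looks productive only because the scoring function already encodes what counts as a hit; swap in a different human-written objective and the same loop “discovers” something else.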

                  • Communist@lemmy.frozeninferno.xyz · 56 minutes ago

                    He knows the basics, it’s just that they don’t lead to any of the conclusions he’s claiming they do. He also boldly assumes that everyone who disagrees with him doesn’t know anything. He’s a beast of confirmation bias.

                • Communist@lemmy.frozeninferno.xyz · 59 minutes ago

                  You addressed that they haven’t fully tested the hypothesis while completely overlooking the fact that an AI suggested a novel hypothesis… even if it turns out to be wrong, it is still undeniably a novel hypothesis. That is what was validated by Yale…

                  you have still failed to answer the question. You’re also neglecting to include an explanation of temperature in your argument, which may be relevant here.
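For readers unfamiliar with the temperature mentioned above: it is a scalar that rescales a model’s logits before the softmax, trading determinism for variety in sampling. A minimal sketch with made-up toy logits, not values from any real model:

```python
import math

def softmax_with_temperature(logits, temperature):
    # T < 1 sharpens the distribution toward the top logit (near-greedy);
    # T > 1 flattens it, so unlikely tokens get sampled more often.
    scaled = [x / temperature for x in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax_with_temperature(logits, 0.5)
hot = softmax_with_temperature(logits, 2.0)
assert cold[0] > hot[0]  # low temperature concentrates mass on the top token
```

This is why temperature is relevant to arguments about novelty: at higher temperatures the sampler deliberately picks tokens the model itself rates as unlikely, which is one mechanical source of unexpected output.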