• 11111one11111@lemmy.world

    Just for the sake of playing a stoner-epiphany style of devil's advocate: how does that differ from how actual logical arguments are proven? Hell, why stop there. Is there a single thing in the universe that can't be broken down into a mathematical equation for physics or chemistry? I'm curious how different the process is between a more advanced LLM or AGI model processing data and a severe-case savant memorizing libraries of books using their homemade mathematical algorithms. I know it's a leap and I could be wrong, but I thought I'd heard that some of the Rain Man tier of savants actually process every experience in a mathematical language.

    Like I said at the beginning, this is straight-up bong-rip philosophy and I haven't looked up any of the shit I brought up.

    I will say though, I genuinely think the whole LLM thing is without a doubt one of the most amazing advances in technology since the internet. With that being said, I also agree that it has a niche it will be useful within and not much beyond. The problem is that everyone and their slutty mother investing in LLMs is using them for everything they are not useful for, and we won't see any effective use of AI services until all the current idiots realize they poured hundreds of millions of dollars into something that can't perform independently any better than a 3-year-old.

    • finitebanjo@lemmy.world

      First of all, I'm about to give the extremely dumbed-down explanation, but there are actual academics covering this topic right now, usually using keywords like AI "emergent behavior" and "overfitting". More specifically, about how emergent behavior doesn't really exist in certain model archetypes, and how overfitting increases accuracy but effectively makes the model more robotic and useless. There are also studies of how humans think.
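
      If "overfitting" is unfamiliar, here's a rough toy sketch in Python of what the term means in general, totally separate from those papers and from any actual LLM: crank a model's flexibility up far enough and it nails the training points perfectly while getting worse on anything new.

      ```python
      # Toy overfitting demo: a wildly flexible polynomial memorizes noisy
      # training data (train error near 0) but generalizes worse than a simple fit.
      import numpy as np

      rng = np.random.default_rng(0)
      x_train = np.linspace(0, 1, 15)
      y_train = np.sin(2 * np.pi * x_train) + rng.normal(0, 0.2, x_train.size)
      x_test = np.linspace(0, 1, 100)
      y_test = np.sin(2 * np.pi * x_test)

      for degree in (3, 14):
          coeffs = np.polyfit(x_train, y_train, degree)  # fit a degree-n polynomial
          train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
          test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
          print(f"degree {degree:2d}: train MSE {train_err:.4f}, test MSE {test_err:.4f}")
      ```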

      Anyways, humans don't assign numerical values to words and phrases for the purpose of making a statistical model of a response to a statistically modeled input.

      Humans suck at math.

      Humans store data in a much messier, unorganized way, and retrieve it by tracing stacks of related concepts back to the root, or fail to memorize it altogether. The values are incredibly diverse and carry many attributes. Humans do not hallucinate entire documentations or describe company policies that don't exist to customers, because we understand the branching complexity and nuance behind each individual word and phrase. For a human to describe procedures or creatures that do not exist, we would have to be lying for some perceived benefit such as entertainment, unlike an LLM, which meant that shit it said but just doesn't know any better. Just doesn't know, period.
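
      To make the contrast concrete, here's a toy numpy sketch of the "assign numerical values to words" side. The three-word vocabulary and random vectors are made up for illustration, not any real model's weights; the point is that the model only ever sees numbers and emits whichever continuation scores highest, true or not.

      ```python
      # Toy sketch: words become vectors of numbers, a score is computed for each
      # possible next word, and the highest-probability one wins. No concept of truth.
      import numpy as np

      rng = np.random.default_rng(1)
      vocab = ["the", "policy", "says"]
      embed = {w: rng.normal(size=4) for w in vocab}       # each word -> a numeric vector
      output_weights = rng.normal(size=(len(vocab), 4))    # one row of weights per word

      context = embed["the"] + embed["policy"]             # crude "context" = sum of word vectors
      logits = output_weights @ context                    # one score per word in the vocabulary
      probs = np.exp(logits) / np.exp(logits).sum()        # softmax -> probability distribution

      for word, p in zip(vocab, probs):
          print(f"P(next = {word!r}) = {p:.2f}")
      print("model picks:", vocab[int(np.argmax(probs))])  # most probable token, true or not
      ```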

      Maybe an LLM could approach that at some scale if each word had its own model with massively more data, but given the diminishing returns displayed so far as we feed in more and more processing power, that would take more money and electricity than has ever existed on earth. In fact, that aligns pretty well with OpenAI's statement that it could make an AGI if it had trillions of dollars to spend and years to spend it. (They're probably underestimating the costs by orders of magnitude.)

      • naught101@lemmy.world

        emergent behavior doesn’t really exist in certain model archetypes

        Hey, would you have a reference for this? I’d love to read it. Does it apply to deep neural nets? And/or recurrent NNs?

        • finitebanjo@lemmy.world

          There is this 2023 study from Stanford which argues that AI models likely do not have emergent abilities: LINK

          And there is this 2020 study by… OpenAI… which states that the error rate is predictable from three factors, and that AI cannot cross below that line or approach a 0% error rate without exponentially increasing costs several iterations beyond current models, lending to the idea that they're predictable to a fault: LINK

          There is another paper by DeepMind in 2022 that comes to the conclusion that even at infinite scale the loss can never drop below an irreducible error of about 1.69: LINK
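
          If you want to poke at the math yourself, that DeepMind result boils down to a loss formula of roughly the form L(N, D) = E + A/N^α + B/D^β, where N is the parameter count and D is the number of training tokens. The constants below are my recollection of their fitted values (E ≈ 1.69 being the irreducible term), so treat the numbers as ballpark illustration rather than gospel:

          ```python
          # Chinchilla-style scaling law sketch; constants are recalled from the
          # DeepMind 2022 paper and may be off, so this is illustrative only.
          E, A, B = 1.69, 406.4, 410.7
          alpha, beta = 0.34, 0.28

          def loss(n_params: float, n_tokens: float) -> float:
              """Predicted pretraining loss for n_params parameters and n_tokens tokens."""
              return E + A / n_params**alpha + B / n_tokens**beta

          for n, d in [(70e9, 1.4e12), (1e12, 20e12), (1e15, 2e16)]:
              print(f"N={n:.0e}, D={d:.0e} -> predicted loss ~ {loss(n, d):.3f}")

          # However large N and D get, the last two terms only shrink toward zero,
          # so the predicted loss never crosses below E = 1.69.
          ```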

          This all lends credence to the idea that AI lacks the emergent behavior present in human language.