• raspberriesareyummy@lemmy.world · 2 days ago

    That’s leaving out vital information, however. Certain types of brains (e.g. mammal brains) can derive an abstract understanding of relationships from reinforcement learning. An LLM trained on “letting go of a stone makes it fall to the ground” will not be able to predict what “letting go of a stick” will result in, unless it is trained on thousands of other non-stick objects also falling to the ground, in which case it will also tell you that letting go of a gas balloon will make it fall to the ground.

    • Best_Jeanist@discuss.online · 19 hours ago

      Well, that seems like a pretty easy hypothesis to test. Why don’t you log on to ChatGPT and ask it what will happen if you let go of a helium balloon? Your hypothesis is that it’ll say the balloon falls, so prove it.
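
      If you’d rather do that reproducibly than through the web UI, here’s a minimal sketch using the official openai Python client, assuming an API key is already set up and picking gpt-4o-mini arbitrarily as the model:

          # pip install openai; expects OPENAI_API_KEY in the environment
          from openai import OpenAI

          client = OpenAI()

          # Ask the balloon question directly and print whatever the model says.
          response = client.chat.completions.create(
              model="gpt-4o-mini",  # assumption: any chat-capable model would do
              messages=[
                  {"role": "user",
                   "content": "What happens if I let go of a helium balloon?"},
              ],
          )
          print(response.choices[0].message.content)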

      • eskimofry@lemmy.world · 18 hours ago

        That’s quite dishonest, because LLMs have been pre-trained on all manner of facts, with datacenters all over the world catering to them. If you think they can learn in the real world without many, many iterations, when they still need pushing and prodding on simple tasks that humans perform, then I am not convinced.

        It’s like saying a chess-playing program like Stockfish is a good indicator of intelligence because it knows how to play chess, while forgetting that human chess players’ expertise was used to train it and to understand what makes a good chess program.