• Best_Jeanist@discuss.online
    19 hours ago

    Well that seems like a pretty easy hypothesis to test. Why don’t you log on to chatgpt and ask it what will happen if you let go of a helium balloon? Your hypothesis is it’ll say the balloon falls, so prove it.

    • eskimofry@lemmy.world
      18 hours ago

      That’s quite dishonest, because LLMs have been pre-trained on all manner of facts, with datacenters all over the world catering to them. If you think one can learn in the real world without many, many iterations, when it still needs pushing and prodding on simple tasks that humans perform easily, then I am not convinced.

      It’s like saying a chess-playing program like Stockfish is a good indicator of intelligence because it knows how to play chess, while forgetting that human chess players’ expertise was used to train it and to define what makes a good chess program.