People have been betting on independent reasoning emerging as a property of AI, without much success so far. So it was exciting when OpenAI said its AI had scored at a Gold Medal level at the International Mathematical Olympiad (IMO), a mathematical reasoning competition for the world's best high school students.

However, Australian mathematician Terence Tao says it may not be as impressive as it seems. In short, the test conditions were potentially far easier for the AI than for the humans, and the AI was given far more time and resources to achieve the same results. On top of that, we don't know how many wrong attempts were discarded before OpenAI selected the best answers, something that can't happen in the human competition.

There’s another problem, too. Unlike with humans, an AI being good at math is not a good indicator of general reasoning skill. It is easy for a model to reproduce techniques from the corpus of human knowledge it was trained on, which gives the semblance of understanding. AI still doesn’t seem good at transferring that reasoning to novel, unrelated problems.

  • phdepressed@sh.itjust.works · 4 days ago

    More likely he believes in the con himself. All these rich people are convincing themselves that it just needs “a bit more,” when the reality is that an LLM is closer to T9 text prediction than it is to the general AI of sci-fi. The general AI they want needs a different paradigm of computing; few if any AI researchers actually believe that increasing the size of datasets/training will result in it.

    A similar thing occurred in genetics: first the push to sequence the whole human genome, then to sequence populations/the world. And while these sequences and analyses have been very helpful, the data cannot give a full understanding of genetics regardless of how many people you sequence. Updated analysis algorithms can do better than the early ones, but they still won’t yield a full understanding of the human genome.