• Avid Amoeba@lemmy.caOP · 2 hours ago

    Also he thinks LLMs are a dead end for getting smarter AI while Zuck is doubling down on them.

    • UnderpantsWeevil@lemmy.world · 1 hour ago

      Well, he’s got a bowtie and Zuck wears an oversized t-shirt with Bugs Bunny dressed as a 90s rapper.

      They certainly can’t both be wrong, can they?

    • tomiant@piefed.social · 1 hour ago

      Getting Smarter AI < Making More Money

      Is there more money in smarter AI or in manipulating people’s voting patterns with the tools you’ve got?

      I saw Suck at Trump’s inauguration; I didn’t see this Chinese feller there.

      • nymnympseudonym@piefed.social · 41 minutes ago

        this Chinese feller

        He’s French, actually.

        This is one of the three people who basically invented deep learning. One of the others is Geoffrey Hinton, who won the 2024 Nobel Prize in Physics.

        No matter what you think of LeCun or his opinions… he’s damn well worth listening to with attention and respect.

  • tal@lemmy.today · 2 hours ago

    Meta’s chief AI scientist and Turing Award winner Yann LeCun plans to leave the company to launch his own startup focused on a different type of AI called “world models,” the Financial Times reported.

    World models are hypothetical AI systems that some AI engineers expect to develop an internal “understanding” of the physical world by learning from video and spatial data rather than text alone.

    Sounds reasonable.

    That being said, I am willing to believe that an LLM could be part of an AGI. It might well be an efficient way to incorporate a lot of knowledge about the world. Wikipedia helps provide me with a lot of knowledge, for example, though I don’t have a direct brain link to it. It’s just that I don’t expect an AGI to be an LLM.

    EDIT: Also, IIRC from past reading, Meta has separate groups aimed at near-term commercial products (and I can very much believe that there might be plenty of room for LLMs there) and at advanced AI. It’s not clear to me from the article whether he just wants more focus on advanced AI or whether he disagrees with an LLM focus in their advanced AI group.

    I do think that if you’re a company building a lot of parallel compute capacity now, then to make a return on it, you need to take advantage of existing or quite near-future stuff, even if it’s not AGI. It doesn’t make sense to build a lot of compute capacity and then spend fifteen years banging on research before you have something to utilize that capacity.

    https://datacentremagazine.com/news/why-is-meta-investing-600bn-in-ai-data-centres

    Meta reveals US$600bn plan to build AI data centres, expand energy projects and fund local programmes through 2028

    So Meta probably can’t be doing only AGI work.
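    The quoted “world models” idea can be illustrated with a toy sketch. Everything here is invented for illustration (the dynamics, the function names); it is not anything Meta or LeCun has published. The point is only the shape of the idea: instead of predicting the next token, the system predicts the next state of an environment from the current state and an action, and can “imagine” rollouts without touching the real world.

```python
def step_model(state, action):
    # Trivial hand-written "learned dynamics": position and velocity
    # of a 1-D object, where the action is an acceleration.
    pos, vel = state
    new_vel = vel + action
    new_pos = pos + new_vel
    return (new_pos, new_vel)

# Imagining a rollout entirely inside the model:
state = (0.0, 0.0)
for a in [1.0, 0.0, -1.0]:
    state = step_model(state, a)

print(state)  # → (2.0, 0.0): the model's prediction after three imagined actions
```

    A real world model would learn `step_model` from video and spatial data rather than have it hard-coded, but the interface (state, action → next state) is the contrast with next-token prediction.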

    • Avid Amoeba@lemmy.caOP · 57 minutes ago

      I saw a short interview with him on France 24, and he mainly said he thinks the current direction of the research teams at Meta is wrong. He contrasted a top-down, push-to-deliver org with a long-leash one that leaves the researchers to experiment with things. He said Meta shifted from the latter to the former, and he doesn’t agree with that approach.

    • tomiant@piefed.social · 1 hour ago

      Look, AGI would require basically a human brain. LLMs are a very specific subset mimicking an (important) part of the brain: our language module. There’s more, but I got interrupted by a drunk guy who needs my attention. I’ll be back.

    • UnderpantsWeevil@lemmy.world · 1 hour ago

      Sounds reasonable.

      Does it, though? Feels like we’re just rewriting the sales manual without thinking about what “learning from video” would actually entail.

      Doesn’t make sense to build a lot of compute capacity, then spend fifteen years banging on research before you have something to utilize that capacity.

      There’s an old book from back in 2008 - Killing Sacred Cows: Overcoming the Financial Myths That Are Destroying Your Prosperity - that a lot of the modern Techbros took perhaps too closely to heart. It posited that chasing the next generation of technological advancement was more important than keeping your existing revenue streams functional, and that you really should kill the golden goose if it means you’ve got a shot at a new one in the near future.

      What these Tech Companies are chasing is the Next Big Thing, even when they don’t really understand what that is. And they’re so blindly devoted to advancing the technological curve that they really will blow a trillion dollars (mostly of other people’s money) on whatever it is they think that might be.

      The real problem is that these guys are, largely, uncreative, incurious, and not particularly intelligent. So they leap on fads rather than pursuing meaningful blue-sky research. And that gives us this endless recycling of sci-fi tropes as a stand-in for material investments in productive next-generation infrastructure.

    • just_another_person@lemmy.world · 2 hours ago

      LLMs are just fast sorting and probability; they have no way to ever develop novel ideas or comprehension.

      The system he’s talking about is more about using NNL, which builds new relationships to things that persist. It’s deferential relationship learning and data-path building. It doesn’t exist yet, so if he has some ideas, it may be interesting. It’s also more likely to be the thing that kills all humans.
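      For what it’s worth, the “fast sorting and probability” description roughly corresponds to next-token sampling. Here is a toy sketch of just that mechanism; the hand-written probability table stands in for billions of learned weights, and everything in it is illustrative, not how any real model stores its distribution:

```python
import random

# Toy "language model": fixed conditional next-token probabilities.
# A real LLM computes a distribution like each row here with a neural
# network, then samples from it, token after token.
model = {
    "the": {"cat": 0.6, "dog": 0.3, "<end>": 0.1},
    "cat": {"sat": 0.7, "ran": 0.2, "<end>": 0.1},
    "dog": {"ran": 0.8, "<end>": 0.2},
    "sat": {"<end>": 1.0},
    "ran": {"<end>": 1.0},
}

def generate(token, rng):
    out = [token]
    while token != "<end>":
        dist = model[token]
        r, acc = rng.random(), 0.0
        for nxt, p in dist.items():  # sample the next token by probability
            acc += p
            if r <= acc:
                token = nxt
                break
        if token != "<end>":
            out.append(token)
    return " ".join(out)

print(generate("the", random.Random(0)))  # → "the dog ran"
```

      Whether stacking enough of this yields comprehension is exactly what the thread is arguing about; the sketch only shows the mechanism being named.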

        • just_another_person@lemmy.world · 14 minutes ago

          Lol 🤣 I’m SO EMBARRASSED. You’re totally right and understand these things better than me after reading a GOOGLE BLOG ABOUT THEIR PRODUCT.

          I’ll speak to this topic again since I’ve clearly been bested by your knowledge from a Google blog.

          • Communist@lemmy.frozeninferno.xyz · 8 minutes ago

            Yes, Google reported that their AI discovered a novel cancer treatment. Of course they did.

            Now tell me how it isn’t true. Do you have anything of substance to discredit this?

      • nymnympseudonym@piefed.social · 40 minutes ago

        LLMs are just fast sorting and probability, they have no way to ever develop novel ideas or comprehension

        And how do you think animal brains develop comprehension…?

        • just_another_person@lemmy.world · 10 minutes ago

          Animal brains have pliable neuron networks and synapses to build and persist new relationships between things. LLMs do not. This is why they can’t have novel or spontaneous ideation. They don’t “learn” anything, no matter what Sam Altman is pitching you.

          Now…if someone develops this ability, then they might be able to move more towards that…which is the point of this article and why the guy is leaving to start his own project doing this thing.

          So you sort of sarcastically answered your own stupid question 🤌
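          The frozen-weights point above can be sketched in a few lines. The Hebbian-style update below is a generic textbook toy of synaptic plasticity, not LeCun’s actual proposal, and all names are illustrative:

```python
# Contrast between frozen-weight inference and a "plastic" weight update.
# `w` stands in for one model parameter out of billions.
w = 0.5  # fixed at training time

def infer(x):
    # A deployed LLM only *reads* its weights at inference time;
    # nothing it experiences changes them.
    return w * x

def hebbian_update(weight, pre, post, lr=0.1):
    # Toy "plastic synapse": the weight itself changes with experience
    # (pre- and post-synaptic activity), which standard LLM inference
    # does not do.
    return weight + lr * pre * post

y = infer(2.0)                       # 1.0, and w is unchanged afterwards
w_new = hebbian_update(w, 2.0, 1.0)  # ≈ 0.7: experience altered the weight
```

          In-context learning blurs this a little (the prompt acts like temporary memory), but the parameters themselves stay frozen, which is the distinction being drawn.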

  • youmaynotknow@lemmy.zip · 1 hour ago

    Yay, another BS data-grabbing AI slop company, because we can’t have enough of that shit.