This article describes what I've been thinking about for the last week: how will these billions in investments by big tech actually create something significantly better than what we already have today?

There are major issues ahead, and I'm not sure they can be solved. Read the article.

  • proceduralnightshade@lemmy.ml · 1 day ago

    tl;dr AI companies are slowly running out of data to train their models; synthetic data is not a viable alternative.

    I can't remember where I saw it, but someone on YouTube suspected the next step for OpenAI and the like would be to collect user data directly: recording users' conversations and using that data to train models further.

    If I find the vid I will add a link here.
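
    For what it's worth, the mechanics of that wouldn't be exotic. A rough sketch of how logged conversations could be turned into training data (the log format and field names below are invented, not anyone's actual pipeline):

    import json

    # Each logged exchange is a list of role/content turns (format is an assumption).
    conversations = [
        [
            {"role": "user", "content": "How do I sort a list in Python?"},
            {"role": "assistant", "content": "Use sorted(my_list) or my_list.sort()."},
        ],
    ]

    # Write one training record per conversation, JSONL-style.
    with open("finetune_data.jsonl", "w") as f:
        for convo in conversations:
            f.write(json.dumps({"messages": convo}) + "\n")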

    • 1984@lemmy.todayOP · 1 day ago

      Yeah that would be the logical end game since companies have invested billions into this trend now.

  • some_kind_of_guy@lemmy.world · 2 days ago

    I wonder if AI applications other than just "be a generalist chat bot" would run into the same thing. I'm thinking about pharma, weather prediction, etc. They would still have to "understand" their English-language prompts, but LLMs can do that just fine today, and could feed systems designed to iteratively solve problems in those areas. A model feeding into itself or other models doesn't have to be a bad thing.
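
    Roughly what I have in mind, as a toy sketch: the LLM only translates the English prompt into structured parameters, and a purpose-built system does the real work (call_llm and run_weather_model are made-up stand-ins here, not real APIs).

    import json

    def call_llm(prompt: str) -> str:
        # Stand-in for any chat-completion API; its only job is turning
        # English into structured parameters.
        return '{"task": "forecast", "location": "Oslo", "horizon_hours": 48}'

    def run_weather_model(location: str, horizon_hours: int) -> str:
        # Stand-in for a real numerical weather prediction system.
        return f"{horizon_hours}h forecast for {location}: ..."

    params = json.loads(call_llm("What will the weather in Oslo do this weekend?"))
    print(run_weather_model(params["location"], params["horizon_hours"]))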

    • Optional@lemmy.world · 2 days ago

      Only in the sense that those "words" they know are pointers to likely connected words. If the concepts line up the same way, then in theory it's all good. But beyond FAQs and such, I'm not seeing anything that would indicate it's ready for anything more.
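
      To make the "pointers" idea concrete, a toy example with invented vectors: relatedness is just closeness in vector space, nothing more.

      import numpy as np

      # Made-up 3-d vectors purely for illustration.
      vectors = {
          "drug": np.array([0.9, 0.1, 0.3]),
          "compound": np.array([0.8, 0.2, 0.4]),
          "umbrella": np.array([0.1, 0.9, 0.2]),
      }

      def cosine(a, b):
          return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

      print(cosine(vectors["drug"], vectors["compound"]))  # close: "connected" words
      print(cosine(vectors["drug"], vectors["umbrella"]))  # far: unrelated words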

  • Xaphanos@lemmy.world · 2 days ago

    My company is in AI. One of our customers pays for systems capable of the hard computational work needed to design drugs to treat Parkinson's. This is only newly possible with the latest technology.

    • MysteriousSophon21@lemmy.world · 1 day ago

      This is actually one of the most promising applications: AI can screen millions of potential drug compounds and predict protein interactions in hours instead of months, which is why we're seeing breakthroughs in neurodegenerative disease research.
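
      As a cartoon of what that screening step looks like (predict_binding_affinity is a made-up stand-in for whatever trained model a real pipeline would use): score everything cheaply, then hand only the top hits to slower, more accurate methods.

      def predict_binding_affinity(smiles: str) -> float:
          # Stand-in for a trained property-prediction model.
          return float(len(smiles) % 7) / 7.0

      candidates = ["CCO", "CC(=O)OC1=CC=CC=C1C(=O)O", "C1=CC=CC=C1"]

      # Rank candidates by predicted score and keep the best few.
      ranked = sorted(candidates, key=predict_binding_affinity, reverse=True)
      print(ranked[:2])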

      • altkey@lemmy.dbzer0.com · 1 day ago

        That's probably machine learning, the root category of tools and the origin of LLMs, not the large language models themselves that we call "AI". ML has many applications it is genuinely efficient at, explored gradually since the 80s I believe, while the AI boom involving Google, Meta, OpenAI and others is about generalist chatbots that are bad at just about everything they're used for. I'm making that distinction not because I'm an ass, but because I don't want the hype wave to get more credibility on the back of real scientific and technological progress.

        • Womble@lemmy.world · 1 day ago

          It depends: if they use a transformer- or diffusion-based architecture, I think it would be fair to include it in the same "AI wave" that's been breaking since ChatGPT was released publicly.