• Lvxferre [he/him]@mander.xyz · 1 day ago

    I’m not. You can’t lose trust in something if you never trusted it to begin with.

    I. Talent churn reveals short AGI timelines are wish, not belief

    Trying to build AGI out of LLMs and similar is like trying to build a house by randomly throwing bricks. No cement, no foundation, just bricks. You might get some interesting formations of bricks, sure. But you won’t get a house.

    And yes, of course they’re bullshitting with all this “AGI IS COMING!” talk. Odds are the people in charge of those companies know the above. But lying for your own benefit, when you know the truth, is called “marketing”.

    II. The focus on addictive products shows their moral compass is off

    “They”, who? Chatbots are amoral, period. Babbling about their moral alignment is like calling your hammer or chainsaw morally good or bad. It’s a tool, dammit; treat it as such.

    And when it comes to the businesses, their moral alignment is a simple “money good, anything between money and us is bad”.

    III. The economic engine keeping the industry alive is unsustainable

    Pretty much.

    Do I worry that the AI industry is a quasi-monopoly? No, I don’t understand what that means.

    A quasi-monopoly, in a nutshell, is when a single entity or group of entities has unreasonably large control over a certain industry or market, even without being an “ackshyual” monopoly yet.

    A funny trait of the fake free-market capitalists O’Reilly warns us about is that their values are always very elevated and pure, but only hold until the next funding round.

    That’s capitalism. “I luuuv freerum!” until it gets in the way of the money.

    IV. They don’t know how to solve the hard problems of LLMs

    Large language models (LLMs) still hallucinate. Over time, instead of treating this problem as the pain point it is, the industry has shifted to “in a way, hallucinations are a feature, you know?”

    Or rather, they shifted the bullshit. They already knew it was an insolvable problem…

    …because hallucinations are simply part of the LLM doing what it’s supposed to do. It doesn’t understand what it’s outputting; it doesn’t know if glue is a valid thing to add to a pizza, or if humans should eat rocks. It’s simply generating text based on the corpus fed into it, plus some weighting.
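
    To make that concrete, here’s a toy sketch of why “glue on pizza” can come out of a system working exactly as designed. The bigram table and probabilities below are made up for illustration; real models learn weights over huge vocabularies, but the principle is the same:

    ```python
    import random

    # Made-up continuation table: which token tends to follow a given pair
    # of tokens in the training corpus, and how often. "glue" is here
    # because it occurred in the corpus, not because it's edible.
    corpus_weights = {
        ("spread", "some"): {"cheese": 0.5, "sauce": 0.3, "glue": 0.2},
    }

    def next_token(context):
        """Sample a continuation; plausibility comes from frequency, not truth."""
        dist = corpus_weights[context]
        return random.choices(list(dist), weights=list(dist.values()))[0]

    print("spread some", next_token(("spread", "some")))  # sometimes "glue"
    ```

    Nothing in that loop checks whether the continuation is true, safe, or sane; a “hallucination” is just the sampler doing its job.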

    V. Their public messaging is chaotic and borders on manipulative

    O rly.

    Stopped reading here. It’s stating the obvious, and still missing the point.

    • Fandangalo@lemmy.world · 2 hours ago

      The moral compass bit is hilarious. Large swaths of tech haven’t had morals in business for decades, if ever. See Google’s “don’t be evil” canary being gone. People have fully drunk the “greed is good” kool-aid for years.

  • iAvicenna@lemmy.world · 1 day ago

    LLMs are good for getting keywords that may be relevant to a topic you’re interested in (but don’t know much about), so you can then search for it in a more targeted manner. Unfortunately Google has become so bad that an LLM actually gives more relevant answers to vague questions. The keywords in those answers then often get you where you want to go with more research, and overall shorten the time to get there.

    For instance, if you ask it a simple coding question, it generally suggests the correct functions, libraries, etc. to use, even if the code itself is buggy. So it makes for a good starting point for your search.
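
    As a rough sketch of that workflow, assuming the `openai` Python client with an API key in the environment (the model name and prompt are illustrative, not recommendations):

    ```python
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def search_keywords(vague_question):
        """Ask the model for keywords to search, not for a final answer."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative; any chat model would do
            messages=[
                {"role": "system",
                 "content": "Reply with up to 5 search keywords, function "
                            "or library names relevant to the question. "
                            "No prose."},
                {"role": "user", "content": vague_question},
            ],
        )
        return response.choices[0].message.content

    print(search_keywords("How do I read a huge CSV in Python lazily?"))
    ```

    The names it returns still get verified against real documentation; that’s where the actual answer comes from.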

    That being said, I’m not sure this is worth the long-term damage the AI industry might cause.

    • Lvxferre [he/him]@mander.xyz · 1 day ago

      It’s basically my experience with translation, too: asking an LLM is a decent way to find potential translations for a specific problematic word, so you can look them up in a dictionary and see which one fits best. It’s also a decent way to generate simple conjugation/declension tables. But once you tell it to translate any chunk of meaningful text, there’s a high chance it’ll shit itself and output something semantically, pragmatically, and stylistically bad.