Big tech boss tells delegates at Davos that broader global use is essential if technology is to deliver lasting growth

  • frog_brawler@lemmy.world
    3 hours ago

    I’m going to express a perspective that goes against a lot of the Lemmy hive-mind, so I’m sure I’ll be downvoted, but here goes anyway cause I don’t give no fucks about downvotes…

    When I first heard about LLMs (around 2021 I think), I was pretty neutral on it. “Meh, sounds like some annoying bullshit to piss off customers and attempt to take call-center jobs away. Probably going to flop…” was the perspective I had around the time of my first introduction.

    Around 2022 or 2023, the place where I was working started heavily pushing people to use it. I was a Cloud Engineer at the time. There wasn’t a lot of justification as to “why” we should be using it, other than, “it’ll make things easier.” My org pushing large numbers of engineers and developers into using something without demonstrating an actual benefit, or a reason why it would help, immediately raised red flags in my brain and made me suspicious of it. My neutrality shifted into an ANTI-AI sentiment.

    By the end of 2023 (or so), I was pretty vehemently against AI. I don’t need to articulate on that too much, we all know the reasons why. Towards the end of last year, I found myself in a weird spot of starting a business and didn’t know wtf I was doing, at all. That was the first time that I had a good experience with using Claude. It guided me through the process of creating an LLC, and a bunch of other bullshit that’s associated with that. I’ve had a few good experiences with it for the past few months… it’s a lot better than it was.

    At this point, I’ve come to the conclusion that a large problem with AI was what happened in (what I’m calling) the early days (2020-2021), of pushing people to adopt some bullshit that was wholly unsubstantiated, and quite frankly sucked. The expectation for a “boom” was greatly miscalculated, and it still fucking is.

    If companies were starting to push people to use it for the first time in 2025, I think we’d be having a much different conversation about AI / LLMs in 2026. I think it has some viable uses, and it does some of the technical aspects of my role substantially faster than I can do them. However, my concerns around the economic, environmental, and political implications of AI still have me maintaining the perspective that it’s more trouble than it’s worth. In short, “AI isn’t all bad on its own… the system we’re trying to add AI to is already fucked, and AI is making it worse.”

    • Zoutpeper@lemmy.world
      2 hours ago

      No, the problem is that LLMs keep being shoved into products and processes that they are not suited for.

      Why the fuck does a fridge need a text-prediction model? Or Notepad, or even Google, when it isn’t useful for the most basic searches?

      It has the economics of a bubble, is making our computing extremely expensive, and in most cases degrades my experience. Obviously they are running into a wall.

      LLMs used for the right purpose can be very useful! They will, however, not lead to AGI.

      Much like my underwear doesn’t need a Bluetooth connection, most of these products don’t need an LLM.

      And we’re not even talking about how there are people in jail for a fraction of the piracy these companies committed, or the fact that they used people’s private data to train these models. Based on how IP law is wielded against people, we should get a payout for every single time an LLM is queried.

      • frog_brawler@lemmy.world
        2 hours ago

        It’s being shoved down everyone’s throats for the same reasons it was shoved down everyone’s throats 6 years ago… they need to train this shit so they can replace everyone.

        It has the economics of a bubble, is making our computing extremely expensive, and in most cases degrades my experience. Obviously they are running into a wall.

        This screams fraud more than anything to me. It feels more like Enron than pets.com, but maybe it’s a bubble. I won’t argue with you there. The rest of this statement, I agree with.

        They will however not lead to AGI.

        I’m not confident about that. How are you confident about that? I’m also not saying you’re wrong… but I don’t think you know this.

        And we’re not even talking about how there are people in jail for a fraction of the piracy these companies committed, or the fact that they used people’s private data to train these models. Based on how IP law is wielded against people, we should get a payout for every single time an LLM is queried.

        Fully agree with you.