Interesting piece. The author claims that LLMs like Claude and ChatGPT are mere interfaces for the same kind of algorithms that corporations have been using for decades and that the real “AI Revolution” is that regular people have access to them, where before we did not.

From the article:

Consider what it took to use business intelligence software in 2015. You needed to buy the software, which cost thousands or tens of thousands of dollars. You needed to clean and structure your data. You needed to learn SQL or Tableau or whatever visualization tool you were using. You needed to know what questions to ask. The cognitive and financial overhead was high enough that only organizations bothered.

Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.
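
To make the article’s contrast concrete, here is a minimal sketch of the two workflows it is comparing. Everything in it is invented for illustration: the table, the data, and the `ask_llm` helper come from neither the article nor any particular product.

```python
import sqlite3

# The 2015 workflow the article describes: you have to structure the data,
# know the schema and the query language, and know the right question.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, revenue REAL, order_date TEXT)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [("West", 1200.0, "2015-03-01"), ("East", 800.0, "2016-07-15")],
)
query = """
    SELECT region, SUM(revenue) AS total_revenue
    FROM orders
    WHERE order_date >= '2015-01-01'
    GROUP BY region
    ORDER BY total_revenue DESC
"""
for region, total in conn.execute(query):
    print(region, total)

# The conversational workflow: the "query" is just plain English.
# ask_llm is a hypothetical stand-in for whatever chat interface or API is used.
def ask_llm(prompt: str) -> str:
    return "(model response would appear here)"

print(ask_llm("Which regions brought in the most revenue since 2015?"))
```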

    • Telorand@reddthat.com · 19 hours ago

      No, they’re not. That’s just the claptrap the billionaire Tech Bros want you to believe: “Ooo, AGI is just around the corner! Buy in now to get it first! Ooo!”

      They just have access to militarized versions with specialized LoRAs and no restraints. It’s not anything beyond what’s possible for regular people right now; it’s just that regular people will never get access to the kind of training data needed to achieve the same results (not that the government should be able to, either).

      • Big Bolillo@mgtowlemmy.org · 18 hours ago

        Some time ago, I read somewhere that the CIA apparently already has a wireless brain–digital interface, supposedly capable of working over long distances. I believe that could be at least 30 years beyond anything publicly known.

        I wouldn’t be surprised if they are already installing LLMs directly into human brains.

        • Telorand@reddthat.com · 18 hours ago

          Okay. Claims are not evidence. “I read it somewhere” is not even close to substantial, because anyone can write anything they want on the Internet. Without evidence or even consensus amongst experts, it just sounds like a conspiracy theory.

          The CIA is often the bogeyman, because they do lots in secret, and the government is inherently untrustworthy. That doesn’t mean they have wireless brain interfaces, however.

          • Big Bolillo@mgtowlemmy.org · 18 hours ago

            Check out the FOIA website with keywords like remote viewing, telepathy, MK-Ultra, and Stargate; there’s already a bunch of released documents, though unfortunately the majority of them are excessively redacted. Most are from around the ’80s to the ’00s, and it’s been a long road since then. That’s why I wouldn’t be surprised if they are already installing LLMs into human brains.

            I believe I read about the human–digital interface in a leaked, unredacted document I found somewhere on the deep web, but as you say, anyone can claim anything online and there’s no way to prove it.

            • Telorand@reddthat.com · 15 hours ago

              Human–digital interfaces aren’t a secret, but other things, like remote viewing, have been known about for a long time, and they were failures. There’s even a whole movie about it called The Men Who Stare at Goats. Pointing to a few examples of actual conspiracies or weird projects doesn’t mean every claim has validity. It just means the government is generally untrustworthy, but that also means you have to take each claim individually, in practice. You can’t just generalize and say, “the government is untrustworthy, therefore believe the opposite of anything it says.” That’s being reactive, not skeptical.

              That’s not to say there isn’t scary tech out there (it’s been demonstrated that they can not only see but also hear conversations through walls by interpreting Wi-Fi signals), but it’s all very much within the realm of science, not the paranormal.

    • AutistoMephisto@lemmy.world (OP) · 19 hours ago

      I don’t really think that’s true, and honestly, idk why people here think this is all a bad take. It’s real simple: for decades, corporations and institutions have had the upper hand. They have vast resources to spend on computational power, enterprise software, and algorithms to keep things asymmetrically efficient. Algorithms don’t sleep, they don’t get tired, and they follow one creed: ABO, Always Be Optimizing. But that software costs a lot of money, and you have to know all this other stuff to use it correctly. Then along comes the language model. Suddenly, you just talk to the computer the way you’d talk to another human, and you get what you ask for.

      • Telorand@reddthat.com · 19 hours ago

        Then along comes the language model. Suddenly, you just talk to the computer the way you’d talk to another human, and you get what you ask for.

        That’s not at all how LLMs work, and that’s why people are saying this whole premise is a bad take. Not only do LLMs get things wrong, they sometimes fabricate answers outright; they do this because they’re pattern-generation engines, not database parsers. Algorithms don’t do that: they digest a set of information and return a subset of that information.
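
        A toy sketch of that difference (nothing here resembles how any real model is built; the records and the tiny vocabulary are made up): a conventional algorithm can only filter and return pieces of its input, while a generative step samples from a learned distribution and can happily emit statements that appear in no source at all.

        ```python
        import random

        records = ["Alice owes $40", "Bob owes $15", "Carol owes $90"]

        # A conventional algorithm: digest the input, return a subset of it.
        def debts_over(records: list[str], threshold: int) -> list[str]:
            return [r for r in records if int(r.split("$")[1]) > threshold]

        print(debts_over(records, 30))  # can only ever return records it was given

        # A cartoonishly simplified "generative" step: sample words by made-up
        # frequencies. The output is plausible-sounding, not guaranteed true.
        vocab = ["Alice", "Bob", "Carol", "Dave", "owes", "$40", "$15", "$90", "$120"]
        weights = [3, 2, 2, 1, 5, 2, 1, 1, 1]

        def generate(n_words: int = 3) -> str:
            return " ".join(random.choices(vocab, weights=weights, k=n_words))

        print(generate())  # can emit "Dave owes $120", which no record says
        ```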

        Also, so what if algorithms cost a lot of money? That’s not really an argument for why LLMs level the playing field. They’re not analogous to each other, and the LLMs being foisted on the unassuming public by the billionaires are certainly not some kind of power leveler.

        Furthermore, it takes a fuckton more processing resources to run an LLM than a conventional algorithm, and that’s just counting cycles. If we go beyond cycles, the relative power needed to solve the same problem with an LLM versus an algorithm isn’t even close. There’s an entire branch of mathematics dedicated to algorithm analysis and optimization, but you’ll find no such thing for LLMs, because they’re not remotely the same.
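
        For a sense of scale, a back-of-envelope sketch (every number here is an assumption; the ~2 × parameters FLOPs-per-token figure is a common rough approximation for a transformer forward pass, and the model and prompt sizes are made up):

        ```python
        # Order-of-magnitude comparison: answering "which of these N records match?"
        # with a simple scan vs. running it through an LLM.

        n_records = 1_000_000
        scan_ops = n_records             # roughly one comparison per record

        params = 7e9                     # assumed 7B-parameter model
        tokens = 500                     # assumed prompt + answer length
        llm_flops = 2 * params * tokens  # common forward-pass approximation

        print(f"linear scan:   ~{scan_ops:.1e} comparisons")
        print(f"LLM inference: ~{llm_flops:.1e} FLOPs")
        print(f"ratio:         ~{llm_flops / scan_ops:.0e}x")
        ```

        Comparisons and FLOPs aren’t the same unit, but the gap is so many orders of magnitude wide that the mismatch doesn’t change the conclusion.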

        No, at the end of the day, all we have are fancy chatbots that hallucinate basic facts, not especially different from the annoying virtual assistants of a few years ago.

        • AutistoMephisto@lemmy.world (OP) · 18 hours ago

          Also, so what if algorithms cost a lot of money? That’s not really an argument for why LLMs level the playing field.

          It’s not just the money. It’s the knowledge and expertise needed to use the algorithms, at all. It’s knowing how to ask the algorithm for the information you want in a way that it can understand, and knowing how to visualize the data points it gives. As you said, there’s an entire field of mathematics dedicated to algorithm analysis and optimization. Not everyone has the time, energy, and attention to learn that stuff. I sure don’t, but damn if I am tired of having to rely on “Zillow and a prayer” if I want to get a house or apartment.

          • Telorand@reddthat.com · 15 hours ago

            It’s not just the money. It’s the knowledge and expertise needed to use the algorithms, at all…Not everyone has the time, energy, and attention to learn that stuff.

            I agree. That does not mean LLMs are leveling the playing field for people who can’t or won’t get an education in computer science (and let’s not forget that most algorithms don’t just appear; they’re crafted over time). LLMs are easy, but they are not better or even remotely equivalent. It’s like saying, “Finally, the masses can tell a robot to build them a table,” and declaring that the expertise of those “elite” woodworkers is no longer needed.

            …damn if I am tired of having to rely on “Zillow and a prayer” if I want to get a house or apartment.

            And this isn’t a problem LLMs can solve. I feel for you, I do. We’re all feeling this shit, but this is a capitalism problem. Until the ultracapitalists who are making these LLMs (OpenAI, Google, Meta, xAI, Anthropic, Palantir, etc.) are no longer the drivers of machine learning, and until ultracapitalist companies stop using AI or algorithms to decide who gets what prices, loans, rental rates, healthcare, and so on, we will not see the kind of level playing field you or the author are wishing for.

            You’re looking at AI, ascribing to it features and achievements it doesn’t deserve, and then wishing, against all the evidence, that it’s solving capitalism. It’s very much not, and if anything, it’s only exacerbating the problems capitalism causes.

            I applaud your optimism—I was optimistic about it once, too—but it has shown, time and again, that it won’t lead to a society not governed by the endless chasing of profits at the expense of everyone else; it won’t lead to a society where the billionaires and the rest of us compete on equal footing. What we regular folk have gotten from them will not be their undoing.

            If you want a better society where you don’t have to claw the most meager of scraps from the hand of the wealthy, it won’t be found here.

            • AutistoMephisto@lemmy.world (OP) · 13 hours ago

              I’ll say one thing for this post and the resulting discussion: it’s sent me down the rabbit hole that is AI price-fixing. How else can it be that the number of available residences went up, but rents did too? And so did everything else?

    • Artisian@lemmy.world · 19 hours ago

      We haven’t invested sufficiently in them for this to be plausible. Their incentives haven’t been to get very far ahead, either.