Interesting piece. The author claims that LLMs like Claude and ChatGPT are mere interfaces to the same kinds of algorithms that corporations have been using for decades, and that the real “AI Revolution” is that regular people now have access to them when before we did not.

From the article:

Consider what it took to use business intelligence software in 2015. You needed to buy the software, which cost thousands or tens of thousands of dollars. You needed to clean and structure your data. You needed to learn SQL, or Tableau, or whatever visualization tool you were using. You needed to know what questions to ask. The cognitive and financial overhead was high enough that only organizations bothered.

Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.
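
To make the contrast concrete, here is a minimal sketch of that three-step workflow, using Python’s built-in sqlite3 module and a hypothetical sales table (all names and numbers are invented for illustration):

    # The 2015-era overhead in miniature: structure the data, know a
    # query language, know the right question. Table and data are
    # hypothetical.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("East", 1200.0), ("West", 950.0), ("East", 430.0)],
    )

    # Even with the data structured, you still had to know what to
    # ask and how to phrase it in SQL.
    query = """
        SELECT region, SUM(amount) AS total
        FROM sales
        GROUP BY region
        ORDER BY total DESC
    """
    for region, total in conn.execute(query):
        print(region, total)

The conversational interface collapses all of that into a single sentence: “Which region sold the most?”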

    • AutistoMephisto@lemmy.world (OP):

      Is it, though? Consider that many organizations, both private and public, have been using algorithms since the 1990s, long before most people knew what an algorithm was. They had entire departments dedicated to running optimization algorithms. Amazon has algorithms deciding what products to show you, what prices to charge, and how to route packages. Airlines have algorithms that adjust ticket prices hundreds of times a day based on data you didn’t even know existed, and health insurance companies have actuarial models that process millions of data points to decide your rates. And what have you got? A web browser with multiple tabs open, a spreadsheet program, and Google Search. Seems like a rather one-sided fight, no?
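
To picture what “adjusting prices hundreds of times a day” can look like in miniature, here is a toy repricing rule (every number and threshold is made up; real revenue-management systems are vastly more sophisticated):

    # A toy sketch of rule-based dynamic pricing. Thresholds and
    # multipliers are invented for illustration only.
    def reprice(base_fare: float, seats_left: int, days_to_departure: int) -> float:
        """Raise the fare as seats sell out and departure approaches."""
        scarcity = 1.0 + 0.03 * max(0, 20 - seats_left)         # fewer seats -> higher fare
        urgency = 1.0 + 0.05 * max(0, 14 - days_to_departure)   # closer date -> higher fare
        return round(base_fare * scarcity * urgency, 2)

    print(reprice(200.0, seats_left=5, days_to_departure=3))  # -> 449.5

Run something like that against a live feed of bookings and competitor fares and you get the one-sided fight the commenter describes: the seller reprices continuously while the buyer only ever sees a single number.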