Interesting piece. The author claims that LLMs like Claude and ChatGPT are mere interfaces for the same kind of algorithms that corporations have been using for decades and that the real “AI Revolution” is that regular people have access to them, where before we did not.
From the article:
Consider what it took to use business intelligence software in 2015. You needed to buy the software, which cost thousands or tens of thousands of dollars. You needed to clean and structure your data. You needed to learn SQL or Tableau or whatever visualization tool you were using. You needed to know what questions to ask. The cognitive and financial overhead was high enough that only organizations bothered.
Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.
Directionally correct, but leveling the playing field requires self-hosted agentic models that can compete with the automation running on the corporate side, and that is not obvious. It will be a new equilibrium: maybe a few more hours of poorly done work by a handful of consumers is enough to break some monopolies. Or maybe everyone ends up attached to OpenAI compute, and we've just gained a new middleman for most interactions.