Interesting piece. The author claims that LLMs like Claude and ChatGPT are mere interfaces for the same kind of algorithms that corporations have been using for decades and that the real “AI Revolution” is that regular people have access to them, where before we did not.
From the article:
Consider what it took to use business intelligence software in 2015. You needed to buy the software, which cost thousands or tens of thousands of dollars. You needed to clean and structure your data. You needed to learn SQL or Tableau or whatever visualization tool you were using. You needed to know what questions to ask. The cognitive and financial overhead was high enough that only organizations bothered.
Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.


The main post is already badly downvoted, so I probably shouldn’t even bother to engage, but this whole article just shows a lack of knowledge of the subject. So here goes nothing:
Millennia*. We can run algorithms without computers, so the first algorithm was run way earlier than decades ago. And corporations certainly were invented before the last century.
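To make the “millennia” point concrete (my own example, not anything from the article): Euclid’s GCD algorithm is more than two thousand years old and runs just fine without a computer. A minimal sketch:

```python
# Euclid's algorithm, from the Elements (~300 BC) -- an algorithm older than
# computers, corporations, and "decades" by a comfortable margin.
def gcd(a: int, b: int) -> int:
    while b:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21, computable with a stylus and a wax tablet
```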
I suppose the author has never used any of the price-watching websites that existed before 2022. I also question how they think a price-optimization algorithm helps a person who is trying to buy, not sell, something.
You still need to structure your data, because the LLM still has to understand its structure. In fact, it is easy enough to make an LLM misinterpret data that inconsistently structured input is just asking for problems (see the sketch below)… not that LLMs are consistent anyway. The very existence of prompt engineering means the interface isn’t just conversation.
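Here is a toy sketch of what I mean (the records, field names, and prompt are all invented for illustration, not taken from the article): the same product arrives in three inconsistent shapes, and it has to be normalized before any model, LLM or otherwise, can compare it reliably.

```python
# Toy example: three records for the same product, in three different shapes.
raw_records = [
    {"price": "$12.99", "item": "olive oil 1L"},
    {"Item": "Olive Oil, 1 litre", "cost_eur": 11.50},
    "olive oil;1l;12,30",
]

def normalize(record):
    """Coerce each inconsistent record into one consistent schema."""
    if isinstance(record, str):                      # semicolon-delimited row
        name, size, price = record.split(";")
        return {"item": f"{name} {size}", "price": float(price.replace(",", "."))}
    if "cost_eur" in record:                         # alternate field names
        return {"item": record["Item"], "price": float(record["cost_eur"])}
    return {"item": record["item"], "price": float(record["price"].lstrip("$"))}

cleaned = [normalize(r) for r in raw_records]
prompt = "Which of these is cheapest?\n" + "\n".join(
    f"- {r['item']}: {r['price']:.2f}" for r in cleaned
)
print(prompt)  # only now is the input consistent enough to hand to an LLM
```

That normalization step is exactly the “clean and structure your data” overhead the article claims has disappeared.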
Oh, OK, better just stop worrying about that compliance paperwork, because the author says it’s worthless. Just dump the crude oil directly on top of the nice ducks; no point in even trying to only spill it into their pond.
Compliance tasks are actually the most important part of the work: they are what guarantee your work has worth. Otherwise you’re just an LLM – sometimes producing OK results, but always wasting resources.
Once again with the poor examples. If you can’t give a thoughtful performance review for the people who work below you, you’re just horrible at your job. Performance reviews aren’t just crunching some numbers and handing out gold stars. Maybe sometime in the future I could pipe every quick chat I’ve had with coworkers in the office into an LLM and tell it to consider them when generating a review, but that isn’t possible today. So no, performance reviews do actually require thought. Status emails and quarterly reports can be basically summaries of existing data, so maybe they don’t require much thought, but they still require some. This is demonstrated by the amount of clearly LLM-generated content that has become infamous at this point for containing inaccurate info. LLMs can’t think, but a thinking human could have reviewed that output and stopped it from ever reaching anyone else.
This is very much giving me the impression the author doesn’t like telling others what they’re doing. They’d rather work alone and without interruption. I worry that they don’t work well in teams since they lack the willingness to communicate with their peers. Maybe one day they’ll realize that their peers can do work too and even help them.
The first search result I found for “grocery price tracker” is a local tracker started in 2022, before the LLM boom.
Searching “<country> price tracker”, I found Camel^3 (CamelCamelCamel), which is famous for Amazon price tracking, and another country-specific one whose ToS was last updated in 2018. The author is describing things that a search engine could already accomplish.
Lmao, DO NOT use an LLM for this. They are nowhere near reliable enough.
You know what? I take it all back: this is definitely proving Dystopia Inc. But seriously, that is a temporary solution to a permanent problem. Never settle for that. The real solution here is to task the LLM with sending messages to every politician and lobbyist telling them to improve the system they make for you.
Never mind the irony of calling computers doing work “algorithmic labour”; this is just nonsense. Of course things built entirely on free labour are going to be monetarily cheap. Also, feeding War and Peace into an LLM as input tokens is not the same as training the LLM on it (see the sketch below).
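For anyone who doesn’t see the difference, here is a rough sketch (it assumes the Hugging Face transformers library, a small GPT-2 checkpoint, and a local war_and_peace.txt – all my choices, not the article’s): inference leaves the model’s weights untouched, while training actually changes them.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
text = open("war_and_peace.txt").read()

# 1) "Feeding the book in" = inference: the text merely occupies context
#    tokens, no weights change, and the model retains nothing afterwards.
inputs = tok(text[:2000], return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    continuation = model.generate(**inputs, max_new_tokens=50)

# 2) Training on the book = gradient updates: the model's weights actually
#    change, which is a different operation entirely.
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)
batch = tok(text[:2000], return_tensors="pt", truncation=True, max_length=512)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
```

Only the second path makes the text part of the model; the first just borrows context-window space for a single call.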
We are seeing the actual cost of LLM usage unfold, and you’d have to be willfully ignoring it to think it was strictly monetary. The social and environmental impact is devastating. But since the original article cites sources for literally none of its claims, I won’t bother either.
No, it doesn’t, unless you somehow assume that LLMs won’t also be used against you. And you’d have to actually be dumb, or have an agenda that requires you to act dumb, to assume that.
Here’s a handy little graph to show how the author is wrong: Time to 100M users. I’m sorry, I broke my promise about not citing anything. Notice how the time spans for internet applications trend downwards as time goes on. TikTok took 9 months, and that was 7 years before ChatGPT was released. I bet the next viral app will be even faster than ChatGPT. That’s not an indicator of demand; that’s an indicator of internet accessibility. (I’m ignoring Threads, because it automatically created 100M users from existing Instagram accounts in 5 days, which is a measure of their database migration capabilities and nothing else.)
I’m sorry, what? LLMs are algorithms. The author clearly does not know what they are talking about.
I thought LLMs were supposed to be some magic interface for individuals. The author is describing institutions. You know, the thing the author started out bashing for controlling all the algorithms and using them against the common folk who didn’t have them. This is exactly the same thing; just replace “algorithm” with “AI”.
Indeed, that will be an interesting thing to see once AI can actually handle it better and for cheaper, though I wouldn’t count on it anytime soon. Don’t forget that the AI at that stage will still have to compensate the human doctors who produced the data it was trained on.
Oh, I just about hit the character limit. I guess I’ll stop there.
Remember folks, don’t let your LLM write an article arguing for replacing everyone with LLMs. All it proves is that you can be replaced by an LLM. Maybe focus on some human pursuits instead.