Interesting piece. The author claims that LLMs like Claude and ChatGPT are mere interfaces for the same kind of algorithms that corporations have been using for decades and that the real “AI Revolution” is that regular people have access to them, where before we did not.
From the article:
Consider what it took to use business intelligence software in 2015. You needed to buy the software, which cost thousands or tens of thousands of dollars. You needed to clean and structure your data. You needed to learn SQL or tableau or whatever visualization tool you were using. You needed to know what questions to ask. The cognitive and financial overhead was high enough that only organizations bothered.
Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.
The main post is already badly downvoted so I probably shouldn’t even bother to engage, but this whole article is actually just showing a lack of knowledge on the subject. So here goes nothing:
Corporations have been running algorithms for decades.
Millennia*. We can run algorithms without computers, so the first algorithm was run way earlier than decades ago. And corporations certainly were invented before the last century.
Markets weren’t inefficient because technology didn’t exist to make them efficient. Markets were asymmetrically efficient on purpose. One side had computational power. The other side had a browser and maybe some browser tabs open for comparison shopping.
I suppose the author has never used all of those price-watching websites that existed before 2022. I also question how they think a price optimization algorithm is useful to a person who is trying to buy, not sell, something.
Consider what it took to use business intelligence software in 2015. […] Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.
You still need to structure your data, because the LLM has to be able to understand that structure. In fact, it is still easy enough to cause an LLM to misinterpret data that having inconsistently structured data is just asking for problems… not that LLMs are consistent anyway. And the very existence of prompt engineering means the interface isn’t just conversation.
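A toy sketch (all field names and numbers made up) of what “you don’t need to structure your data” runs into in practice: the same figure exported three different ways, and a naive consumer - human, script, or LLM - silently drops whatever it doesn’t recognize:

```python
# Toy example: the "same" revenue figure as exported by three different tools.
# All field names and values are hypothetical.
records = [
    {"revenue": 1200, "month": "2024-01"},
    {"rev_usd": "1,350", "period": "Jan 2024"},  # string with comma, different keys
    {"sales": 1.4, "month": "2024-02"},          # thousands? units are ambiguous
]

def total_revenue(rows):
    """Naive aggregation: silently drops rows whose schema it doesn't recognize."""
    return sum(row["revenue"] for row in rows if "revenue" in row)

print(total_revenue(records))  # 1200 -- two of the three rows were silently ignored
```

Reconciling those schemas is exactly the cleaning work the article claims has disappeared.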
The moment ChatGPT became public, people started using it to avoid work they hated. Not important work. Not meaningful work. The bureaucratic compliance tasks that filled their days without adding value to anything.
Oh ok better just stop worrying about that compliance paperwork because the author says it’s worthless. Just dump that crude oil directly on top of the nice ducks, no point in even trying to only spill it into their pond.
Compliance tasks are actually the most important part of work. They are what guarantee your work has worth. Otherwise you’re just an LLM – sometimes producing ok results but always wasting resources.
People weren’t using ChatGPT to think. They were using it to stop pretending that performance reviews, status update emails, and quarterly reports required thought.
Basically, users used it to create the layer of communication that existed to satisfy organizational requirements rather than to advance any actual goal.
Once again with the poor examples. If you can’t give a thoughtful performance review for the people who work below you, you’re just horrible at your job. Performance reviews aren’t crunching some numbers and handing out gold stars. Maybe someday I’ll be able to pipe every quick chat I’ve had with coworkers in the office into an LLM and tell it to consider them when generating a review, but that isn’t possible today. So no, performance reviews do actually require thought. Status emails and quarterly reports can be mostly summaries of existing data, so maybe they don’t require much thought, but they still require some. The amount of clearly LLM-generated content that has become infamous for containing inaccurate info demonstrates this. LLMs can’t think, but a thinking human could have reviewed that output and stopped it from ever reaching anyone else.
This is very much giving me the impression the author doesn’t like telling others what they’re doing. They’d rather work alone and without interruption. I worry that they don’t work well in teams since they lack the willingness to communicate with their peers. Maybe one day they’ll realize that their peers can do work too and even help them.
You want the cheapest milk within ten miles? You can build that.
The first search result for “grocery price tracker” that I found is a local tracker started in 2022, before LLMs.
You want to track price changes across every retailer in your area? You can do that now.
From searching “<country> price tracker”, I found Camel^3 which is famous for Amazon tracking and another country-specific one which has a ToS last updated in 2018. The author is describing things that could already be accomplished with a search engine.
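And the core of such a tracker is not exactly frontier AI. Here’s a minimal sketch of the poll-and-compare loop those pre-LLM sites are built on (item names and prices are made up; a real tracker would scrape pages or poll a retailer API):

```python
# Minimal price-tracker sketch: record each observation, alert on a drop.
# Item names and prices are hypothetical.

def check_price(item, current_price, history):
    """Compare against the last seen price; return an alert string on a drop."""
    last = history.get(item)
    history[item] = current_price
    if last is not None and current_price < last:
        return f"{item}: dropped from ${last:.2f} to ${current_price:.2f}"
    return None

history = {}
check_price("milk-1gal", 4.29, history)          # first observation, no alert
print(check_price("milk-1gal", 3.99, history))   # milk-1gal: dropped from $4.29 to $3.99
```

Run it on a schedule against whatever source you have and you’ve rebuilt the thing the article treats as newly possible.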
You want something to read every clause of your insurance policy and identify the loopholes?
Lmao DO NOT use an LLM for this. They are not reliable enough for this.
You want an agent that will spend forty hours fighting a medical billing error that you’d normally just pay because fighting it would cost more in time than the bill? You can have that.
You know what? I take it all back, this is definitely proving Dystopia Inc. But seriously, that is a temporary solution to a permanent problem. Never settle for that. The real solution here is to task the LLM with sending messages to every politician and lobbyist telling them to improve the system they make for you.
The marginal cost of algorithmic labor has effectively collapsed. Using a GPT-5.2–class model, pricing is on the order of $0.25 per million input tokens and about $2.00 per million output tokens. A token is roughly three-quarters of a word, which means one million tokens equals about 750,000 words. Even assuming a blended input/output cost of roughly $1.50 per million tokens, you can process 750,000 words for about $1.50. War and Peace is approximately 587,000 words, meaning you can run an AI across one of the longest novels ever written for around a dollar. That’s not intelligence becoming cheaper. That’s the marginal cost of cognitive labor approaching zero.
Never mind the irony of calling computers doing work “algorithmic labour”; this is just nonsense. Of course things built entirely on free labour are going to be monetarily cheap. Also, feeding War and Peace into an LLM as input tokens is not the same as training the LLM on it.
We are seeing the actual cost of LLM usage unfold and you’d have to be willingly ignoring it to think it was strictly monetary. The social and environmental impact is devastating. But since the original article cites literally none of its claims, I won’t bother either.
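For the record, the quoted arithmetic itself does roughly check out at the article’s own stated prices; the dispute is over what it means, not the multiplication:

```python
# Working the quoted numbers: cost to feed War and Peace through a model
# at the article's stated prices (prices and token ratio taken from the quote).
words = 587_000                    # quoted length of War and Peace
words_per_token = 0.75             # the article's "a token is ~3/4 of a word"
tokens = words / words_per_token   # ~782,667 tokens

input_only = tokens / 1e6 * 0.25   # $0.25 per million input tokens
blended = tokens / 1e6 * 1.50      # the article's blended $1.50 per million

print(f"tokens: {tokens:,.0f}")
print(f"input-only: ${input_only:.2f}, blended: ${blended:.2f}")
```

So: about twenty cents as pure input, a bit over a dollar at the blended rate. None of which prices in the externalities above.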
Institutions built their advantages on exhaustion tactics. They had more time, more money, and more stamina than you did. They could bury you in paperwork. They could drag out disputes. They could wait you out. That strategy assumed you had finite patience and finite resources. It assumed you’d eventually give up because you had other things to do.
An AI assistant breaks that assumption.
No, it doesn’t, unless you somehow also assume that LLMs won’t also be used against you. And you’d have to actually be dumb or have an agenda that required you to act dumb to assume that.
Usage numbers tell the story clearly. ChatGPT reached 100 million monthly active users in two months. That made it the fastest-growing consumer application in history. TikTok took nine months to hit 100 million users. Instagram took two and a half years. The demand was obviously already there. People were apparently just waiting for something like this to exist.
Here’s a handy little graph to show how the author is wrong: Time to 100M users. I’m sorry, I broke my promise about not citing anything. Notice how the time spans for internet applications trend downwards as time goes on. TikTok hit its 100M nine months after launch, seven years before ChatGPT was released. I bet the next viral app will be even faster than ChatGPT. That’s not an indicator of demand; that’s an indicator of internet accessibility. (I’m ignoring Threads, because it automatically created 100M users from existing Instagram accounts in 5 days, which is a measure of Meta’s database migration capabilities and nothing else.)
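To make the trend explicit, here are the commonly reported time-to-100M figures from that graph laid out in launch order (approximate, from public reporting):

```python
# Approximate, commonly reported months-to-100M-users, in launch order.
apps = [
    ("Instagram", 2010, 30),  # ~2.5 years
    ("TikTok",    2017, 9),
    ("ChatGPT",   2022, 2),
]

# Each later launch reached 100M faster than the one before it --
# the downward trend that tracks internet accessibility, not pent-up demand.
for (name_a, _, m_a), (name_b, _, m_b) in zip(apps, apps[1:]):
    assert m_b < m_a, f"{name_b} should be faster than {name_a}"

for name, year, months in apps:
    print(f"{name} (launched {year}): ~{months} months to 100M users")
```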
Venture capital funding for generative AI companies reached $25.2 billion in 2023 according to PitchBook data. That was up from $4.5 billion in 2022. Investment wasn’t going into making better algorithms. It was going into making those algorithms accessible.
I’m sorry, what? LLMs are an algorithm. Author clearly does not know what they are talking about.
DoNotPay, an AI-powered consumer advocacy service, claimed to help users fight more than 200,000 parking tickets before the company pivoted to other services. LegalZoom reported that AI-assisted document preparation reduced the time required to create basic legal documents by 60% in 2023.
I thought LLMs were supposed to be some magic interface for individuals. The author is describing institutions. You know, the thing the author started out bashing for controlling all the algorithms and using them against the common folk who didn’t have those algorithms. This is exactly the same thing, just replace algorithm with AI.
The credential barrier still exists. You can’t get a prescription from ChatGPT. The legal liability still flows through licensed professionals. The system still requires human gatekeepers. The question is how long those requirements survive when the public realizes they’re paying $200 for a consultation that an AI handles better for pennies.
Indeed, that will be an interesting thing to see once AI can actually handle it better and for cheaper, though I wouldn’t count on it anytime soon. Don’t forget the AI at that stage will still have to compensate the human doctors who wrote the data it was trained on.
Oh, I just about hit the character limit. I guess I’ll stop there.
Remember folks, don’t let your LLM write an article arguing for replacing everyone with LLMs. All it proves is that you can be replaced by an LLM. Maybe focus on some human pursuits instead.

SQL is a visualization tool?
It’s right in the name, Structured Qisualisation Language (SQL).
Yeah, idk, I’m pretty sure the author meant “SAP”, but then why SQL?
Bad take.
Then you should be able to easily give criticisms.
Is it, though? Consider that many organizations, both private and public, have been using algorithms since the 1990s, long before most people had heard the word. They had entire departments dedicated to running optimization algorithms. Amazon has algorithms deciding what products to show you, what prices to charge, and how to route packages. Airlines have algorithms that adjust ticket prices hundreds of times a day based on data you didn’t even know existed, and health insurance companies have actuarial models that process millions of data points to decide your rates. And what have you got? A web browser with multiple tabs open, a spreadsheet program, and Google Search. Seems like a rather one-sided fight, no?
It is.
Wow. Don’t even know enough to elaborate, so you just use 2-word sentences like some asshole.
That’s right. (How are we counting contractions? Still two words?)
I’m always up for a good AI dystopia article, but this is pretty poorly written, taking a very long time to say very little new or interesting. For this reason I wouldn’t be surprised if the author used AI assistance in writing it, which would certainly tell you something about the author’s objectivity. (It has a lot of earmarks of recent-model AI essay writing, like repeated use of the rule of threes, though I admit a human could have produced it. )
The thesis appears to be that AI can be an equalizer, putting individuals on equal footing with corporate data-processing operations. But, conversely, that it may not be, because viability, quality, and reliability depend on who controls the model and whether it hallucinates in critical or non-critical ways. Thanks for the clarity, article.
None of this is new thought, just another installment in an inherently AI-normalizing line of thinking: that AI is just another democratizing technological tool (but one that could be used for evil - or good! - or evil!). The author addresses some of the AI flaws but ends almost where it began, with that flawed premise, which elides how, unlike other tools, AI actually degrades our abilities to think and communicate once we start relying on it. The article doesn’t address that communication, meaning, thought, and reliability are degraded when either individual or corporate systems integrate AI.
Instead, the author would like you to think individuals can level the playing field by using AI against corporate algorithms. And sure, a person denied a medical claim by a health insurer’s low-effort AI can now write a generic low-effort appeal, but that appeal can just as efficiently continue to be denied by better-funded AI. It’s a spurious and illusory benefit to the individual.
What truly matters and is unaffected by consumer AI use is power - political and corporate power. AI just floods the zone with more output; all of us adopting AI will change nothing about the power imbalance in our system. The solution to low-effort slop won’t be more low-effort slop - we’d just be burying ourselves deeper in it.
What truly matters and is unaffected by consumer AI use is power - political and corporate power.
Corporate algorithms gave them that power, or at least have been helping them to maintain it for decades. The article uses the very real example of RealPage, whose YieldStar software was helping landlords manage over 3 million rental properties in the US by 2022. Ultimately it took ProPublica to pull back the curtain on a computed market where an algorithm was telling landlords how much to charge tenants for a majority of the market. And even then, I don’t think it’s stopped. Landlords are still coordinating rent prices across the vast majority of rental properties, and all the common folk has to help is, like the article says, “Zillow and a prayer”.
Ultimately it took ProPublica to pull back the curtain on a computed market where an algorithm was telling landlords how much to charge tenants for a majority of the market. And even then, I don’t think it’s stopped.
This is exactly my point. The ability for companies to gouge consumers is exacerbated by algorithms, sure. But they have power because the regulatory rules are either written in their favor or not enforced.
Even exposing it as you note didn’t change it. Likewise individual consumers don’t have the ability to change it. It’s a red herring and false solution to say “AI can fix it.”
Directionally correct, but it does require self-hosted agentic models that can compete with the automation running on the corporate side. This is not obvious. It will be a new equilibrium; maybe just a few more hours of poorly done work by a handful of consumers is enough to break some monopolies. Or maybe everyone will be attached to OpenAI compute, and we’ve just gained a new middleman for most interactions.
Three-letter agencies are 50 years beyond what is publicly accessible.
No they’re not. That’s just the claptrap the billionaire Tech Bros want you to believe in. “Ooo, AGI is just around the corner! Buy in now to get it first! Ooo!”
They just have access to militarized versions through specialized LoRAs and no restraints. It’s not anything beyond what’s possible for regular people right now, it’s just that regular people will never get access to the kind of training data needed to achieve the same results (not that the government should be able to, either).
Some time ago, I read somewhere that the CIA apparently already has a wireless brain–digital interface, supposedly capable of working at long range. I believe that would put them at least 30 years ahead.
I wouldn’t be surprised if they are already installing LLMs directly into human brains.
Okay. Claims are not evidence. “I read it somewhere” is not even close to substantial, because anyone can write anything they want on the Internet. Without evidence or even consensus amongst experts, it just sounds like a conspiracy theory.
The CIA is often the bogeyman, because they do lots in secret, and the government is inherently untrustworthy. That doesn’t mean they have wireless brain interfaces, however.
Check out the FOIA website with keywords like remote viewing, telepathy, MKUltra, and Stargate; there is already a bunch of released documents, though unfortunately the majority of them are excessively redacted. Most are from around the ’80s to the ’00s, and it’s been a long road since then. That’s why I wouldn’t be surprised if they are already installing LLMs into human brains.
I believe I read about the human–digital interface in a leaked unredacted document I found somewhere on the deep web, but as you say, anyone can claim anything online and there is no way to prove it.
Human–digital interfaces aren’t a secret, but other things like remote viewing have been known about for a long time, and they were failures. There’s even a whole movie about it called The Men Who Stare at Goats. Pointing to a few examples of actual conspiracies or weird projects doesn’t mean every claim has validity. It just means the government is generally untrustworthy, which in practice means you need to take each claim individually. You can’t just generalize and say, “government untrustworthy, therefore believe the opposite of anything they say.” That’s being reactive, not skeptical.
That’s not to say that there’s not scary tech out there (it’s been demonstrated that they can not only see but hear conversations through walls by interpolating Wi-Fi signals), but it’s all very much within the realm of science, not the paranormal.
I don’t really think that’s true, because, again, idk why people here think this is all a bad take. It’s real simple. For decades, corporations and institutions have had the upper hand. They have vast resources to spend on computational power, enterprise software, and algorithms to keep things asymmetrically efficient. Algorithms don’t sleep, don’t get tired; they follow one creed: ABO, Always Be Optimizing. But that software costs a lot of money, and you have to know all this other stuff to use it correctly. Then along comes the language model. Suddenly, you just talk to the computer the way you’d talk to another human, and you get what you ask for.
Then along comes the language model. Suddenly, you just talk to the computer the way you’d talk to another human, and you get what you ask for.
That’s not at all how LLMs work, and that’s why people are saying this whole premise is a bad take. Not only do LLMs get things wrong, they do it in such a way that it completely fabricates answers at times; they do this, because they’re pattern generation engines, not database parsers. Algorithms don’t do that, because they digest a set of information and return a subset of that information.
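That guarantee can be stated mechanically: a retrieval-style algorithm can only ever return records that were in its input, so it can be wrong by omission but can’t invent data. A toy contrast, with made-up inventory data:

```python
# A retrieval algorithm: its output is provably a subset of its input.
inventory = [
    {"store": "A", "item": "milk", "price": 3.99},
    {"store": "B", "item": "milk", "price": 4.29},
]

def cheapest(rows, item):
    """Return the lowest-priced matching row, or None. Never fabricates one."""
    matches = [r for r in rows if r["item"] == item]
    return min(matches, key=lambda r: r["price"]) if matches else None

result = cheapest(inventory, "milk")
assert result in inventory                  # the answer is guaranteed to be real data
assert cheapest(inventory, "eggs") is None  # unknown item -> None, not a guess
```

A language model asked the same question has no structural guarantee that its answer corresponds to any row at all.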
Also, so what if algorithms cost a lot of money? That’s not really an argument for why LLMs level the playing field. They’re not analogous to each other, and the LLMs being foisted on the unassuming public by the billionaires are certainly not some kind of power leveler.
Furthermore, it takes a fuckton more processing resources to run an LLM than it does an algorithm, and I’m just talking about cycles. If we went beyond just cycles, the relative power needed to solve the same problem using an LLM versus an algorithm is not even close. There’s an entire branch of mathematics dedicated to algorithm analysis and optimization, but you’ll find no such thing for LLMs, because they’re not remotely the same.
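A back-of-the-envelope comparison makes the gap concrete (the 7B parameter count and the 2-FLOPs-per-parameter-per-token rule of thumb are assumptions, not measurements):

```python
import math

# Classical algorithm: binary search over a million prices.
n = 1_000_000
comparisons = math.ceil(math.log2(n))    # ~20 comparisons

# Rough transformer inference cost: ~2 FLOPs per parameter per generated token
# (a standard rule of thumb; the 7B parameter count is an assumption).
params = 7e9
gen_tokens = 100
llm_flops = 2 * params * gen_tokens      # 1.4e12 FLOPs

print(f"binary search over {n:,} items: ~{comparisons} comparisons")
print(f"7B-param LLM generating {gen_tokens} tokens: ~{llm_flops:.1e} FLOPs")
```

Roughly twenty comparisons versus on the order of a trillion floating-point operations to answer one question.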
No, all we have are fancy chatbots at the end of the day that hallucinate basic facts, not especially different from the annoying Virtual Assistants of a few years ago.
Also, so what if algorithms cost a lot of money? That’s not really an argument for why LLMs level the playing field.
It’s not just the money. It’s the knowledge and expertise needed to use the algorithms, at all. It’s knowing how to ask the algorithm for the information you want in a way that it can understand, and knowing how to visualize the data points it gives back. As you said, there’s an entire field of mathematics dedicated to algorithm analysis and optimization. Not everyone has the time, energy, and attention to learn that stuff. I sure don’t, but damn if I am tired of having to rely on “Zillow and a prayer” if I want to get a house or apartment.
It’s not just the money. It’s the knowledge and expertise needed to use the algorithms, at all…Not everyone has the time, energy, and attention to learn that stuff.
I agree. That does not mean that LLMs are leveling the playing field with people who can’t/won’t get an education regarding computer science (and let’s not forget that most algorithms don’t just appear; they’re crafted over time). LLMs are easy, but they are not better or even remotely equivalent. It’s like saying, “Finally, the masses can tell a robot to build them a table,” and saying that the expertise of those “elite” woodworkers is no longer needed.
…damn if I am tired of having to rely on “Zillow and a prayer” if I want to get a house or apartment.
And this isn’t a problem LLMs can solve. I feel for you, I do. We’re all feeling this shit, but this is a capitalism problem. Until the ultracapitalists who are making these LLMs (OpenAI, Google, Meta, xAI, Anthropic, Palantir, etc.) are no longer the drivers of machine learning, and until the ultracapitalist companies stop using AI or algorithms to decide who gets what prices/loans/rental rates/healthcare/etc., we will not see any kind of level playing field you or the author are wishing for.
You’re looking at AI, ascribing it features and achievements it doesn’t deserve, then wishing against all the evidence that it’s solving capitalism. It’s very much not, and if anything, it’s only exacerbating the problems caused by it.
I applaud your optimism—I was optimistic about it once, too—but it has shown, time and again, that it won’t lead to a society not governed by the endless chasing of profits at the expense of everyone else; it won’t lead to a society where the billionaires and the rest of us compete on equal footing. What we regular folk have gotten from them will not be their undoing.
If you want a better society where you don’t have to claw the most meager of scraps from the hand of the wealthy, it won’t be found here.
I’ll say one thing for this post and the resulting discussion, it’s caused me to fall down the rabbit hole that is AI price fixing. How else can it be that available residences increased but so did rent? And so did everything else?
We haven’t invested sufficiently in them for this to be plausible. Their incentives haven’t been to get very far ahead either.