Interesting piece. The author claims that LLMs like Claude and ChatGPT are merely interfaces to the same kinds of algorithms corporations have been using for decades, and that the real “AI Revolution” is that regular people now have access to them where before we did not.

From the article:

Consider what it took to use business intelligence software in 2015. You needed to buy the software, which cost thousands or tens of thousands of dollars. You needed to clean and structure your data. You needed to learn SQL or Tableau or whatever visualization tool you were using. You needed to know what questions to ask. The cognitive and financial overhead was high enough that only organizations bothered.

Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.
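
To make the contrast concrete, here’s a toy version of the before and after (my own sketch, not from the article):

```python
import sqlite3

# The 2015 way: structured data plus a query language you had to learn.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE prices (store TEXT, item TEXT, price REAL, miles REAL)")
conn.executemany(
    "INSERT INTO prices VALUES (?, ?, ?, ?)",
    [("FoodMart", "milk", 3.49, 2.1), ("SuperSave", "milk", 2.99, 8.4)],
)
print(conn.execute(
    "SELECT store, MIN(price) FROM prices WHERE item = 'milk' AND miles <= 10"
).fetchone())  # ('SuperSave', 2.99)

# The claimed 2023 way: the "query" is just a sentence.
prompt = "Where can I buy the cheapest milk within ten miles?"
```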

  • NGram@piefed.ca

    The main post is already badly downvoted so I probably shouldn’t even bother to engage, but this whole article is actually just showing a lack of knowledge on the subject. So here goes nothing:

    Corporations have been running algorithms for decades.

    Millennia*. We can run algorithms without computers, so the first algorithm was run way earlier than decades ago. And corporations certainly were invented before the last century.
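
    Case in point: Euclid’s algorithm for greatest common divisors dates to around 300 BC and runs fine with nothing but a stick and some sand. In Python, for the computer-inclined:

    ```python
    def gcd(a: int, b: int) -> int:
        """Euclid's algorithm (c. 300 BC): swap in the remainder
        until it hits zero; the survivor is the GCD."""
        while b:
            a, b = b, a % b
        return a

    print(gcd(1071, 462))  # 21
    ```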

    Markets weren’t inefficient because technology didn’t exist to make them efficient. Markets were asymmetrically efficient on purpose. One side had computational power. The other side had a browser and maybe some browser tabs open for comparison shopping.

    I suppose the author has never used all of those price-watching websites that existed before 2022. I also question how they think a price optimization algorithm is useful to a person who is trying to buy, not sell, something.

    Consider what it took to use business intelligence software in 2015. […] Language models collapsed that overhead to nearly zero. You don’t need to learn a query language. You don’t need to structure your data. You don’t need to know the right technical terms. You just describe what you want in plain English. The interface became conversation.

    You still need to structure your data, because the LLM has to be able to understand that structure. In fact, it is still easy enough for an LLM to misinterpret data that inconsistently structured input is just asking for problems… not that LLMs are consistent anyway. And the very existence of “prompt engineering” means the interface isn’t just conversation.
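
    A toy sketch of what that structuring work looks like in practice (`ask_llm` below is a stand-in for whatever client you’d actually use, not a real library):

    ```python
    # Two records describing the same kind of thing, structured inconsistently.
    messy = [
        "Order #123, paid 2024-01-05, total was $40",
        {"order": 124, "amt_usd": 35, "date_paid": "Jan 6 2024"},
    ]

    # You normalize before prompting; otherwise "total" vs "amt_usd" and two
    # date formats become the model's problem, and it will sometimes guess wrong.
    normalized = [
        {"order_id": 123, "total_usd": 40.0, "paid_on": "2024-01-05"},
        {"order_id": 124, "total_usd": 35.0, "paid_on": "2024-01-06"},
    ]

    prompt = f"What is the combined total of these orders?\n{normalized}"
    # ask_llm(prompt)  # hypothetical call, shown for illustration only
    ```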

    The moment ChatGPT became public, people started using it to avoid work they hated. Not important work. Not meaningful work. The bureaucratic compliance tasks that filled their days without adding value to anything.

    Oh ok better just stop worrying about that compliance paperwork because the author says it’s worthless. Just dump that crude oil directly on top of the nice ducks, no point in even trying to only spill it into their pond.

    Compliance tasks are actually the most important part of work. They are what guarantee your work has worth. Otherwise you’re just an LLM – sometimes producing ok results but always wasting resources.

    People weren’t using ChatGPT to think. They were using it to stop pretending that performance reviews, status update emails, and quarterly reports required thought.

    Basically, users used it to create the layer of communication that existed to satisfy organizational requirements rather than to advance any actual goal.

    Once again with the poor examples. If you can’t give a thoughtful performance review for the people who work under you, you’re just horrible at your job. Performance reviews aren’t crunching some numbers and handing out gold stars. Maybe someday I’ll be able to pipe every quick chat I’ve had with coworkers in the office into an LLM and tell it to consider them when generating a review, but that isn’t possible today. So no, performance reviews do actually require thought.

    Status emails and quarterly reports are largely summaries of existing data, so maybe they don’t require much thought, but they still require some. That much is demonstrated by the amount of clearly LLM-generated content that has become infamous at this point for containing inaccurate info. LLMs can’t think, but a thinking human could have reviewed that output and stopped it from ever reaching anyone else.

    This is very much giving me the impression the author doesn’t like telling others what they’re doing. They’d rather work alone and without interruption. I worry that they don’t work well in teams since they lack the willingness to communicate with their peers. Maybe one day they’ll realize that their peers can do work too and even help them.

    You want the cheapest milk within ten miles? You can build that.

    The first search result for “grocery price tracker” that I found is a local tracker started in 2022, before LLMs.

    You want to track price changes across every retailer in your area? You can do that now.

    From searching “<country> price tracker”, I found Camel^3 which is famous for Amazon tracking and another country-specific one which has a ToS last updated in 2018. The author is describing things that could already be accomplished with a search engine.
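
    Not to mention that a bare-bones tracker has always been a few lines of standard-library Python; none of this needed an LLM. A sketch (the URL and price pattern are placeholders):

    ```python
    import re
    import urllib.request

    URL = "https://example.com/milk-listing"   # placeholder product page
    PRICE = re.compile(r"\$(\d+\.\d{2})")      # naive price pattern

    def current_price() -> float:
        html = urllib.request.urlopen(URL).read().decode("utf-8", "replace")
        match = PRICE.search(html)
        if match is None:
            raise ValueError("no price found on page")
        return float(match.group(1))

    # Run from cron once a day, diff against the last saved price, and
    # notify yourself on a drop. That was the whole "algorithm" pre-2022.
    if __name__ == "__main__":
        print(current_price())
    ```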

    You want something to read every clause of your insurance policy and identify the loopholes?

    Lmao DO NOT use an LLM for this. They are not reliable enough for this.

    You want an agent that will spend forty hours fighting a medical billing error that you’d normally just pay because fighting it would cost more in time than the bill? You can have that.

    You know what? I take it all back, this one definitely proves we’re living in Dystopia Inc. But seriously, that is a temporary solution to a permanent problem. Never settle for that. The real solution here is to task the LLM with sending messages to every politician and lobbyist telling them to improve the system they make for you.

    The marginal cost of algorithmic labor has effectively collapsed. Using a GPT-5.2–class model, pricing is on the order of $0.25 per million input tokens and about $2.00 per million output tokens. A token is roughly three-quarters of a word, which means one million tokens equals about 750,000 words. Even assuming a blended input/output cost of roughly $1.50 per million tokens, you can process 750,000 words for about $1.50. War and Peace is approximately 587,000 words, meaning you can run an AI across one of the longest novels ever written for around a dollar. That’s not intelligence becoming cheaper. That’s the marginal cost of cognitive labor approaching zero.

    Never mind the irony of calling computers doing work “algorithmic labour”; this is just nonsense. Of course things built entirely on free labour are going to be monetarily cheap. Also, feeding War and Peace into an LLM as input tokens is not the same as training the LLM on it.
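
    To be fair, the raw arithmetic is the one part that holds up, and it’s a two-line check (rates and word counts taken from the article on faith):

    ```python
    WORDS = 587_000                   # article's word count for War and Peace
    TOKENS_PER_WORD = 1 / 0.75        # article: a token is ~3/4 of a word
    BLENDED_RATE = 1.50 / 1_000_000   # article's blended $/token figure

    tokens = WORDS * TOKENS_PER_WORD
    print(f"{tokens:,.0f} tokens, ~${tokens * BLENDED_RATE:.2f}")
    # 782,667 tokens, ~$1.17
    ```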

    We are seeing the actual cost of LLM usage unfold and you’d have to be willingly ignoring it to think it was strictly monetary. The social and environmental impact is devastating. But since the original article cites literally none of its claims, I won’t bother either.

    Institutions built their advantages on exhaustion tactics. They had more time, more money, and more stamina than you did. They could bury you in paperwork. They could drag out disputes. They could wait you out. That strategy assumed you had finite patience and finite resources. It assumed you’d eventually give up because you had other things to do.

    An AI assistant breaks that assumption.

    No, it doesn’t, unless you somehow also assume that LLMs won’t also be used against you. And you’d have to actually be dumb or have an agenda that required you to act dumb to assume that.

    Usage numbers tell the story clearly. ChatGPT reached 100 million monthly active users in two months. That made it the fastest-growing consumer application in history. TikTok took nine months to hit 100 million users. Instagram took two and a half years. The demand was obviously already there. People were apparently just waiting for something like this to exist.

    Here’s a handy little graph to show how the author is wrong: Time to 100M users. I’m sorry, I broke my promise about not citing anything. Notice how the time spans for internet applications trend downward as time goes on. TikTok’s 9 months came 7 years before ChatGPT was released. I bet the next viral app will be even faster than ChatGPT. That’s not an indicator of demand, that’s an indicator of internet accessibility. (I’m ignoring Threads, because automatically creating 100M users from existing Instagram accounts in 5 days measures database migration capability and nothing else.)

    Venture capital funding for generative AI companies reached $25.2 billion in 2023 according to PitchBook data. That was up from $4.5 billion in 2022. Investment wasn’t going into making better algorithms. It was going into making those algorithms accessible.

    I’m sorry, what? An LLM is an algorithm. The author clearly does not know what they’re talking about.

    DoNotPay, an AI-powered consumer advocacy service, claimed to help users fight more than 200,000 parking tickets before the company pivoted to other services. LegalZoom reported that AI-assisted document preparation reduced the time required to create basic legal documents by 60% in 2023.

    I thought LLMs were supposed to be some magic interface for individuals. The author is describing institutions. You know, the thing the author started out bashing for controlling all the algorithms and using them against the common folk who didn’t have those algorithms. This is exactly the same thing, just with “algorithm” replaced by “AI”.

    The credential barrier still exists. You can’t get a prescription from ChatGPT. The legal liability still flows through licensed professionals. The system still requires human gatekeepers. The question is how long those requirements survive when the public realizes they’re paying $200 for a consultation that an AI handles better for pennies.

    Indeed, that will be an interesting thing to see once AI can actually handle it better and more cheaply. Though I wouldn’t count on it anytime soon. Don’t forget that the AI at that stage will still have to compensate the human doctors who produced the data it was trained on.

    Oh, I just about hit the character limit. I guess I’ll stop there.
    Remember folks, don’t let your LLM write an article arguing for replacing everyone with LLMs. All it proves is that you can be replaced by an LLM. Maybe focus on some human pursuits instead.