
I’m not. You can’t lose trust in something you never trusted to begin with.
I. Talent churn reveals short AGI timelines are wish, not belief
Trying to build AGI out of LLMs and the like is like trying to build a house by randomly throwing bricks. No cement, no foundation, just bricks. You might get some interesting formations of bricks, sure. But you won’t get a house.
And yes, of course they’re bullshitting with all this “AGI IS COMING!” talk. Odds are the people in charge of those companies know the above. But lying for your own benefit, when you know the truth, is called “marketing”.
II. The focus on addictive products shows their moral compass is off
“They”, who? Chatbots are amoral, period. Babbling about their moral alignment is like saying your hammer or chainsaw is morally good or bad. It’s a tool, dammit; treat it as such.
And when it comes to the businesses, their moral alignment is a simple “money good, anything between money and us is bad”.
III. The economic engine keeping the industry alive is unsustainable
Pretty much.
Do I worry that the AI industry is a quasi-monopoly? No, I don’t understand what that means.
A quasi-monopoly, in a nutshell, is when a single entity or group of entities has unreasonably large control over a certain industry or market, even if it isn’t an “ackshyual” monopoly yet.
A funny trait of the fake free-market capitalist that O’Reilly warns us about is that their values are always very elevated and pure, but only hold until the next funding round.
That’s capitalism. “I luuuv freerum!” until it gets in the way of the money.
IV. They don’t know how to solve the hard problems of LLMs
Large language models (LLMs) still hallucinate. Over time, instead of treating this problem as the pain point it is, the industry has shifted to “in a way, hallucinations are a feature, you know?”
Or rather, they shifted the bullshit. They already knew it was an insolvable problem…
…because hallucinations are simply part of the LLM doing what it’s supposed to do. It doesn’t understand what it’s outputting; it doesn’t know if glue is a valid thing to add to a pizza, or if humans should eat rocks. It’s simply generating plausible-looking text based on the statistical patterns of the corpus fed into it, plus some weighting.
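To make that concrete, here’s a toy sketch of my own (not anything from the article, and nothing like a real model’s internals): count which word tends to follow which in a tiny corpus, then sample the next word from those weights. Real LLMs replace the counting with a huge neural network trained on billions of tokens, but the output step is the same kind of weighted guessing, with no notion of whether the result is true.

```python
import random
from collections import defaultdict, Counter

# Toy "language model": bigram counts over a tiny, made-up corpus.
corpus = "add cheese to the pizza . add sauce to the pizza . eat the pizza .".split()

weights = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    weights[prev][nxt] += 1          # how often `nxt` follows `prev`

def generate(start: str, length: int = 8) -> str:
    token, out = start, [start]
    for _ in range(length):
        options = weights.get(token)
        if not options:
            break
        # Weighted sampling: frequent continuations are more likely to be picked,
        # but nothing here knows what "glue" or "pizza" actually are.
        token = random.choices(list(options), weights=options.values())[0]
        out.append(token)
    return " ".join(out)

print(generate("add"))   # e.g. "add sauce to the pizza . eat the"
```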
V. Their public messaging is chaotic and borders on manipulative
O rly.
Stopped reading here. It’s stating the obvious, and still missing the point.
It’s basically my experience with translation, too: asking an LLM is a decent way to look for potential ways to translate a specific problematic word, so you can look them up in a dictionary and see which one fits best. It’s also a decent way to generate simple conjugation/declension tables. But once you tell it to translate any chunk of meaningful text, there’s a high chance it’ll shit itself and output something semantically, pragmatically, and stylistically bad.