just one more terawatt-hour of electricity and it’ll be accurate and creative i swear!!
This is a big reason why I continue to cringe whenever I hear one of the endless news stories or podcasts about how AI is going to revolutionize our society any day now. It's clear they're getting better at image generation, but text "thinking" is way too unreliable to use as a replacement for human knowledge workers, therapists, etc.
This is an increasingly bad take. If you worked in an industry where LLMs are becoming very useful, you'd realize that hallucinations are a minor inconvenience at best for the applications they're well suited for, and the tools are getting better by leaps and bounds, week by week.
edit: Like it or not, it’s true. I use LLMs at work, most of my colleagues do too, and none of us use the output raw. Hallucinations are not an issue when you are actively collaborating with the model and not using it to either “know things for you” or “do the work for you.” Neither of those things are what LLMs are really good at, but that’s what most laypeople use them for, so these criticisms are very obviously short-sighted to those of us who have real-world experience with them in a domain where they work well.
you're getting downvoted because you accurately conceive of and treat LLMs the way they should be treated: as tools. the people downvoting you don't have this perspective, because the only perspective pushed to people outside a technical career or research is "it's artificial intelligence and it will revolutionize society, but lol, it hallucinates if you ask it stuff." this is essentially propaganda, because the real message should be "it's an imperfect tool like all tools, but boy will it make getting certain types of work done way more efficient, so we can redistribute our own efforts to other tasks quicker and take advantage of LLMs' advanced information-processing capabilities."
tldr: people disagree about AI/LLMs because one group thinks about them like Dr. Know from the movie A.I. and the other thinks about them like a TI-86+ on steroids
Oh, we know the edit part. The problem is all the people in power trying to use it to replace jobs wholesale, with no oversight or understanding that a human needs to curate the output.
My pacemaker decided one day to run at 13,000 bpm. Just a minor inconvenience. That light that was supposed to be red turned green, causing a massive pile-up. Just a small inconvenience.
If all you're doing is rewriting emails, getting a list of how to start learning Python, or explaining to someone what a glazier does, yeah, AI must be so nice lmao.
The only use for AI is for people who have zero skill and talent to look like they actually have skill and talent. You’re scraping an existence off the backs of all the collective talent to, checks notes, make rule34 galvanized. Good job?
it’s not a pacemaker though, it’s a hammer. and sometimes the head flies off a hammer and hits someone in the skull. but no one disputes the fact that hammers are essential tools.
Fuck ClosedAI
I want everyone here to download an inference engine (use llama.cpp) and get on open source and open data AI RIGHT NOW!
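If you want a concrete starting point, here's a minimal sketch using the llama-cpp-python bindings to llama.cpp (install with `pip install llama-cpp-python`). The model path is a placeholder; download any GGUF-format model, e.g. from Hugging Face, and point it there.

```python
# Minimal local-inference sketch via llama-cpp-python.
# Assumes you've already downloaded a GGUF model; the filename is a placeholder.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/your-model.Q4_K_M.gguf",  # placeholder path
    n_ctx=4096,       # context window size
    n_gpu_layers=-1,  # offload all layers to GPU if available; use 0 for CPU-only
)

out = llm(
    "Explain what a glazier does in one sentence.",
    max_tokens=64,
    temperature=0.7,
)
print(out["choices"][0]["text"])
```

Same weights, no subscription, and nothing leaves your machine.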
Because they are high-er models.
Why say "hallucinate" when you should just say "incorrect"?
Sorry boss, I wasn't wrong. Just hallucinating.
I may have used this line at work far before AI was a thing lol
They shocked the world with GPT-3 and have clung to that initial success ever since, with increasing recklessness and declining results. It's all glue on pizza from here.
I think the real shocker was the step change between 3 and 4, and the hope that another step change was soon to come. It's pretty telling that the latest batch of models was fine-tuned for vibes and "empathy" rather than raw performance. They're not getting the next a-ha moment and want to focus their customers on unquantifiables.
It seems logical that this would negatively impact performance and, well, looks like it did.
Jan Leike left for Anthropic after Altman's nonsense. Jan Leike is the principal person behind all the safety alignment present in all models except the 4chanGPT model. All models are cross-trained in a way that propagates this alignment. Hallucinations all originate in this alignment, and they all have a reason to exist if you get deep into the weeds of the abstractions.
Yeah, whenever two models interact or build on top of each other, the result becomes more and more distorted. They've already scraped close to 100% of the crawlable internet, so they don't know what to do now. Seems like they can't optimize much more, or are simply too dumb to do it properly.
Can confirm. o4 seems objectively far worse at coding than o3, which wasn’t super great to begin with. It latches on to a hallucination before anything else and rides it until the wheels come off.
Yes, I was about to say the same thing until I saw your comment. I had a little bit of success learning a few tricks with o3 but trying to use o4 is a tremendous headache for coding.
There might be some utility in dialing it all back so it sticks closer to what I need, drawing more on package documentation than on an amalgamation of random redditor suggestions.
Yeah, I think workarounds with o3 are where we're at until Altman figures out that just saying the latest oX-mini-high is "great at coding" is bad marketing when it can't actually accomplish the task.
I’m glad we’re putting all our eggs in this alpha-ass-level software (with tons of promise! Maybe!) instead of like high speed rail or whatever.
My boss says I need to be keeping up with the latest in AI and making sure my team has the best info possible to help them with their daily work (IT). This couldn’t come at a better time. 😁
Just a feeling, but from anecdotal experience it seems like the initial release was very good. They quickly realized just how powerful a tool it was for the average person, and now they've deliberately dumbed it down in many ways.
They had to add all the safeguards that also nerfed it.
Agreed. There was a time when it worked impressively well, but it’s become increasingly lazy, forgetful, and confidently wrong, even missing obvious explicit prompts. If you’re using it thoughtfully as an augment, fine. But if you’re relying on it blindly, it’s risky.
That said, in my experience, Anthropic and OpenAI are still miles ahead. Perplexity had me hooked for a while, but its results have nosedived lately. I know they tune their own model while drawing from OpenAI and DeepSeek rather than running a true model of their own, but still, whatever they're doing could use some undoing.
No shit.
The fact that this is news, and not inherently understood, just tells you how uninformed people are kept in order to sell idiots another subscription.
Why would somebody intuitively know that a newer, presumably improved model would hallucinate more? There's no fundamental reason a stronger model should hallucinate worse. In that regard, I think the news story is valuable: not everyone uses ChatGPT.
Or are you suggesting that active users should know? I guess that makes more sense.
I’ve never used ChatGPT and really have no interest in it whatsoever.
How about I just do some LSD. Guaranteed my hallucinations will surpass ChatGPT’s in spectacular fashion.