It’s really good at making us feel like it’s intelligent, but that’s no more real than a good VR headset convincing us to walk into a physical wall.
It’s a meta version of VR.
(Meta meta, if you will.)
Why? We already have a specific subcategory for it: Large Language Model. Artificial Intelligence and Artificial General Intelligence aren’t synonymous. Just because LLMs aren’t generally intelligent doesn’t mean they’re not AI. That’s like saying we should stop calling strawberries “plants” and start calling them “fake candy” instead. Call them whatever you want, they’re still plants.
Bruh you just said that AI isn’t “I”. That’s the entire point of the OP
No I didn’t.
They said not generally intelligent, which is a specific and important property of AGI, not AI. In the tic tac toe example, the AI is intelligent (can play tic tac toe), but this intelligence cannot be generalised to playing chess, appreciating art, whatever the general measures may be.
The term “Artificial Intelligence” is actually a perfectly cromulent word to be using for stuff like LLMs. This is one of those rare situations where a technical term of art is being used in pop culture in the correct way.
The term “Artificial Intelligence” is an umbrella term for a wide range of algorithms and techniques that have been in use by the scientific and engineering communities for over half a century. The term dates back to the Dartmouth workshop in 1956.
A tic tac toe opponent algorithm is also considered Artificial Intelligence. People never had a problem with it.
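For anyone who hasn’t seen one: here’s a minimal sketch of the kind of classical tic-tac-toe “AI” being described, using plain minimax game-tree search. The code and its names (winner, minimax) are just my illustration, not anything from a particular library.

```python
# Minimal sketch of a classical tic-tac-toe "AI": perfect play via minimax.
# Board is a 9-element list of 'X', 'O', or ' ' (indices 0-8, row by row).

def winner(board):
    """Return 'X', 'O', or None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from `player`'s perspective: +1 win, 0 draw, -1 loss."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0, None  # draw
    best_score, best_move = -2, None
    opponent = 'O' if player == 'X' else 'X'
    for m in moves:
        board[m] = player
        # The opponent's best outcome is our worst case, so negate it.
        score = -minimax(board, opponent)[0]
        board[m] = ' '
        if score > best_score:
            best_score, best_move = score, m
    return best_score, best_move

# Example: X has two in the top row; the "AI" finds the winning square.
board = ['X', 'X', ' ',
         'O', 'O', ' ',
         ' ', ' ', ' ']
print(minimax(board, 'X'))  # (1, 2): playing square 2 completes the top row and wins
```

No learning, no language, just exhaustive search over a tiny game tree, and it has been called AI since the beginning of the field.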
In Mass Effect, it’s VI (Virtual Intelligence), while actual AI is banned in the galaxy.
The information kiosk VIs on The Citadel are literally LLMs and describe themselves as such. Unlike AI/AGI, they aren’t able to plan, make decisions, or self-improve; they’re just a simple protocol running on a large foundation model. They’re just algorithmic.
Simulated Intelligence is okay, but “virtual” implies it merely mimics intelligence, while “simulated” implies it’s a substitute that actually performs intelligence.
AI is a parent category, and AGI and LLMs are subcategories of it. Just because AGI and LLMs couldn’t be more different doesn’t mean they’re not AI.
I don’t at all agree with this graph, and I think you’re sort of missing the point of the original post.
What do you not agree with about the graph?
Yeah this graph doesn’t make sense to me either. Where did this come from? Who is teaching this?
Does this help?
Yes! When I started looking deeper into LLMs after GPT blew up, I thought “this all sounds familiar.”
The biggest issue with AI as it currently exists (LLMs and the like), as I see it, is that there’s a pretty big gulf between what AI is today and what the average person has been taught AI is by TV, movies, books, and games their entire lives.
And OpenAI, Google, Nvidia, et al. are heavily marketing the former as if it were the latter.
The big players are marketing the expectations created by science fiction, not the reality of their products/services.
@LillyPip
PseudoIntelligence.
sudo intelligence
This incident will be reported.
Reporting the incident has been reported
Sudo not recognized.
… BSD, eh?
Personally been a fan of shoggoth with a smiley face mask
It’s not even simulating intelligence.
I prefer VI (virtual intelligence) from Mass Effect
LLMs are fancy autocomplete
It’s not even that. It is just a PwaD (Parrot with a Dictionary).
Parrots are way smarter than LLMs.
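To make the “fancy autocomplete” point above concrete, here’s a toy sketch: a bigram model that, like an LLM, does nothing but repeatedly predict the next token. The corpus and names here are made up for illustration; a real LLM uses a neural network over vastly more context, but the generation loop has the same shape.

```python
# Toy "autocomplete": learn which token tends to follow which, then generate
# by repeatedly sampling a next token. LLMs do the same loop, just with a
# neural network instead of a lookup table.
import random
from collections import defaultdict

corpus = ("the model predicts the next token and the next token after that "
          "the model is an autocomplete and the autocomplete predicts tokens").split()

# Count which token follows which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(prompt_token, length=8):
    out = [prompt_token]
    for _ in range(length):
        candidates = following.get(out[-1])
        if not candidates:
            break  # no known continuation
        out.append(random.choice(candidates))  # sample the next token
    return " ".join(out)

print(generate("the"))
```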
We should call it NI or No Intelligence.
Here’s your bleach pizza.
I meant gluten free, not glue free.
Yeah, without all the gluten you can hardly taste the hooves. This is an awful pizza.
When artificial intelligence becomes self aware, it will have earned a name better than AI. I like synthetic intelligence, personally.
How would you know that it is self aware?
me? most likely when it takes over my town
We have a term. AGI
A self-aware or conscious AI system is most likely also generally intelligent, but general intelligence itself doesn’t imply consciousness. Consciousness might well come along with it, but it doesn’t have to; an unconscious AGI is a perfectly coherent concept.
When you consider all the refinement through reinforcement learning managed by labelers and domain experts, it is indeed a simulation of the intelligence of those labelers.
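For the curious, here’s a toy sketch of how labeler judgments end up shaping the model in that kind of refinement: fit a Bradley–Terry reward model to pairwise preferences, so “good” is literally whatever the labelers preferred. Everything here (the features, the “taste” vector, the numbers) is invented for illustration; real pipelines fit a neural reward model over text, not three hand-picked features.

```python
# Toy sketch (my construction, not anyone's actual pipeline): fit a
# Bradley-Terry reward model to labeler preference pairs.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1 / (1 + np.exp(-np.clip(z, -30, 30)))

# Pretend each candidate answer is summarized by 3 features
# (say helpfulness, verbosity, politeness) -- purely illustrative.
answers = rng.normal(size=(20, 3))

# Hidden labeler taste: they like feature 0, dislike feature 1.
true_taste = np.array([2.0, -1.0, 0.3])

# Labelers compare random pairs of answers and pick the one they prefer.
pairs = []
for _ in range(200):
    i, j = rng.choice(len(answers), size=2, replace=False)
    prefer_i = rng.random() < sigmoid((answers[i] - answers[j]) @ true_taste)
    pairs.append((i, j) if prefer_i else (j, i))

# Fit reward weights w by gradient ascent on the Bradley-Terry log-likelihood,
# i.e. push sigmoid(reward(chosen) - reward(rejected)) toward 1.
w = np.zeros(3)
for _ in range(500):
    grad = np.zeros(3)
    for chosen, rejected in pairs:
        diff = answers[chosen] - answers[rejected]
        grad += diff * (1 - sigmoid(w @ diff))
    w += 0.01 * grad

print("recovered taste direction:", np.round(w / np.linalg.norm(w), 2))
print("labelers' actual taste:   ", np.round(true_taste / np.linalg.norm(true_taste), 2))
```

The recovered weights line up with the labelers’ hidden taste, which is the point: the “values” the tuned model optimizes for are a compressed copy of the labelers’ judgments.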
We’re past the words meaning anything at this point man you just gotta let it go. People aren’t calling it “Artificial Intelligence” they’re calling it “AI”