The chess opponent on Atari is AI - we’ve had AI systems for decades.
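(For the curious: that kind of classic game AI is just exhaustive game-tree search. Here is a minimal minimax sketch on a toy Nim game; the game and numbers are illustrative, not how any actual Atari cartridge was written:)

    # Minimax on toy Nim: take 1-3 stones, taking the last stone wins.
    # This is the same family of techniques a 1970s/80s chess program
    # used; chess adds an evaluation function and alpha-beta pruning.
    def minimax(stones, maximizing):
        if stones == 0:
            # The previous player took the last stone and won.
            return -1 if maximizing else +1
        scores = [minimax(stones - take, not maximizing)
                  for take in (1, 2, 3) if take <= stones]
        return max(scores) if maximizing else min(scores)

    def best_move(stones):
        # Pick the move whose resulting position scores best for us.
        return max((take for take in (1, 2, 3) if take <= stones),
                   key=lambda take: minimax(stones - take, False))

    print(best_move(10))  # 2: leaves a multiple of 4, a losing position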
An asteroid impact being decades away doesn’t make it any less concerning. My worries about AGI aren’t about the timescale, but about its inevitability.
Decades is plenty of time for society to experience a collapse or major setback that prevents AGI from being discovered in the lifetime of any currently alive human. Whether that comes from war, famine, or natural phenomena induced by man-made climate change, we have plenty of opportunities as a species to take the offramp and never “discover” AGI. This comment is brought to you by optimistic existentialism.
LLMs aren’t intelligence. We’ve had similar technology in more primitive forms for a long time, like Markov chains. LLMs are hyper-specialized at passing the Turing test but are not good at basically anything else.
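(To make the Markov-chain comparison concrete, here is a toy sketch of that older “predict the next word from observed frequencies” approach; the corpus and order are arbitrary illustrations:)

    # Word-level Markov chain text generator: the primitive ancestor of
    # next-token prediction. It only knows which words followed which.
    import random
    from collections import defaultdict

    def build_chain(words, order=1):
        # Map each state (a tuple of `order` words) to observed successors.
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, length=20):
        state = random.choice(list(chain))
        out = list(state)
        for _ in range(length):
            followers = chain.get(state)
            if not followers:  # no observed continuation; stop
                break
            nxt = random.choice(followers)
            out.append(nxt)
            state = state[1:] + (nxt,)
        return " ".join(out)

    corpus = "the cat sat on the mat and the dog sat on the log".split()
    print(generate(build_chain(corpus)))

It emits locally plausible strings for the same reason early chatbots did, with no model of meaning anywhere.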
LLMs are a dead end on the road to AGI. They do not reason or understand in any way; they only mimic reasoning.
It is the same technology now as it was 20 years ago with the first chatbots; LLMs just have models approaching a trillion parameters instead of a few thousand.
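(For a sense of scale, the back-of-the-envelope arithmetic behind modern parameter counts; the config below is GPT-3-like but purely illustrative, not any specific model’s:)

    # Rough transformer parameter count from an illustrative config.
    layers, d_model, vocab = 96, 12288, 50000   # GPT-3-scale-ish numbers
    attention = 4 * d_model * d_model           # Q, K, V, output projections
    mlp = 2 * d_model * (4 * d_model)           # up- and down-projections
    embeddings = vocab * d_model
    total = layers * (attention + mlp) + embeddings
    print(f"{total:,}")                         # ~175 billion

Against a few-thousand-entry chatbot table, that is the scale gap being described.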
I haven’t said a word about LLMs.
They are the closest thing to AI that we have. The so-called LRMs (large reasoning models) fake their reasoning.
They do not think or reason. We are at the very best decades away from anything resembling an AI.
The best LLMs can manage is a VI from Mass Effect (the games’ “virtual intelligence”: convincing conversation with no actual mind behind it), and even that is still more than a decade away.
No, the first chatbots didn’t have neural networks inside. They didn’t have intelligence.
The Turing test has nothing to do with intelligence.
What is your point?
You’re defining intelligence wrong.
I didn’t say the Turing test had anything to do with intelligence. I didn’t define intelligence at all. What are you even talking about?