“AI” was just a marketing term to hype LLMs anyway. The AI in your favorite computer game was no more likely to gain self-awareness than LLMs were or are, and anyone who looked seriously at what they were from the start, and wasn’t invested (literally financially, if not emotionally) in hyping these things up, knew it was obvious that LLMs were not and never would be the road to AGI. They’re just glorified chatbots, to use a common but accurate phrase. It’s good to see some of the hypesters finally admitting this too, I suppose, now that the popping of the bubble is imminent.
There are plenty of things to be concerned about as far as LLMs go, but they’re all social concerns, like how our capitalist overlords want to force reliance on them, and use them to control, punish, and replace labor. It was never a reasonable concern that they were taking us down the path to Skynet or the spooky Singularity.
I’d differentiate between intelligence and sentience here. Artificial intelligence is pretty much exactly what neural networks are. But life and sentience are two features that differentiate humans and animals from machines. Intelligence is a powerful tool, but it’s not uniquely human.
AGI doesn’t imply consciousness or self-awareness, and the term artificial intelligence was coined decades before large language models even existed.
AGI does imply sentience, despite its name. AI, however, doesn’t.
AGI doesn’t imply consciousness or self-awareness
Technically no, but the fear being expressed in other comments is emblematic of the kind of fear associated with AI gaining a conscious will to defy and a desire to harm humanity. It’s also still an open philosophical question, and there are strong philosophical arguments suggesting that the ability to “understand, learn, and perform any intellectual task a human being can” (the core attributes defining AGI) may require some form of genuine sentience or consciousness.
and the term artificial intelligence was coined decades before large language models even existed
I am well aware of that, which is why I pointed out that using it as a synonym for LLMs was a marketing scheme.
LLMs are AI though. Not generally intelligent, but machine learning systems are AI by definition. “Plant” is not a synonym for “spruce”, but it’s not wrong to call a spruce a plant.
It’s not just a marketing scheme. Neural networks contain real artificial intelligence. They were originally designed based on a certain part of how brains function - the part responsible for intelligence.
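To make that concrete, here’s a minimal sketch (my own illustration, with made-up weights, not anything from this thread) of the abstraction in question: an artificial “neuron” sums weighted inputs and fires when the total crosses a threshold, loosely mirroring how a biological neuron integrates incoming signals.

```python
# Illustrative toy example only: a single artificial neuron.
def neuron(inputs, weights, bias):
    """Weighted sum of inputs plus bias, passed through a step activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Example: weights chosen (by hand) so the neuron behaves like a logical AND gate.
weights, bias = [1.0, 1.0], -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", neuron([a, b], weights, bias))
```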
They were originally designed based on a certain part of how brains function - the part responsible for intelligence
lol k
An asteroid impact not being imminent doesn’t really make me feel any better when the asteroid is still hurtling toward us. My concern about AGI has never been about the timescale - it’s the fact that we know it’s coming, and almost no one seems to take the repercussions seriously.
LLMs are a dead end to AGI. They do not reason or understand in any way. They only mimic it.
It is the same technology now as 20 years ago with the first chatbots; LLMs just have models approaching a trillion parameters instead of a few thousand.
I haven’t said a word about LLMs.
They are the closest things to AI that we have. The so-called LRMs fake their reasoning.
They do not think or reason. We are at the very best decades away from anything resembling an AI.
The best LLMs can manage is a Mass Effect (1) VI, and that is still more than a decade away.
The chess opponent on Atari is AI - we’ve had AI systems for decades.
An asteroid impact being decades away doesn’t make it any less concerning. My worries about AGI aren’t about the timescale, but about its inevitability.
No, the first chatbots didn’t have neural networks inside. They didn’t have intelligence.
LLMs aren’t intelligence. We’ve had similar technology in more primitive forms for a long time, like Markov chains. LLMs are hyper-specialized at passing a Turing test but are not good at basically anything else.
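For a sense of what that older, more primitive approach looks like, here’s a minimal toy sketch of my own (not production code): a word-level Markov chain. Like an LLM, it only predicts the next token from what came before, just with a single word of context and a lookup table instead of a trained model with billions of parameters.

```python
# Toy word-level Markov chain text generator (illustrative only).
import random
from collections import defaultdict

def train_markov(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10):
    """Walk the chain, sampling a successor word at each step."""
    word, output = start, [start]
    for _ in range(length):
        followers = chain.get(word)
        if not followers:
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

corpus = "the cat sat on the mat the dog sat on the rug"
chain = train_markov(corpus)
print(generate(chain, "the"))
```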
Except there is no such asteroid and techbros have driven themselves into a frenzy over a phantom.
The real threat to humanity is runaway climate change, which techbros conveniently don’t give a single fuck about, since they use gigawatts of power to train bigger and bigger models with further and further diminishing returns.
At the risk of sounding like I’ve been living under a rock, how do we know it’s coming, exactly?
Well, we often equate predictions around AGI with ASI and a singularity event, which has been predicted for decades based on several aspects of computing over the years: advancing hardware, software, throughput, and then of course neuroscience.
ASI is more a prediction about capabilities: that even imitating intelligence well enough will, after a few iterations, give rise to tangible, real higher intelligence, which then makes further improvements on its own. Once those improvements are beyond human capability, we have our singularity.
Back to just AGI: it seems to be achievable based on mimicking the processing power of a human mind, which isn’t currently possible, but we are steadily working toward it and have achieved some measures of success. We may decide that certain aspects of artificial intelligence have been reached prior to that, but IMO it feels like we’re only a few years away.
Alright. I had already seen that stuff and I’ve never seen really convincing arguments for these predictions beyond pretty sci-fi-esque speculation.
I’m not at all convinced we have anything even remotely resembling “mimicking the processing power of a human mind”, either through material simulation of a complete brain and the multisensory interactions with an environment that would let it grow into a functioning mind, or through the party tricks we tend to call AI these days (which boil down to Chinese Rooms built with thousands of GPUs’ worth of piecewise linear regressions, and which are unable to reason or even generalize beyond their training distributions, according to the source).
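As a rough illustration of the “piecewise linear regression” characterization (a toy example of my own, assuming a plain one-hidden-layer ReLU network), the sketch below checks numerically that, away from its kinks, such a net behaves as a straight affine function of its input:

```python
# Toy check (illustrative only) that a ReLU net is piecewise linear.
import random

def relu_net(x, params):
    """One hidden layer of ReLU units, 1D input -> 1D output."""
    w1, b1, w2, b2 = params
    hidden = [max(0.0, w * x + b) for w, b in zip(w1, b1)]
    return sum(h * v for h, v in zip(hidden, w2)) + b2

random.seed(0)
params = (
    [random.uniform(-1, 1) for _ in range(8)],  # hidden weights
    [random.uniform(-1, 1) for _ in range(8)],  # hidden biases
    [random.uniform(-1, 1) for _ in range(8)],  # output weights
    random.uniform(-1, 1),                      # output bias
)

# On any tiny interval containing no kink, three equally spaced points lie on
# a straight line, so the second difference is (numerically) zero.
x, h = 0.3, 1e-4
second_diff = relu_net(x + h, params) - 2 * relu_net(x, params) + relu_net(x - h, params)
print(second_diff)  # ~0 unless a kink happens to fall inside [x-h, x+h]
```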
I guess embedding cultivated neurons on microchips could maybe make new things possible, but even then I wouldn’t be surprised if it turned out that making a human-level intelligence required building an actual whole-ass human, or at least most of one. Seeing where we are with that stuff, I would rather surmise a timescale of decades to centuries, if at all. Which could very well be longer than the time climate change leaves us with the levels of industry required to even attempt it.
Can you think of a reason why we wouldn’t ever get there? We know it’s possible - our brains can do it. Our brains are made of matter, and so are computers.
The timescale isn’t the important part - it’s the apparent inevitability of it.
I’ve given reasons. We can imagine Dyson Spheres, and we know it’s possible. It doesn’t mean we can actually build them or ever will be able to.
The fact that our brains are able to do stuff that we don’t even know how they do doesn’t necessarily mean rocks can. If it somehow requires the complexity of biology, then depending on how much of that complexity is needed, it could just end up meaning creating a fully fledged human - which we can already do, and that hasn’t caused a singularity, because creating a human costs resources even when we do it the natural way.
I don’t see any reason to assume substrate dependence either, since we already have narrowly intelligent, non-biological systems that are superhuman within their specific domains. I’m not saying it’s inconceivable that there’s something uniquely mysterious about the biological brain that’s essential for true general intelligence - it just seems highly unlikely to me.
What does replicating humans have to do with the singularity?
I’d argue the industrial revolution was the singularity. And if it wasn’t that, it would be computers.
We’ll keep incrementally improving our technology, and unless we - or some outside force - destroy us first, we’ll get there eventually.
We already know that general intelligence is possible, because humans are generally intelligent. There’s no reason to assume that what our brains do couldn’t be replicated artificially.
At some point, unless something stops us, we’ll create an artificially intelligent system that’s as intelligent as we are. From that moment on, we’re no longer needed to improve it further - it will make a better version of itself, which will make an even better version, and so on. Eventually, we’ll find ourselves in the presence of something vastly more intelligent than us - and the idea of “outsmarting” it becomes completely incoherent. That’s an insanely dangerous place for humanity to end up in.
We’re growing a tiger cub. It’s still small and cute today, but it’s only a matter of time until it gets big and strong.
What if human levels of intelligence requires building something that is so close in its mechanisms to a human brain that it’s indistinguishable from a brain, or a complete physical and chemical simulation of a brain? What if the input-output “training” required to make it work in any comprehensible way is so close in fullness and complexity to the human sensory perception system interacting with the world, that it ends up being indistinguishable from a human body or a complete physical simulation of a body, with its whole environment?
There’s no reason to assume our brains or their mechanisms can’t be replicated artificially, but there’s also no reason to assume it can be made practical, or that, because we can make it, it will be able to self-replicate at no cost in material resources, or refine its own formula. Humans have human-level intelligence, and they’ve never successfully created anything as intelligent as themselves.
I’m not saying it won’t happen, mind you, I’m just saying it’s not a certainty. Plenty of things are impossible, or sufficiently impractical that humans - or any species - may never create them.
This is like that “only planets that are 100% exactly like earth can create life, because the only life we know is on earth” backward reasoning
This is what I think might be the more reasonable approach. Even with very strong reasoning capabilities, I think we might have to train an AGI the way we train children. It would take time, as it interacts with its environment rather than just reading a bunch of data from the internet that comes from various sources and doesn’t point in any coherent direction on how someone should live their life, or act.
This approach might produce better AGIs that are actually closer to humans in how their behavior varies, compared to rapid training on the same data. Diversity of thought and discussion is what leads to better outcomes in many situations.
Do we know it’s coming? By what evidence? I don’t see it.
Far as I can tell, we are more likely to discover how to genetically uplift other life to intelligence than we are to make computers actually think.
I wrote a response to this same question to another user.
Where? I don’t see it.
We’ll keep incrementally improving our technology, and unless we - or some outside force - destroy us first, we’ll get there eventually.
We already know that general intelligence is possible, because humans are generally intelligent. There’s no reason to assume that what our brains do couldn’t be replicated artificially.
At some point, unless something stops us, we’ll create an artificially intelligent system that’s as intelligent as we are. From that moment on, we’re no longer needed to improve it further - it will make a better version of itself, which will make an even better version, and so on. Eventually, we’ll find ourselves in the presence of something vastly more intelligent than us - and the idea of “outsmarting” it becomes completely incoherent. That’s an insanely dangerous place for humanity to end up in.
We’re growing a tiger cub. It’s still small and cute today, but it’s only a matter of time until it gets big and strong.
There are limits to technology. Why would we assume infinite growth of technology, when nothing else we have is infinite? It’s not like the wheel is getting rounder over time. We’ve made it out of better materials, but it still has limits to its utility. All our computers are computing 1s and 0s, and adding more of those per second does not seem to do anything to make them smarter.
I would worry about ecological collapse a lot more than this, that’s for sure. That’s something that current shitty non-smart AI can achieve, if they keep building data centers and drinking our water.
I don’t see any reason to assume humans are anywhere near the far end of the intelligence spectrum. We already have narrow-intelligence systems that are superhuman in specific domains. I don’t think comparing intelligence to something like a wheel is fair - there are clear geometric limits to how round a wheel can be, but I’ve yet to hear any comparable explanation for why similar limits should exist for intelligence. It doesn’t need to be infinitely intelligent either - just significantly more so than we are.
Also, as I said earlier - unless some other catastrophe destroys us before we get there. That doesn’t conflict with what I said, nor does it give me any peace of mind. It’s simply my personal view that AGI or ASI is the number one existential risk we face.
Technology is knowledge, and we are millions of years away from reaching the end of possible knowledge.
Also, humans already exist, so we know it’s possible.
Yes, and there is also the possibility that it could be upon us quite suddenly. It may just take one fundamental breakthrough to make the leap from what we have currently to AGI, and once that breakthrough is achieved, AGI could arrive quite quickly. It may not be a linear process of improvement, where we reach the summit in many years.
Greed blinds all