I’m not claiming that AGI will necessarily be practical or profitable by human standards - just that, given enough time and uninterrupted progress, it’s hard to see how it wouldn’t happen.
The core of my argument isn’t about funding or feasibility in the short term; it’s about inevitability in the long term. Once you accept that intelligence is a physical process and that we’re capable of improving the systems that simulate it, the only thing that can stop us from reaching AGI eventually is extinction or total collapse.
So, sure - maybe it’s not 10 years away. Maybe not 100. But if humanity keeps inventing, iterating, and surviving, I don’t see a natural stopping point before we get there.
I get it: the core of your argument is that, given enough time, it will happen - which isn’t saying much, since given infinite time anything will happen. Even extinction and total collapse aren’t enough; with infinite time a thinking computer will just emerge fully formed from quantum fluctuations.
But you’re voicing it as though it’s the certain direction of human technological progress, which is frankly untrue. You’ve concocted one scenario for technological progress by extrapolating from its current state, and you present it as a certainty. But anyone can do the same for equally credible scenarios without AGI. For instance, suppose the only way to avoid total collapse is to stabilize energy consumption and demographic growth, and we somehow manage it; then if making rocks think costs 10^20 W and the entire world’s labour, it will not ever happen in any meaningful sense of the word “ever”.
PS - to elaborate a bit on that “meaningful sense of the word ever” bit: I don’t want to nitpick, but some time scales do make asteroid impacts irrelevant. The Sun will engulf the Earth in about 5 billion years. Then there’s the heat death of the universe. In computing problems, millions of years pop up here and there for problems that feel like they should be easy.
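Just to put rough numbers on both points - the 10^20 W figure is my own hypothetical, and the machine and consumption figures in this little sketch are coarse, illustrative assumptions, not measurements:

    # Back-of-envelope sketch of how big the hypothetical numbers above really are.
    # All constants are rough, illustrative assumptions.

    SECONDS_PER_YEAR = 3.15e7

    # 1) The hypothetical "making rocks think costs 10^20 W" scenario, compared
    #    against the total sunlight intercepted by Earth and a rough figure for
    #    current world power consumption.
    hypothetical_agi_power = 1e20           # W, the made-up figure from above
    earth_solar_input = 1.7e17              # W, approx. solar constant x Earth's cross-section
    world_power_consumption = 2e13          # W, rough order of magnitude today

    print(hypothetical_agi_power / earth_solar_input)        # ~600x all sunlight hitting Earth
    print(hypothetical_agi_power / world_power_consumption)  # ~5,000,000x current consumption

    # 2) "Millions of years popping up": brute-forcing 2^n configurations on a
    #    hypothetical exascale machine doing 1e18 operations per second.
    ops_per_second = 1e18
    for n in (100, 128, 160):
        years = (2 ** n) / ops_per_second / SECONDS_PER_YEAR
        print(n, f"{years:.1e} years")      # n=100 is already ~4e4 years; n=128 is ~1e13 years

None of these numbers are predictions, obviously - the point is only how quickly “should be easy” turns into geological or cosmological time.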
In my view, we’re heavily incentivized to develop AGI because of the enormous potential benefits - economic, scientific, and military. That’s exactly what worries me. We’re sprinting toward it without having solved the serious safety and control problems that would come with it.
I can accept that the LLM approach might be a dead end, or that building AGI could be far harder than we think. But to me, that doesn’t change the core issue. AGI represents a genuine civilization-level existential risk. Even if the odds of it going badly are small, the stakes are too high for that to be comforting.
Given enough time, I think we’ll get there - whether that’s in 2 years or 200. The timescale isn’t the problem; inevitability is. And frankly, I don’t think we’ll ever be ready for it. Some doors just shouldn’t be opened, no matter how curious or capable we become.
Right. I don’t believe it’s inevitable; in fact I believe it’s not super likely given where we’re at and the economic, scientific and military incentives I’m aware of.
I think the people who are sprinting now do so blindly, not knowing where or how far it is. I think 2 years is a joke, or a lie Sam Altman tells gullible investors, and 200 years means we’ve survived global warming, so if we’re still here our incentives will look nothing like they do now, and I don’t believe in it then either. I think it’s at most a maybe on the far, far horizon of thousands-plus years, in a world that looks nothing like ours, and in the meantime we have far more pressing problems than the snake oil a few salesmen are trying desperately to sell. Like the salesmen themselves, for example.
Honestly I agree with gbzm here. ‘I can’t see why it shouldn’t be possible’ is a far cry from ‘it’s inevitable’… And I’d hardly say we’re sprinting towards it, either. There are, in my view, dozens of absurdly difficult problems, any one of which may be insoluble, that stand between us and AGI. Anyone telling you otherwise is selling something or already bought in ;)
People definitely are selling natural language interfaces as if they’re intelligent. It’s convincing, I guess, to some. It’s an illusion, though.
This discussion isn’t about LLMs per se.
However, I hope you’re right. Unfortunately, I’ve yet to meet anyone able to convince me that I’m wrong.
We can’t know that I’m wrong or you’re wrong I guess. I am aware of the context of the discussion and mention LLMs as a reason the hype has picked back up. The processing requirements for true intelligence appear, to me, to be far outside the confines of what silicon chips are even theoretically capable of. Seems odd to me you should ever have a full AGI before, say, cyborgs (y’know, semi-biological hybrids). We shall see how things develop over the next half a century or so, and perhaps more light shall be shed.
I’ve been worried about this since around 2016 - long before I’d ever heard of LLMs or Sam Altman. The way I see it, intelligence is just information processing done in a certain way. We already have narrowly intelligent AI systems performing tasks we used to consider uniquely human - playing chess, driving cars, generating natural-sounding language. What we don’t yet have is a system that can do all of those things.
And the thing is, the system I’m worried about wouldn’t even need to be vastly more intelligent than us. A “human-level” AGI would already be able to process information so much faster than we can that it would effectively be superintelligent. I think that at the very least, even if someone doubts the feasibility of developing such a system, they should still be able to see how dangerous it would be if we actually did stumble upon it - however unlikely that might seem. That’s what I’m worried about.
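To put a rough number on that speed point - these figures are coarse, illustrative assumptions (neurons signalling at a couple of hundred hertz, commodity silicon clocking in the gigahertz), and they assume the comparison is even meaningful:

    # Rough sketch of the "speed superintelligence" point above.
    # The figures are coarse, illustrative assumptions, not measurements.

    neuron_firing_rate_hz = 200      # assumed upper-end biological firing rate
    silicon_clock_hz = 2e9           # assumed modest modern processor clock

    speedup = silicon_clock_hz / neuron_firing_rate_hz   # ~1e7

    seconds_per_year = 3.15e7
    # If a human-level mind could run at that clock ratio, one subjective year of
    # thinking would take roughly this many wall-clock seconds:
    print(seconds_per_year / speedup)   # on the order of a few seconds

Even if that ratio is off by a few orders of magnitude, the qualitative point stands: human-level intelligence running vastly faster already behaves like something smarter than us.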
Yeah see I don’t agree with that base premise, that it’s as simple as information processing. I think sentience - and, therefore, intelligence - is a more holistic process that requires many more tightly-coupled external feedback loops and an embedding of the processes in a way that makes the processing analogous to the world as modelled. But who can say, eh?
It’s not obvious to me that sentience has to come along for the ride. It’s perfectly conceivable that there’s nothing it’s like to be a superintelligent AGI system. What I’ve been talking about this whole time is intelligence — not sentience, or what I’d call consciousness.