I don’t see any reason to assume substrate dependence either, since we already have narrowly intelligent, non-biological systems that are superhuman within their specific domains. I’m not saying it’s inconceivable that there’s something uniquely mysterious about the biological brain that’s essential for true general intelligence - it just seems highly unlikely to me.
The thing is, I’m not assuming substrate dependence. I’m not saying there’s something uniquely mysterious about the biological brain. I’m saying that what we know about “intelligence” right now is that it’s an emergent property observed in brains that have been in interaction with a physical, natural environment through complex sensory feedback loops, materialized by the rest of the human body. This is substrate independent, but the only thing rocks can do for sure is simulate this whole system, and good simulations of complicated systems are not an easy feat at all. It’s not at all certain that we’ll ever be able to do it without it requiring too many resources to be worth the hassle.
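Just to make that “too many resources” point concrete, here’s a very rough back-of-envelope sketch. Every number in it is a ballpark assumption of mine (commonly cited neuron and synapse counts, a guessed firing rate and per-event cost), not something established in this thread:

```python
# Very rough back-of-envelope: what a naive synapse-level simulation of a
# brain might cost. Every number here is a ballpark assumption, not a
# measurement.

NEURONS = 8.6e10            # commonly cited estimate for the human brain
SYNAPSES_PER_NEURON = 1e4   # order-of-magnitude assumption
AVG_FIRING_RATE_HZ = 1.0    # assumed average rate; estimates vary widely
FLOPS_PER_EVENT = 100       # assumed cost of one synaptic event in a simple model

synapses = NEURONS * SYNAPSES_PER_NEURON              # roughly 1e15 synapses
events_per_second = synapses * AVG_FIRING_RATE_HZ     # roughly 1e15 events/s
flops_needed = events_per_second * FLOPS_PER_EVENT    # roughly 1e17 FLOP/s

print(f"~{flops_needed:.0e} FLOP/s just to tick the synapses over")
```

That lands in the neighbourhood of today’s largest supercomputers, and it’s for a drastically simplified model that doesn’t yet include the body or the sensory environment I was talking about.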
The things we’ve done that most closely resemble human intelligence in computers are drastic oversimplifications of how biological brains work, sprinkled with mathematical translations of actual cognitive processes. And right now they appear very limited, even though a lot of resources - physical and economic - have been injected into them. We don’t understand how brains work well enough to refine this simplification much, and we don’t know much about the formation of the cognitive processes relevant to “intelligence” either. Yet you assert it’s a certainty that we will, that we will encode it in computers, and that the result will have a bunch of the properties of current software: easily copyable and editable (which the human-like intelligences we know are not at all), not requiring more power than the Sun outputs (which humans don’t, but they’re completely different physical systems), etc.
The same arguments you’re making could have been made in 1969, after the moon landing, to say that the human race will definitely colonize the whole solar system. “We know it’s possible, so it will happen at some point” is not how technology works: it also needs to be profitable enough for enough industry to be poured into the problem, and the result has to live up to profitability expectations. Right now no AI firm is even remotely profitable, and the resources in the Kuiper belt or the real estate on Mars haven’t been enough of an argument even though our rockets can reach them; there’s no telling they ever will be. Our economies might well simply lose interest before then.
I’m not claiming that AGI will necessarily be practical or profitable by human standards - just that, given enough time and uninterrupted progress, it’s hard to see how it wouldn’t happen.
The core of my argument isn’t about funding or feasibility in the short term; it’s about inevitability in the long term. Once you accept that intelligence is a physical process and that we’re capable of improving the systems that simulate it, the only thing that can stop us from reaching AGI eventually is extinction or total collapse.
So, sure - maybe it’s not 10 years away. Maybe not 100. But if humanity keeps inventing, iterating, and surviving, I don’t see a natural stopping point before we get there.
I get it, the core of your argument is “given enough time it will happen”, which isn’t saying much: given infinite time, anything will happen. Even extinction and total collapse aren’t enough to rule it out; with infinite time, a thinking computer will just emerge fully formed from quantum fluctuations.
But you’re voicing it as though it’s a certain direction of human technological progress, which is frankly untrue. You’ve concocted a scenario for technological progress by extrapolating from its current state, and you present it as a certainty. But anyone can do the same for equally credible scenarios without AGI. For instance, if the only way to avoid total collapse is to stabilize energy consumption and demographic growth, and we somehow manage it, and making rocks think turns out to cost 10^20 W and the entire world’s labour, then it will never happen in any meaningful sense of the word “ever”.
PS - to elaborate a bit on that “meaningful sense of the word ever” point, I don’t want to nitpick, but some time scales do make asteroid impacts irrelevant. The Sun will engulf the Earth in about 5 billion years. Then there’s the heat death of the universe. And in computing, millions of years pop up here and there for problems that feel like they should be easy.
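A toy example of what I mean by that last sentence (my own illustration, and the billion-routes-per-second machine is an arbitrary assumption): brute-forcing the travelling salesman problem blows past “millions of years” with only a couple dozen cities.

```python
import math

# Toy example: brute-force travelling salesman by checking every possible
# route. Assumes a machine that can evaluate one billion full routes per
# second, which is generous.
ROUTES_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3.15e7

for cities in (15, 20, 25):
    routes = math.factorial(cities - 1)   # fix the starting city
    years = routes / ROUTES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{cities} cities: about {years:.3g} years of brute force")

# 15 cities finishes in about a minute and a half; 20 cities already takes
# a few years; 25 cities takes tens of millions of years; and 30 cities
# dwarfs the age of the universe.
```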
In my view, we’re heavily incentivized to develop AGI because of the enormous potential benefits - economic, scientific, and military. That’s exactly what worries me. We’re sprinting toward it without having solved the serious safety and control problems that would come with it.
I can accept that the LLM approach might be a dead end, or that building AGI could be far harder than we think. But to me, that doesn’t change the core issue. AGI represents a genuine civilization-level existential risk. Even if the odds of it going badly are small, the stakes are too high for that to be comforting.
Given enough time, I think we’ll get there - whether that’s in 2 years or 200. The timescale isn’t the problem; inevitability is. And frankly, I don’t think we’ll ever be ready for it. Some doors just shouldn’t be opened, no matter how curious or capable we become.
Right. I don’t believe it’s inevitable; in fact I believe it’s not super likely given where we’re at and the economic, scientific and military incentives I’m aware of.

I think the people who are sprinting now do so blindly, not knowing where the finish line is or how far away. I think 2 years is a joke, or a lie Sam Altman tells gullible investors, and 200 years means we’ve survived global warming, so if we’re still here by then our incentives will look nothing like they do now, and I don’t believe in it then either. I think it’s at most a maybe on the far, far horizon, thousands of years out, in a world that looks nothing like ours, and in the meantime we have way more pressing problems than the snake oil a few salesmen are desperately trying to sell. Like the salesmen themselves, for example.
Honestly I agree with gbzm here. ‘I can’t see why it shouldn’t be possible’ is a far cry from ‘it’s inevitable’… And I’d hardly say we’re sprinting towards it, either. There are, in my view, dozens of absurdly difficult problems, any one of which may be insoluble, standing between us and AGI. Anyone telling you otherwise is selling something or already bought in ;)
People are definitely selling natural language interfaces as if they’re intelligent. It’s convincing, I guess, to some. It’s an illusion, though.
This discussion isn’t about LLMs per se.
However, I hope you’re right. Unfortunately, I’ve yet to meet anyone able to convince me that I’m wrong.
We can’t know for sure that I’m wrong or that you’re wrong, I guess. I am aware of the context of the discussion; I mention LLMs because they’re the reason the hype has picked back up. The processing requirements for true intelligence appear, to me, to be far outside the confines of what silicon chips are even theoretically capable of. It seems odd to me that we’d ever get full AGI before, say, cyborgs (y’know, semi-biological hybrids). We shall see how things develop over the next half a century or so, and perhaps more light shall be shed.
I’ve been worried about this since around 2016 - long before I’d ever heard of LLMs or Sam Altman. The way I see it, intelligence is just information processing done in a certain way. We already have narrowly intelligent AI systems performing tasks we used to consider uniquely human - playing chess, driving cars, generating natural-sounding language. What we don’t yet have is a system that can do all of those things.
And the thing is, the system I’m worried about wouldn’t even need to be vastly more intelligent than us. A “human-level” AGI would already be able to process information so much faster than we can that it would effectively be superintelligent. I think that at the very least, even if someone doubts the feasibility of developing such a system, they should still be able to see how dangerous it would be if we actually did stumble upon it - however unlikely that might seem. That’s what I’m worried about.
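To put a number on “so much faster” (my own illustrative arithmetic, not a claim from the thread; the million-fold speed-up is a round-number assumption based on the gap between neuron firing rates and chip clock speeds):

```python
# Illustrative arithmetic only: how much subjective thinking time would a
# "human-level" mind get if its basic operations ran much faster than ours?
#
# Assumption (rough): neurons signal at up to a few hundred Hz, chips switch
# in the GHz range, so take a 1,000,000x speed-up as a round number.

SPEEDUP = 1_000_000
HOURS_PER_YEAR = 24 * 365

subjective_years_per_real_hour = SPEEDUP / HOURS_PER_YEAR
print(f"About {subjective_years_per_real_hour:.0f} subjective years of "
      f"thinking per wall-clock hour at a {SPEEDUP:,}x speed-up")
# Roughly a century of thinking every hour, which is the sense in which even
# a merely human-level system would behave like a superintelligence.
```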