There are limits to technology. Why would we assume infinite growth of technology when nothing else we have is infinite? It’s not like the wheel is getting rounder over time. We made it out of better materials, but it still has limits to its utility. All our computers are computing 1s and 0s, and adding more of those per second does not seem to do anything to make them smarter.
I would worry about ecological collapse a lot more than this, that’s for sure. That’s something that current shitty non-smart AI can achieve if they keep building data centers and drinking our water.
I don’t see any reason to assume humans are anywhere near the far end of the intelligence spectrum. We already have narrow-intelligence systems that are superhuman in specific domains. I don’t think comparing intelligence to something like a wheel is fair - there are clear geometric limits to how round a wheel can be, but I’ve yet to hear any comparable explanation for why similar limits should exist for intelligence. It doesn’t need to be infinitely intelligent either - just significantly more so than we are.
Also, as I said earlier - unless some other catastrophe destroys us before we get there. That doesn’t conflict with what I said, nor does it give me any peace of mind. It’s simply my personal view that AGI or ASI is the number one existential risk we face.
Okay, granted. But if we are on the stupid side of the equation, why would we be able to make something smarter than us? One does not follow from the other.
I also disagree that we have made anything that is actually intelligent. A computer can do math billions of times faster than a human can, but doing math is not smarts. Without human intervention and human input, the computer would just idle and do nothing. That is not intelligence. At no point has code shown the ability to self-improve and grow, and the current brand of shitAI is no different. They call what they do to it training, but it’s really just telling it how to weigh the reams of data it’s eating, and without humans it would not do even that.
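To make concrete what I mean by “telling it how to weigh the data”, here is a toy sketch of a training loop - a hypothetical one-parameter example, not any real system’s code. Notice that the data, the goal, and the loop itself are all supplied by a human:

```python
# Toy sketch of what "training" mechanically is: repeatedly nudging a weight
# so the model's output better matches the data it is fed. Nothing here
# decides to learn on its own; the data, the goal, and the loop are all
# chosen by a human. (Hypothetical one-parameter example.)

data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # (input, target) pairs picked by a human

w = 0.0    # the single "weight" being adjusted
lr = 0.05  # learning rate, also picked by a human

for epoch in range(200):
    for x, target in data:
        pred = w * x                    # the model's guess
        grad = 2 * (pred - target) * x  # gradient of squared error
        w -= lr * grad                  # "training": nudge the weight toward the data

print(w)  # converges toward 2.0, the weighting the data dictates
```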
Ravens and octopuses can solve quite complex puzzles. Are they intelligent? What is even the cutoff for intelligence? We don’t even have a good definition of intelligence that encompasses everything. People cite IQ, which is obviously bunk. People try to section it into several types of intelligence: social, logical, and so on. If we don’t even know what the objective definition of intelligence is, I am not worried about us creating it from whole cloth.
Technology is knowledge, and we are millions of years away from reaching the end of possible knowledge.
Also, humans already exist, so we know it’s possible.