We’ll keep incrementally improving our technology, and unless we destroy ourselves first - or some outside force does - we’ll get there eventually.
We already know that general intelligence is possible, because humans are generally intelligent. There’s no reason to assume that what our brains do couldn’t be replicated artificially.
At some point, unless something stops us, we’ll create an artificially intelligent system that’s as intelligent as we are. From that moment on, we’re no longer needed to improve it further - it will make a better version of itself, which will make an even better version, and so on. Eventually, we’ll find ourselves in the presence of something vastly more intelligent than us - and the idea of “outsmarting” it becomes completely incoherent. That’s an insanely dangerous place for humanity to end up in.
We’re raising a tiger cub. It’s still small and cute today, but it’s only a matter of time until it gets big and strong.
There are limits to technology. Why would we assume infinite growth of technology when nothing else we have is infinite? It’s not like the wheel is getting rounder over time. We’ve made it out of better materials, but it still has limits to its utility. All our computers are computing 1s and 0s, and adding more of those per second does not seem to do anything to make them smarter.
I would worry about ecological collapse a lot more than this, that’s for sure. That’s something that current shitty non-smart AI can achieve if they keep building data centers and drinking our water.
I don’t see any reason to assume humans are anywhere near the far end of the intelligence spectrum. We already have narrow-intelligence systems that are superhuman in specific domains. I don’t think comparing intelligence to something like a wheel is fair - there are clear geometric limits to how round a wheel can be, but I’ve yet to hear any comparable explanation for why similar limits should exist for intelligence. It doesn’t need to be infinitely intelligent either - just significantly more so than we are.
Also, as I said earlier - unless some other catastrophe destroys us before we get there. That doesn’t conflict with what I said, nor does it give me any peace of mind. It’s simply my personal view that AGI or ASI is the number one existential risk we face.
Okay, granted. But if we are on the stupid side of the equation, why would we be able to make something smarter than us? One does not follow from the other.
I also disagree that we have made anything that is actually intelligent. A computer can do math billions of times faster than a human can, but doing math is not smarts. Without human intervention and human input, the computer would just idle and do nothing. That is not intelligence. At no point has code shown the ability to self-improve and grow, and the current brand of shit AI is no different. They call what they do to it training, but it’s really just telling it how to weigh the reams of data it’s eating, and without humans it would not do even that.
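To make concrete what I mean, here’s a toy sketch (purely my own illustration, not code from any actual AI system) of what that kind of “training” boils down to: adjusting a number until the outputs line up with the data.

```python
# Toy illustration (made-up example, not any real AI system): "training" here
# just means nudging numeric weights so the outputs fit the data better.
# Fit y = w * x to a few points with plain gradient descent on squared error.

data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # (input, target) pairs, invented
w = 0.0      # the single "weight" being adjusted
lr = 0.01    # step size for each adjustment

for step in range(1000):
    # average gradient of squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad  # the "training" step: move the weight to reduce the error

print(round(w, 2))  # settles near 2.0 - curve fitting, no understanding involved
```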
Ravens and octopi can solve quite complex puzzles. Are they intelligent? What even is the cutoff for intelligence? We don’t even have a good definition of intelligence that encompasses everything. People cite IQ, which is obviously bunk. People try to section it into several types of intelligence: social, logical, and so on. If we don’t even know what the objective definition of intelligence is, I am not worried about us creating it from whole cloth.
Do we know it’s coming? By what evidence? I don’t see it.
As far as I can tell, we are more likely to discover how to genetically uplift other life to intelligence than we are to make computers actually think.
I already wrote a response to this same question in a reply to another user.
Where? I don’t see it.
Technology is knowledge, and we are millions of years away from reaching the end of possible knowledge.
Also, humans already exist, so we know it’s possible.