Intelligence is possible, as proven by its existence in the Biological world.
So it makes sense that, as Technology evolves, we become able to emulate the Biological world in that respect too, just as we have in so many other things, from flight to artificial hearts.
However, there is no guarantee that Mankind won't go extinct before that point is reached, nor is there any guarantee that our Technological progression won't come to an end (though at the moment we're near a peak in terms of the speed of Technological progression). So it is indeed true that we don't know that it's coming: we as a species might not be around long enough to make it happen, or we might hit a ceiling in our Technological development before our technology is capable of creating AGI.
Beyond the "maybe one day" view, personally I think that believing AGI is close is complete pie-in-the-sky fantasy. The supposed path to it, LLMs, turned out to be a dead end that was decorated with a lot of bullshit to make it seem otherwise: what the underlying technology does really well - pattern recognition and reproduction - has turned out not to be enough by itself to add up to intelligence, and we don't actually have any specific technological direction in the pipeline (that I know of) which can crack that problem.