“what if the obviously make-believe genie wasn’t real”
capitalists are so fucking stupid, they’re just so deeply deeply fucking stupid
I mean sure, yeah, it’s not real now.
Does that mean it will never be real? No, absolutely not. It’s not theoretically impossible. It’s quite practically possible, and we inch that way slowly, bit by bit, every year.
It’s like saying self-driving cars were impossible in the '90s. They weren’t impossible. We just didn’t have a solution for them yet; there was nothing about them that made them impossible, only the technology of the time. And look at today: we have actual limited self-driving capabilities, and completely autonomous driverless vehicles in certain geographies.
It’s definitely going to happen. It’s just not happening right now.
AGI being possible (potentially even inevitable) doesn’t mean that AGI based on LLMs is possible, and it’s LLMs that investors have bet on. It’s been pretty obvious for a while that certain problems LLMs have aren’t getting better as models get larger, so there are no grounds to expect that simply scaling up models is the answer to AGI. It’s pretty reasonable to extrapolate from that to say LLM-based AGI is impossible, and that’s what the article is discussing.
Reality doesn’t matter as long as line goes up.