I don’t hate AI or LLMs. As much as it might mess up civilization as we know it, I’d like to see the technological singularity during my lifetime, though I think the fixation on LLMs will do more to delay it than to realize it.
I just think there are a lot of people fooled by their conversational ability into thinking these models are more than what they are. And because the models are massive, with billions or trillions of weights that the data is encoded into, and because no one understands how they work well enough to definitively say “this is why it suggested glue as a pizza topping,” people use that opacity to put whether or not they approach AGI in a grey zone.
I’ll agree though that it was maybe too much to say they don’t have knowledge. “Having knowledge” is a pretty abstract and hard-to-define thing itself, though I’m also not sure it directly translates to having intelligence (which is also poorly defined, tbf). One could argue that encyclopedias have knowledge, but they don’t have intelligence. And I’d argue that LLMs are more akin to encyclopedias than to how we operate (though maybe more like a chatbot dictionary that pretends to be an encyclopedia).
Leaving aside the question of whether it would benefit us, what makes you think LLMs won’t bring about the technological singularity? Because, you know, the term LLM doesn’t mean that much… it just means a model that is “large” (currently taken to mean many parameters) and capable of processing language.
Don’t you think that whatever brings about the singularity will, at the very least, understand human language?
So can you clarify: what is it that you think won’t become AGI? Is it transformers? Is it any model trained the way we train LLMs today?