Claude is AMAZING. The future is now! It's actually very hard to trip it up with complex trick questions like "Spell 'blueberry'" nowadays. Check out what it said when I asked it just now what tomorrow's date is (I even gave this doctorate degree-level intellect a clue that it's in 2026).
I asked it to show its reasoning so I could help it to understand my complex query, and got this very insightful reasoning process.
I pressed 'Stop' because I'm pretty sure the poor guy was in a logic loop, and I didn't want to heat up the oceans any further with my diabolically complex line of questioning.
I mean… Yeah. Anyone who knows even the first thing about how an LLM works is going to tell you it's not qualified to answer that. That niche functionality would need to be tacked onto the LLM as, say, a deterministic algorithm it could call and report the results from.
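For what it's worth, that's exactly what "tool calling" setups do in practice: the model emits a structured request, and the host runs a plain deterministic function and hands the result back. A minimal sketch (the function and tool names here are illustrative, not any vendor's actual API):

```python
from datetime import date, timedelta

# Deterministic "tool": computes tomorrow's date instead of guessing it.
def get_tomorrows_date() -> str:
    return (date.today() + timedelta(days=1)).isoformat()

# Toy dispatcher: in a real system the LLM emits a structured tool-call
# request, and the host looks up and runs the matching function.
TOOLS = {"get_tomorrows_date": get_tomorrows_date}

def handle_tool_call(name: str) -> str:
    if name not in TOOLS:
        raise KeyError(f"unknown tool: {name}")
    return TOOLS[name]()

print(handle_tool_call("get_tomorrows_date"))
```

The model's only job in that loop is to decide *when* to call the tool; the date arithmetic itself never touches the neural net.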
In a world of plenty of valid arguments against widespread generative AI, you chose one that at best says “people need to be more educated about generative AI before using it” and at worst says “I need to be more educated about generative AI before campaigning against it”.
This is the ‘next level natural language technology’?
Always some AI white knights in the comments, bravely telling people ‘you’re using AI wrong. you’re ignorant. you’re uneducated about generative AI’, as though this isn’t literally the first thing that the market-leader OpenAI suggests you use it for presently.
More like telling that person they lack such a bare minimum understanding of how an LLM works that it’s comical. This is as fucking stupid as somebody complaining that their band saw can’t trim their fingernails.
literally the first thing that the market-leader OpenAI suggests you use it for presently
"Quiz me on vocabulary"? Oh, yeah, you know, I remember all those vocabulary quizzes I had in school that asked: "How many times does the letter 't' appear in 'platitudinous'?" Oh, wait, no, it's referring to things like meaning, usage, example sentences, etc. – actual vocabulary questions.
I don’t use LLMs since I don’t find myself ever needing them, and you’ll find I don’t pull punches with them either, but since you’re whining that the placeholder text in the input box is misleading, I used it for a vocabulary question I would otherwise use Wiktionary for:
Looks good to me, boss. Either you don’t understand what quizzing someone on vocabulary means or you assume the person is in kindergarten and needs to learn how to count the number of letters in a word.
Correct spelling is the fundamental component of words; without words there is no vocabulary. Without understanding words, LLMs have absolutely no understanding of vocabulary. They can certainly spew out things they've tokenized and weighted from ingested inputs, though - like when people trick one into believing false definitions by simply repeating them as correct, thereby manipulating (or poisoning) the weighting. ChatGPT and other LLMs regularly fail to interpret common parts of vocabulary - e.g. idioms, word spellings, action-reaction consequences in a sentence. They're fancy autocomplete, filled with stolen (and occasionally licensed) data.
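The tokenization point is easy to demonstrate: the model never sees letters, only token IDs. Here's a toy sketch with a hand-made vocabulary (real tokenizers learn their vocabularies from data, e.g. via BPE, but the model-facing effect is the same):

```python
# Toy tokenizer with a hand-made vocabulary; real BPE vocabularies are
# learned from data, but the model sees token IDs either way.
VOCAB = {"blue": 101, "berry": 102, "ice": 103, " cream": 104}

def tokenize(text: str) -> list[int]:
    # Greedy longest-match segmentation -- enough for this demo.
    ids, i = [], 0
    while i < len(text):
        for piece, tid in sorted(VOCAB.items(), key=lambda kv: -len(kv[0])):
            if text.startswith(piece, i):
                ids.append(tid)
                i += len(piece)
                break
        else:
            raise ValueError(f"no token for {text[i:]!r}")
    return ids

print(tokenize("blueberry"))  # [101, 102]
print(tokenize("ice cream"))  # [103, 104]
```

From the model's point of view, "blueberry" is the sequence [101, 102]. Nothing in that input directly encodes how many b's (or anything else) the word contains, which is why letter-counting questions are a famously bad fit.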
Sure seems like the problem isn't that I or the other guy 'don't know how to use LLMs', but rather that they keep getting sold as something they're not.
Congrats though, you just used a 100 billion dollar machine array to more or less output the exact content of a Wikipedia article - you really proved your point that it’s very good when you know what to ask it, and us plebs are just dumb at questions, or something 👍
https://en.wikipedia.org/wiki/Platitude
Meanwhile AI tells me there is only one letter “c” in “Ice cream”
you can't expect ai to know answers to such deep questions D:
claude says
There are 2 c's in "ice cream" — one in "ice" and one in "cream."
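Counting letters in a string is, of course, a trivial job for deterministic code — no tokenization, no guessing:

```python
# Exact, deterministic letter counting -- no model involved.
print("ice cream".lower().count("c"))  # 2
```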
Bungo 3.1 says I should use a clothes iron to get the wrinkles out of my testicles.
Well, it's not wrong that that will work … the use of the word 'should' is debatable …