- cross-posted to:
- [email protected]
“let me spread some fear and interest in my product so I can keep getting people to invest in my company or else we’re out of cash”
Criti-hype
When your innovation is mass manipulation, I don’t think it should be allowed. Maybe that’s just me.
Meanwhile in the EU, AI companies are screeching about regulations that restrict AI and its risks, because regulation hinders innovation. Pick a lane, dude.
“Humanity needs to see how awesome and powerful and simultaneously extremely dangerous AI is” says guy whose business is based on building AI models and developing “safety” tools for them.
uh huh, sure buddy
Humanity is about to be handed almost unimaginable power,
Btw, the minimum safe distance to stand from a nuclear detonation is six feet. Also, it’s safe to microwave aluminum, and to eat razor blades in your pizza.
Hrm, somehow what this guy is saying does not match my experiences with AI… oh well, I’m sure it is I who must be wrong. After all, he says:
AI is doing 90% of the computer programming to build Anthropic’s products, including its own AI
so how could that possibly go wrong?
AI is doing 90% of the computer programming to build Anthropic’s products, including its own AI
This explains a lot tbh.
For a second, I was optimistic and thought they were going to highlight the economic, environmental, and misinformation concerns with the ensloppification engine. Then, I read the tagline…
“It is somewhat awkward to say this as the CEO of an AI company, but I think the next tier of risk is actually AI companies themselves,” Amodei warns after the passage about authoritarian governments. “AI companies control large datacenters, train frontier models, have the greatest expertise on how to use those models, and in some cases have daily contact with and the possibility of influence over tens or hundreds of millions of users. … [T]hey could, for example, use their AI products to brainwash their massive consumer user base, and the public should be alert to the risk this represents. I think the governance of AI companies deserves a lot of scrutiny.”
Humanity needs to wake up to dangers of AI 👍
Says Anthropic chief 🥱
Lol seriously the play by play. I was so disappointed yet unsurprised.
Meanwhile AI tells me there is only one letter “c” in “Ice cream”
Claude is AMAZING. The future is now! It’s actually very hard to trip it up with complex trick questions like “Spell ‘blueberry’” nowadays. Check out what it said when I asked it just now what tomorrow’s date is (I even gave this doctorate degree-level intellect a clue that it’s in 2026).

I asked it to show its reasoning so I could help it to understand my complex query, and got this very insightful reasoning process.

Reasoning…
We are given: “What date of 2026 is it tomorrow?” Note: The current year is 2023, so the user is asking about a future date relative to an unknown “today” in 2026. However, without knowing the current date (in 2026) we cannot compute “tomorrow”. Thus, we must clarify that the answer depends on the current date. But note: The user might be asking this question on a specific day, so we can assume that the “today” in the question is the day when the question is being asked? However, the question is about 2026, which is in the future. Since we don’t have a specific reference date in 2026, we cannot give a specific answer. Therefore, the thought process should lead to a response that explains the dependency and perhaps provides a way to compute it if the current date were known.

Let’s break down:

1. The question: “What date of 2026 is it tomorrow?” - This implies that tomorrow is a day in the year 2026.
2. However, the current date (the “today” from which we are counting tomorrow) is not provided. Without knowing today’s date in 2026, we cannot determine tomorrow’s date.
3. Possibilities: a) If the user is asking this question in 2026, then we could use the current date. But the current year is 2023, so we are not in 2026. b) The user might be asking hypothetically, but without a reference, we cannot compute.
4. Therefore, the answer must be conditional on knowing the current date.
5. We can provide a general method: - Tomorrow’s date in 2026 is one day after the current date (which must be in 2026). - For example, if today is January 1, 2026, then tomorrow is January 2, 2026.
6. But note: The question says “date of 2026”, so we are only concerned with dates in the year 2026. If today is December 31, 2026, then tomorrow would be January 1, 2027, which is not in 2026. So the question implies that tomorrow must also be in 2026.
7. Therefore, the answer is only defined for dates in 2026 from January 1 to December 30 (because on December 31, tomorrow would be in 2027). So we must also note that if the current date is December 31, 2026, then tomorrow is not in 2026.
8. Approach for the response: - Explain that without knowing the current date in 2026, we cannot give a specific answer. - Provide the general rule: tomorrow’s date is the next day in the calendar after the current date. - Note that the current date must be between January 1, 2026 and December 30, 2026 for tomorrow to also be in 2026. - If the user has a specific current date in mind, they can provide it and we can compute.
9. However, note that the user might be asking this on a particular day (say, the day they are asking the question) but in 2026? That doesn’t make sense because we are in 2023. So we must clarify that we are in 2023 and 2026 is in the future.
10. Alternative interpretation: The user might be asking for the date of tomorrow in the year 2026, meaning if today is the same day of the year in 2026 as it is now in 2023? But that is not a standard interpretation and the question does not specify.
11. Conclusion: The question is ambiguous without a reference point. We must ask for the current date in 2026 or explain that we need that information.

But note: The user might be asking this question on a specific day (say, June 15, 2023) and want to know what date it would be tomorrow in 2026? That is, the same day of the year? However, the question does not specify that.

Let’s re-read: “What date of 2026 is it tomorrow?” This is ambiguous. It could mean:

Option A: “If today is some day in 2026, what date is tomorrow (in 2026)?” -> requires knowing today in 2026.
Option B: “What is tomorrow’s date (in 2023) and then what would that same day be in 2026?” -> but that would be the same day of the year, but the year changes. However, the question says “date of 2026”, so Option B would not be a date in 2026 if tomorrow in 2023 is in 2023. For example
I pressed ‘Stop’ because I’m pretty sure the poor guy was in a logic loop, and I didn’t want to heat up the oceans any further with my diabolically complex line of questioning.
you can’t expect AI to know answers to such deep questions D:
I mean… Yeah. Anyone who knows even the first thing about how an LLM works is going to tell you it’s not qualified to answer that. That niche functionality would need to be tacked onto the LLM as, say, a deterministic algorithm it could call and report the results from (something like the sketch below).
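To be clear, no vendor’s actual tool-calling API is shown here; this is just a minimal Python sketch, with hypothetical names, of the kind of deterministic helper I mean, where the model delegates date arithmetic instead of “reasoning” about it:

```python
from datetime import date, timedelta

def tomorrows_date(today: date) -> date:
    """Deterministic date arithmetic: no tokens, no 'reasoning', no loops."""
    return today + timedelta(days=1)

# A hypothetical tool-calling setup would register this function with the
# model; the model emits something like
# {"tool": "tomorrows_date", "today": "2026-01-01"}
# and simply reports the returned value instead of guessing.
print(tomorrows_date(date(2026, 1, 1)))  # 2026-01-02
```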
In a world full of valid arguments against widespread generative AI, you chose one that at best says “people need to be more educated about generative AI before using it” and at worst says “I need to be more educated about generative AI before campaigning against it”.
This is the ‘next level natural language technology’?
Always some AI white knights in the comments, bravely telling people ‘you’re using AI wrong. you’re ignorant. you’re uneducated about generative AI’, as though this isn’t literally the first thing that the market-leader OpenAI suggests you use it for presently.

bravely telling people ‘you’re using AI wrong.
More like telling that person they lack such a bare minimum understanding of how an LLM works that it’s comical. This is as fucking stupid as somebody complaining that their band saw can’t trim their fingernails.
literally the first thing that the market-leader OpenAI suggests you use it for presently
“Quiz me on vocabulary”? Oh, yeah, you know, I remember all those vocabulary quizzes I had in school that asked: “How many times does the letter ‘t’ appear in ‘platitudinous’?” Oh, wait, no, it’s referring to things like meaning, usage, example sentences, etc. – actual vocabulary questions.
I don’t use LLMs since I don’t find myself ever needing them, and you’ll find I don’t pull punches with them either, but since you’re whining that the placeholder text in the input box is misleading, I used it for a vocabulary question I would otherwise use Wiktionary for:

Looks good to me, boss. Either you don’t understand what quizzing someone on vocabulary means or you assume the person is in kindergarten and needs to learn how to count the number of letters in a word.
Correct spelling is the fundamental component of words; without words there is no vocabulary. Without understanding words, LLMs have absolutely no understanding of vocabulary. They can certainly spew out things they’ve tokenized and weighted from ingested inputs, though - like when people trick them into believing false definitions by simply repeating them as correct, thereby manipulating (or poisoning) the weighting. ChatGPT and other LLMs regularly fail to interpret common parts of vocabulary - e.g. idioms, word spellings, action-reaction consequences in a sentence (see the tokenization sketch after this comment). They’re fancy autocomplete, filled with stolen (and occasionally licensed) data.
Sure seems like the problem isn’t that me or the other guy ‘don’t know how to use LLMs’, but rather that they keep getting sold as something they’re not.
Congrats though, you just used a 100 billion dollar machine array to more or less output the exact content of a Wikipedia article - you really proved your point that it’s very good when you know what to ask it, and us plebs are just dumb at questions, or something 👍 https://en.wikipedia.org/wiki/Platitude
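For anyone curious what “tokenized” means in practice, here’s a minimal sketch, assuming OpenAI’s open-source tiktoken tokenizer (the exact token splits vary by encoding); it shows the chunks a model actually sees versus the trivial deterministic letter count:

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "ice cream"

ids = enc.encode(text)
print(ids)                             # a short list of integer token IDs
print([enc.decode([t]) for t in ids])  # the chunks the model actually "sees"

# Counting letters, by contrast, is trivial deterministic string work:
print(text.lower().count("c"))         # 2
```

The point isn’t the specific splits; it’s that letter-level questions ask the model about a representation it never receives.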
claude says
There are 2 c’s in “ice cream” — one in “ice” and one in “cream.”
Bungo 3.1 says I should use a clothes iron to get the wrinkles out of my testicles.
Well, it’s not wrong that that will work … the use of the word “should” is debatable …