People have always misused search engines by typing whole questions as a search…
With AI they can still do that and get, I think in their opinion, a better result
They get an answer, but unlike a search engine, the AI doesn’t show its work. I want a citation with the answer; I’m not taking your word for it!
Eh? You can ask it to provide sources and it will. Or at least Google AI in the search box does it by default
There’s lots of things wrong with AI, but that’s actually not one of them much of the time.
There is no guarantee those sources say what the answer says, or indeed that they exist at all. Generators can and do assemble words into phrases that look like citations for sources that were never written. It’s actually a problem for librarians, who keep getting accused of hiding nonexistent books “cited” by ChatGPT.
Oh interesting. It should do this by default then.
Defaults matter. They normalize patterns of behaviour. People who are normalized not to care about citations are being trained to blindly accept whatever they’re told. That’s a recipe for an unthinking, obedient, submissive society.
Congratulations, you’re now caught up on the last two decades
Oh this has been going on for centuries. Technology is always changing and so is culture! I think it’s usually the case that technology changes first and culture takes a while to catch up.
It used to be funny when someone wrote a two-sentence-long “search query” on Google. Nowadays, you can literally do that on any LLM and you’ll get a summary based on a few results. There are a whole bunch of problems with that, but I’ll just let the people from [email protected] elaborate.
Anyway, I gave this query to DDG: “I just bought a bag of carrots and I don’t know what to do with them. Should I make soup or something? What are the other ingredients I would need for that?”
and got this response:
“You can make a simple carrot soup with just a few ingredients. You’ll need carrots, onions, garlic, broth, and cream or coconut milk. Some recipes also include butter, olive oil, and spices like curry paste or ginger for extra flavor.”
Gotta say, that wasn’t too bad. I didn’t need to open a single cooking blog to figure out what I need.
You already told it you were interested in soup. It didn’t provide cook times, needed prep work, or portions. It didn’t mention any alternatives or possibilities.
You will need to open a recipe blog anyway, after taking the time to read that and determine it isn’t everything you need to know, and meanwhile it drank a Honda Civic’s volume of water and used enough electricity to run resistive space heaters for your house for 17 hours in below-zero °F weather.
It created that answer by comparing its statistical word tree to other, similar word combinations and then autocompleting the next most likely word you might want to hear (see the sketch after this comment). It did not consider your topic in any way; it doesn’t know what a carrot is, only its token number and that it kind of belongs in paragraphs that roughly resemble the one it gave you. It is a reverse-Gaussian-blur of a Gaussian-blurred overlay of a million photos of paragraphs about carrots, soups, and carrot soups.
It carved away forests and poisoned nearby pensioners’ air just to give you this gray area of an answer, devoid of all thought or creativity. It is objectively worse than the ad-strewn sites written by an actual person, in every way, and you’d have to be a fucking madman to offer it any praise.
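To make the “autocomplete” point concrete, here’s a deliberately tiny sketch of the idea: a toy bigram model of my own invention. This is not how any production LLM is built (those are transformers with billions of parameters), but the pick-the-likeliest-next-token generation loop has the same shape:

```python
from collections import Counter, defaultdict

# Toy bigram "language model": count which token follows which, then
# generate text by repeatedly appending the most likely next token.
corpus = (
    "you can make a simple carrot soup with carrots onions garlic broth "
    "and cream . carrot soup needs carrots and broth . some recipes add "
    "ginger for flavor ."
).split()

follows = defaultdict(Counter)  # token -> counts of the tokens that follow it
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(token: str, n: int = 10) -> str:
    out = [token]
    for _ in range(n):
        candidates = follows[out[-1]]
        if not candidates:
            break
        # "Autocomplete": append the single most likely continuation.
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("carrot"))
# -> carrot soup with carrots onions garlic broth and cream . carrot
```

Fluent-looking carrot-soup word salad, produced with no model of what a carrot is, only of which tokens tend to sit next to each other.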
People who use LLMs as search engines run a very high risk of “learning” misinformation. LLMs excel at being “confidently incorrect”. Not always, but also not rarely, LLMs slip bits of false information into a result. That confident packaging, along with the fact that the misinformation is likely surrounded by actual facts, often convinces people that everything the LLM returned is correct.
Don’t use an LLM as your sole source of information or as a complete replacement for search.
EDIT: Treat LLM results as gossip or rumor.
Just had a discussion with an LLM about the plot of a particular movie, specifically the parts where the plot falls short. I asked it to list all the parts that feel contrived.
It gave me 7 points that were OK, but the 8th one was 100% hallucinated. That event is not in the movie at all. It also totally missed the 5 completely obvious contrived screw-ups in the movie’s ending, so I was not very convinced by this plot analysis.
That’s my main issue with LLMs. If I need to fact-check the information anyway, I’d save time by looking it up elsewhere directly. It makes no sense to me.
I tend to think that people use AI (and yeah, search engines too) the way children use their parents:
“Mom, why is the sky blue?” “Mom, where is China?” “Mom, can you help me with this school project?” (The mother ends up doing everything).
The thing is, unlike a parent, AI is unable to tell users that it doesn’t know everything and that users should do things on their own. Because that would reduce the number of users.
The thing is, unlike a parent, AI is unable to tell users that it doesn’t know everything and that users should do things on their own.
The world would be a better place if most parents did that instead of confidently spewing bigotry, misogyny, and other terrible opinions. As a kid I only knew of a few who were able to say “I don’t know”, and the ratio is about the same with adults.
Blame the Dunning-Kruger effect. The people I have seen most likely to acknowledge their lack of knowledge in a certain area have been those who are very wise and well-versed in at least one field, such as science, history (like my mom), art, etc.
Mediocre people are mostly convinced that they know everything.
Mom, why is China?
AI has a lot more surface knowledge about a lot more things than my parents ever did. I think one of the more insidious things about AI, though, is that with a human you can generally tell when they are out of their depth. They grasp for words. Their speech cadence is more hesitant. Their hesitation is palpable. (I think palpable might be considered slop these days, but fuck haters it’s how I write — emdashes and all.)
AI never gives you that hint. It’s like an autistic encyclopedia. “You want to know about the sun? I read just the book. Turns out there’s a god who pulls it across the sky every day.” And then it proceeds to gaslight you when you ask probing questions.
(It has gotten better about this due to the advanced meta prompting behind the scenes and other improvements, but the guardrails are leaky.)
Maybe AI should be more like a parent and simply say “I don’t know. Go read a book, find out, and let me know”.
Pretty sure my mom did know the answer but I learned more by reading a book and telling her what I learned.
Me too! Nothing helped me think for myself more than my mother yelling at me, “I don’t know! The encyclopedia is right there! Go read it and let me cook, for God’s sake!”
An LLM can be used as a search engine for things you know absolutely zero terminology about. That’s convenient. You can’t ask Google for “tiny striped barrels with wires” and expect to get an explanation of resistor markings.
10-15 years ago Google returned the correct answers when I used the wrong words. For example, it would have most likely returned resistors for that query because of the stripes, and if you left off stripes it would have been capacitors.
AI isn’t nearly as good as Google was 10+ years ago.
There’s a theory that it’s by design. They’ve made search so bad that we now turn to AI to give us what search once could, and that way they can effectively charge you for searching… an idea we generally would have dismissed as baloney: paying to search.
Reverse image search would let you find that answer more accurately than some LLM.
How? And don’t those image searches have LLMs under the hood?
When you see something and have no idea what it is, you just take a photo and run a reverse search, which finds similar photos and the name of the thing. You don’t even need to spend time describing what you see, and there’s no chance of getting a confidently wrong answer. Reverse image search has existed for more than a decade and doesn’t use LLMs (a rough sketch of the classic technique is below).
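For what it’s worth, the classic pre-LLM technique behind this is perceptual hashing plus nearest-neighbour lookup. A minimal sketch, assuming Pillow is installed; the file names are placeholder assumptions, and modern engines use learned embeddings with approximate-nearest-neighbour indexes instead, though the overall shape (fingerprint, index, nearest match) is the same:

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to a size x size grayscale thumbnail, then set one bit per
    pixel depending on whether it is brighter than the mean."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (p > mean)
    return bits

def hamming(a: int, b: int) -> int:
    # Number of differing bits; lower means more visually similar.
    return bin(a ^ b).count("1")

# Hash a set of known images once, then match a query photo by smallest
# Hamming distance. All file names here are hypothetical examples.
index = {name: average_hash(name) for name in ["resistor.jpg", "capacitor.jpg"]}
query = average_hash("mystery_component.jpg")
best = min(index, key=lambda name: hamming(index[name], query))
print(best, hamming(index[best], query))
```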
ML is ML, whether it’s an LLM or not. And the question “What is this thing?” covers a negligibly tiny percentage of search requests.
It worked for me yesterday when I tried to find a video by describing it and what I remembered from the thumbnail. That was great. I want that for my own photos and videos without having to upload them somewhere.
It sounds like you might be referring to miniature striped barrels used in crafts or model-making, often decorated or with wire elements for embellishment or functionality. These barrels can be used in various DIY projects, including model railroads, dioramas, or even as decorative items.
I’ve unfortunately noticed that as LLMs have gotten more traction, search engines in my experience have gotten worse. Sometimes I have to do 2 or 3 searches to get exactly the right articles that actually relate to what I’m looking for. By contrast, LLMs are great for asking a question directly and figuring out exactly what you’re looking for, and then going to a search engine and doing some research on your own. It would be nice if there were a way to somehow combine the two without the ridiculously egregious environmental and intellectual issues of LLMs.
Is that not what Google does now? They give you a little AI summary with information taken from the first few results and break it down into a more easily digestible version.
I guess? I only use Google at work, though, so I’m not too familiar. But it still hits my issues with LLMs, and it’s forced on in Google, I believe.
Some people like AI because they treat it as if it’s the voice of God speaking directly to them.
An LLM is little more than a search engine.
Yeah, that’s what I use it for mostly. On DDG I’ll ask it stuff like someone’s age, or when someone passed, etc., to get a quick description of something. And if I need more info I’ll look it up on my own.