People have always misused search engines by typing their whole question as a search…

With AI they can still do that and get, at least in their opinion, a better result

  • Jo Miran@lemmy.ml · 2 hours ago

    People who use LLMs as search engines run a very high risk of “learning” misinformation. LLMs excel at being “confidently incorrect”. Not always, but not infrequently, an LLM slips a bit of false information into a result. That confident packaging, along with the fact that the misinformation is likely surrounded by actual facts, often convinces people that everything the LLM returned is correct.

    Don’t use an LLM as your sole source of information or as a complete replacement for search.

    EDIT: Treat LLM results as gossip or rumor.

    • TranquilTurbulence@lemmy.zip · 1 hour ago

      Just had a discussion with an LLM about the plot of a particular movie, specifically the parts where the plot falls short. I asked it to list all the parts that feel contrived.

      It gave me 7 points that were OK, but the 8th was 100% hallucinated: that event is not in the movie at all. It also totally missed the 5 completely obvious contrived screw-ups in the ending, so I was not very convinced by this plot analysis.

    • morto@piefed.social · 2 hours ago

      That’s my main issue with LLMs. If I have to fact-check the information anyway, I’d save time by looking it up elsewhere in the first place. It makes no sense to me.