• Wlm@lemmy.zip
    3 hours ago

    Like a year ago, adding “and don’t be racist” to the prompt actually made the output less racist 🤷.

    • NιƙƙιDιɱҽʂ@lemmy.world
      3 hours ago

      That’s more of a tone thing, which is something AI is capable of modifying. Hallucination is more of a foundational issue, baked directly into how these models are designed and trained, and not something you can just tell the model not to do.
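
      A minimal sketch of what “baked in” means, using a toy next-token sampler (everything here is illustrative, not any real model’s internals): generation only ever picks a plausible next token from a probability distribution, and nothing in the loop checks whether the result is true.

          import random

          # Toy "language model": maps a two-token context to a probability
          # distribution over next tokens. A real LLM does the same thing at
          # vastly larger scale; note there is no "is this true?" step
          # anywhere, only "is this likely text?".
          TOY_MODEL = {
              ("the", "capital"): {"of": 1.0},
              ("capital", "of"): {"France": 0.5, "Atlantis": 0.5},  # plausible != true
          }

          def sample_next(context):
              dist = TOY_MODEL.get(context, {"<eos>": 1.0})
              tokens, weights = zip(*dist.items())
              return random.choices(tokens, weights=weights)[0]

          def generate(prompt, max_tokens=5):
              tokens = prompt.split()
              for _ in range(max_tokens):
                  nxt = sample_next(tuple(tokens[-2:]))
                  if nxt == "<eos>":
                      break
                  tokens.append(nxt)
              return " ".join(tokens)

          print(generate("the capital"))  # "...of France" or "...of Atlantis"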

      • Wlm@lemmy.zip
        2 hours ago

        Yeah, totally. It’s not even “hallucinating sometimes”; it’s fundamentally stringing characters together, and the result just happens to be true and/or useful some of the time. That’s why I dislike the “hallucination” terminology, since it implies the thing otherwise knows what it’s doing. Still, it’s interesting that a command like “but do it better” sometimes ‘helps’. E.g. “now fix a bug in your output” will probably work occasionally (see the sketch below). “Don’t lie” is never going to fly with LLMs, though (afaik).
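
        A quick sketch of why that occasionally ‘works’, with a hypothetical chat() stub standing in for any real LLM API (nothing below is a real library call): the follow-up prompt just re-samples with critique-shaped context added, which sometimes lands on output that happens to be correct. There’s still no truth check anywhere.

            def chat(messages):
                # Hypothetical stub standing in for a real LLM API; a real
                # version would send `messages` to a model endpoint.
                if "fix a bug" in messages[-1]["content"]:
                    return "print('hello world')"  # re-sampled draft, happens to be right
                return "print('hello world'"       # first draft, happens to be buggy

            def refine(task):
                messages = [{"role": "user", "content": task}]
                draft = chat(messages)
                # The follow-up gives the model no truth oracle; it only adds
                # context that shifts sampling toward text shaped like a fix.
                messages += [
                    {"role": "assistant", "content": draft},
                    {"role": "user", "content": "Now fix a bug in your output."},
                ]
                return chat(messages)

            print(refine("Write a hello-world program."))  # prints the 'fixed' draft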

      • Flic@mstdn.social
        3 hours ago

        @NikkiDimes @Wlm racism is about far more than tone. If you’ve trained your AI - or any kind of machine - on racist data, then it will be racist. Camera viewfinders that only track white faces because they don’t recognise black ones. Soap dispensers that only dispense for white hands. Diagnosis tools that only recognise rashes on white skin.

        • NιƙƙιDιɱҽʂ@lemmy.world
          33 minutes ago

          Oh absolutely, I didn’t mean to summarize such a topic so lightly; I meant that solely in this very narrow conversational context.