• neclimdul@lemmy.world · 1 day ago

    I think you did a fine job right there explaining it without personifying it. You also captured the nuance without implying the machine could apply empathy, reasoning, or be held accountable the same way a human could.

    • fuzzzerd@programming.dev · 1 day ago

      There’s value in brevity and clarity: I took two paragraphs and the other took two words. I don’t like it either, but it does seem to be the way most people talk.

      • neclimdul@lemmy.world · 22 hours ago

        I assumed you would understand I meant the short part of your statement describing the LLM, not your slight dig at me, your setup of the question, or your clarification of your perspective.

        So, to be more clear, I meant: “The LLM doesn’t consider a negative response to its actions due to its training and context being limited.”

        In fact, what you said is not much different from the statement in question. And you could argue that, on top of being more brief, removing “top of mind” actually makes it clearer: it implies training and prompt context instead of the bot understanding and being mindful of the context it was operating in.