• Electricd@lemmybefree.net · 1 day ago

    While I haven’t experienced it myself, I believe I have some idea of what it can be like: even a small thing can trigger a reaction.

    But I maintain that LLMs can’t be changed without huge tradeoffs. They’re not really intelligent; each step, they just predict the next token from learned weights and statistical patterns.
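
    A toy sketch of what I mean, in Python (the “weights” table here is completely made up, not from any real model, but the principle is the same at scale):

    ```python
    import random

    # Made-up "learned" statistics: probabilities for the next word
    # given the two previous words. Real models learn billions of
    # such weights, but each step is still a weighted dice roll.
    weights = {
        ("you", "should"): {"definitely": 0.6, "maybe": 0.3, "not": 0.1},
    }

    def next_token(context):
        dist = weights[context]
        return random.choices(list(dist), weights=list(dist.values()))[0]

    print(next_token(("you", "should")))  # most often "definitely"
    ```

    There’s no reasoning step anywhere in that loop, just sampling from text statistics.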

    They should not be used for personal decisions, as they will often try to agree with you, because that’s how the system works. Making long discussions will also trick the system into ignoring its system prompt and safeguards. Those are issues all LLMs share, just like prompt injection, due to their nature.
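
    To make the long-discussion point concrete, here’s a deliberately naive sketch (real chat stacks usually pin the system prompt instead of dropping it, but instructions sitting at the start of a very long context lose influence in a similar way):

    ```python
    MAX_TOKENS = 50  # hypothetical context window, tiny for demo purposes

    def build_prompt(system_prompt, history):
        msgs = [system_prompt] + history
        # Naive truncation: drop the oldest messages until everything fits.
        # The system prompt sits at the front, so it's the first to go.
        while sum(len(m.split()) for m in msgs) > MAX_TOKENS:
            msgs.pop(0)
        return "\n".join(msgs)

    history = [f"user message {i} ..." for i in range(40)]
    prompt = build_prompt("SYSTEM: refuse harmful requests.", history)
    print("system prompt survived:", prompt.startswith("SYSTEM"))  # False
    ```

    Same root cause as prompt injection: the model can’t fundamentally tell instructions apart from conversation, it just weighs whatever text is in the context.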

    I do agree, though, that more should be done on the prevention side, like displaying more warnings.