• MTK@lemmy.world
    23 hours ago

    I would argue that it would not have it; at best it might mimic humans if it is trained on human data. Kind of like if you asked an LLM whether murder is wrong: it would sound pretty convincing about its personal moral beliefs, but we know it's just spewing out human beliefs without any real understanding of them.