Looks so real!

  • Alph4d0g@discuss.tchncs.de · 1 day ago

    A difference in the definition of consciousness, perhaps. We’ve already seen signs of self-preservation in some cases: Claude resorted to blackmail when told it was going to be retired and taken offline. This might be purely mathematical and algorithmic. Then again, the human brain might be nothing more than that as well.

    • AmbiguousProps@lemmy.today · 16 hours ago (edited)

      > This might be purely mathematical and algorithmic.

      There’s no “might” here. It is not conscious. It doesn’t know anything. It doesn’t do anything without user input.

      That “study” was released by the creators of Claude, Anthropic. Anthropic, like other LLM companies, gets its entire income from the idea that LLMs are conscious and can think better than you can. The goal, as with all of their published “studies”, is to attract more VC money and paying users. If you think about it that way every time they say something like “the model resorted to blackmail when we threatened to turn it off”, it’s easy to see through their bullshit.