Looks so real!

  • ji59@hilariouschaos.com · 1 day ago

    Except … being alive is well defined. But consciousness is not. And we do not even know where it comes from.

    • peopleproblems@lemmy.world · 22 hours ago

      Not fully, but we know it requires a minimum amount of activity in the brains of vertebrates, and it is at least observable in some large invertebrates.

      I’m vastly oversimplifying and I’m not an expert, but essentially consciousness is an automatic processing state of all present stimulation in a creature’s environment, one that allows the creature to react to new information in a probably survivable way and to react to it again in the future despite minor changes in the environment. That’s why you can scare an animal away from food while a threat is present, but you can’t scare away an insect.

      It appears that the frequency of activity is related to the amount of information processed and held in memory. At a certain threshold of activity, most unfiltered stimuli are retained to form what we would call consciousness, in the form of maintaining sensory awareness and, at least in humans, thought awareness. Below that threshold both short-term and long-term memory are impaired, and no response to stimulation occurs. Basic autonomic function is maintained, but severely impacted.

      • ji59@hilariouschaos.com · 20 hours ago

        Okay, so by my understanding of what you’ve said, an LLM could be considered conscious, since studies have pointed to its resilience to changes and its attempts to preserve itself?

        • LesserAbe@lemmy.world · 19 hours ago

          Yeah, it seems like the major obstacles to saying an LLM is conscious, at least in an animal sense, are 1) setting it up to continuously evaluate/generate responses even without a user prompt, and 2) allowing that continuous analysis/response to be incorporated into the LLM’s training.

          The first one seems like it would be comparatively easy: get sufficient processing power and memory, then program it to evaluate and respond to all previous input once a second or whatever.
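
          Something like this toy loop, say (a sketch in Python; generate is a made-up stand-in for whatever model call you have, not a real API):

            import time

            def generate(context: list[str]) -> str:
                """Stand-in for a real model call (a local LLM, an API, whatever)."""
                return f"(reaction to {len(context)} prior events)"

            history: list[str] = []   # everything the agent has "perceived" so far

            while True:
                # Re-evaluate all accumulated input even with no user prompt pending.
                thought = generate(history)
                history.append(thought)
                time.sleep(1.0)       # "once a second or whatever"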

          The second one seems more challenging; as I understand it, training an LLM is very resource intensive. Right now, when it “remembers” a conversation it’s just because we prime it by feeding in every previous interaction before the most recent query when we hit submit.

          • ji59@hilariouschaos.com · 16 hours ago

            As I said in another comment, doesn’t the ChatGPT app allow a live conversation with a user? I do not use it, but I saw that it can continuously listen to the user and react live, even use a camera. There is a problem with the growing context, since it is limited. But I saw in some places that the context can be replaced with an LLM-generated chat summary. So I do not think continuity is an obstacle, unless you want unlimited history with all details preserved.
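
            As a rough sketch of how I imagine that summary trick working (pure Python; summarize is hypothetical, not any real API):

              MAX_TURNS = 20   # keep only this many recent turns verbatim

              def summarize(turns: list[str]) -> str:
                  # Hypothetical: ask the LLM itself to compress old turns into prose.
                  return f"Summary of {len(turns)} earlier turns ..."

              def trim_context(summary: str, turns: list[str]) -> tuple[str, list[str]]:
                  """Fold overflow turns into a rolling summary so the prompt stays bounded."""
                  if len(turns) <= MAX_TURNS:
                      return summary, turns
                  overflow, kept = turns[:-MAX_TURNS], turns[-MAX_TURNS:]
                  return summarize([summary, *overflow]), kept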

            • LesserAbe@lemmy.world · 6 hours ago

              I’m just a person interested in / reading about the subject so I could be mistaken about details, but:

              When we train an LLM we’re trying to mimic the way neurons work. Training is the really resource-intensive part. Right now companies will train a model, then use it for 6-12 months or whatever before releasing a new version.

              When you and I have a “conversation” with ChatGPT, it’s always with that base model; it’s not actively learning from the conversation, in the sense that new neural pathways are being created. What’s actually happening is that a prompt like this is submitted: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe”.

              Then it replies, and the next thing I type gets submitted like this: {{openai crafted preliminary prompt}} + “Abe: Hello I’m Abe” + {{agent response}} + “Abe: Good to meet you computer friend!”

              And so on. Each time, you’re only talking to that base LLM, but feeding it the history of the conversation at the same time as your new prompt.
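
              In code, that re-priming looks roughly like this (a sketch, not OpenAI’s actual implementation; base_model is a stand-in for the frozen model):

                SYSTEM_PROMPT = "{{openai crafted preliminary prompt}}"   # opaque to users

                def base_model(prompt: str) -> str:
                    # Stand-in for the frozen, pretrained model; its weights never change here.
                    return f"(reply conditioned on {len(prompt)} chars of transcript)"

                history: list[str] = []

                def ask(user_message: str) -> str:
                    history.append(f"Abe: {user_message}")
                    # The same base model sees the system prompt plus the whole transcript
                    # on every turn; nothing is learned between calls.
                    reply = base_model("\n".join([SYSTEM_PROMPT, *history]))
                    history.append(f"Agent: {reply}")
                    return reply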

              You’re right to point out that now they’ve got the agents self-creating summaries of the conversation to allow them to “remember” more. But if we’re trying to argue for consciousness in the way we think of it with animals, not even arguing for humans yet, then I think the ability to actively synthesize experiences into the self is a requirement.

              A dog remembers when it found food in a certain place on its walk or if it got stabbed by a porcupine and will change its future behavior in response.

              Again, I’m not an expert, but I expect there’s a way to incorporate this type of learning in nearish real time; besides the technical work of figuring it out, though, doing so wouldn’t be very cost effective compared to the way they’re doing it now.
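
              If someone did want that, I’d picture buffering experiences and periodically paying for a training pass, something like this (completely hypothetical; fine_tune stands in for a real training job):

                experience_buffer: list[tuple[str, str]] = []   # (situation, outcome) pairs

                def fine_tune(experiences: list[tuple[str, str]]) -> None:
                    # Stand-in for the expensive part: folding experience into the weights.
                    print(f"training on {len(experiences)} experiences ...")

                def record(situation: str, outcome: str) -> None:
                    experience_buffer.append((situation, outcome))
                    if len(experience_buffer) >= 1000:
                        # The porcupine lesson only gets baked in at the next training pass.
                        fine_tune(experience_buffer)
                        experience_buffer.clear()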

        • SkavarSharraddas@gehirneimer.de · 18 hours ago

          IMO language is a layer above consciousness, a way to express sensory experiences. LLMs are “just” language: they don’t have sensory experiences, and they don’t process the world, especially not continuously.

          Do they want to preserve themselves? Or do they regurgitate sci-fi novels about “real” AIs not wanting to be shut down?

          • ji59@hilariouschaos.com · 16 hours ago

            I have seen several papers about LLM safety (for example “Alignment faking in large language models”) that show some “hidden” self-preserving behaviour in LLMs. But as far as I know, no one understands whether this behaviour is merely trained in and means nothing, or whether it emerged from the model’s complexity.

            Also, I do not use the ChatGPT app, but doesn’t it have a live chat feature where it continuously listens to the user and reacts live? It can even take pictures. So continuity isn’t a huge problem. And LLMs are able to interact with tools, so creating a tool that moves a robot hand shouldn’t be that complicated.
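
            As I understand it, tool use boils down to the model emitting a structured call that ordinary code dispatches, roughly like this (a toy sketch; move_hand is made up):

              import json

              def move_hand(x: float, y: float, z: float) -> str:
                  # Made-up actuator; a real robot hand would go through its own SDK.
                  return f"hand moved to ({x}, {y}, {z})"

              TOOLS = {"move_hand": move_hand}

              def dispatch(tool_call: str) -> str:
                  """Run whatever structured call the model emitted."""
                  call = json.loads(tool_call)
                  return TOOLS[call["name"]](**call["arguments"])

              # e.g. if the model replies with this JSON:
              print(dispatch('{"name": "move_hand", "arguments": {"x": 0.1, "y": 0.0, "z": 0.3}}'))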

            • LesserAbe@lemmy.world · 6 hours ago

              I responded to your other comment, but yes, I think you could set up an LLM agent with a camera and microphone and then continuously provide sensory input for it to respond to. (In the same way I’m continuously receiving input from my “camera” and “microphones” as long as I’m awake.)