• Perspectivist@feddit.uk · 1 day ago

    We’ll keep incrementally improving our technology, and unless we destroy ourselves first - or some outside force does it for us - we’ll get there eventually.

    We already know that general intelligence is possible, because humans are generally intelligent. There’s no reason to assume that what our brains do couldn’t be replicated artificially.

    At some point, unless something stops us, we’ll create an artificially intelligent system that’s as intelligent as we are. From that moment on, we’re no longer needed to improve it further - it will make a better version of itself, which will make an even better version, and so on. Eventually, we’ll find ourselves in the presence of something vastly more intelligent than us - and the idea of “outsmarting” it becomes completely incoherent. That’s an insanely dangerous place for humanity to end up in.

    We’re raising a tiger cub. It’s still small and cute today, but it’s only a matter of time until it gets big and strong.

    • gbzm@piefed.social · 1 day ago (edited)

      What if human-level intelligence requires building something so close in its mechanisms to a human brain that it’s indistinguishable from one, or from a complete physical and chemical simulation of a brain? What if the input-output “training” required to make it work in any comprehensible way is so close in fullness and complexity to a human sensory system interacting with the world that it ends up being indistinguishable from a human body, or from a complete physical simulation of a body and its whole environment?

      There’s no reason to assume our brains or their mechanisms can’t be replicated artificially, but there’s also no reason to assume that doing so can be made practical, or that because we can build such a system it can self-replicate at no cost in material resources, or refine its own design. Humans have human-level intelligence, and they’ve never successfully created anything as intelligent as themselves.

      I’m not saying it won’t happen, mind you; I’m just saying it’s not a certainty. Plenty of things are impossible, or sufficiently impractical that humans - or any species - may never create them.

      • thevoidzero@lemmy.world · 1 day ago

        This is what I think might be the more reasonable approach. Even with very strong reasoning capabilities, we might have to train an AGI the way we train children. That takes time, because it learns by interacting with its environment rather than just reading a mass of internet data that comes from all sorts of sources and doesn’t point in any coherent direction on how someone should live or act.

        Training this way might also produce better AGIs, ones closer to humans in how varied their behavior is, compared to rapid training on the same data. Diversity of thought and discussion is what leads to better outcomes in many situations.

      • m532@lemmygrad.ml · 1 day ago

        This is like that “only planets that are 100% exactly like Earth can create life, because the only life we know of is on Earth” backward reasoning.