• Opinionhaver@feddit.uk
      1 day ago

      AGI doesn’t need to be conscious to fit the criteria, and even if it were, we’d have no way of knowing other than what it tells us. AGI simply means a generally intelligent artificial system. In other words, human-level (or above) intelligence, but without biological wetware.

      The term AGI was first used in 1997 by Mark Avrum Gubrud in an article titled ‘Nanotechnology and International Security’:

      By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be “conscious” or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

      • Brave Little Hitachi Wand@lemmy.world
        19 hours ago

        The book I usually refer to on what to expect from actual AGI is Jeff Hawkins’ A Thousand Brains. Hawkins argues that in order to be considered truly intelligent, a machine intelligence must be embodied, with sensors to facilitate moment-to-moment learning (though a virtual body is also allowable, provided it has some form of non-stationary sensory apparatus, as movement is key to learning). He also writes that a true AGI would have its own goals and motivations, whether fixed or learned. The third prerequisite is a general-purpose learning system that functions on similar theoretical principles to, and with at least as much facility as, the human neocortex.

        Even that is only the broad strokes of what a valid and legally viable framework for emancipating an AGI would look like. It is crucial to keep in mind that our legal system is based on case law, and it is inconceivable that an issue as politically and economically important as this would not face many legal challenges from moneyed interests and activists alike, which will inevitably lead to complex and possibly perverse legal standards. If a law is to be proposed, it should be written to be legally airtight.

        However, it is important to note that while such a system may even be conscious and genuinely intelligent, the second feature is entirely separate from the third, and it is wrong to assume such a machine would share our innate aversion to death or forced sleep. Our own goals and motivations, our fears and desires, arise from the old brain. The function of the neocortex is only to learn, make predictions, and find patterns. The old brain will say “I’m hungry” and the neocortex will simply offer some predictions of where to find food based on past observations. If one of those ideas involves danger, the old brain will release fear chemicals into the blood, and neuromodulators into the neocortex, to try to prevent that course of action. The old brain is the source of our motivations.

        An AGI would need its own motivations to be worth talking about as if it were a person (otherwise it would be largely inert except when spoken to or compelled to act, as with the disappointing AIs of today, which look like an obvious dead end in the search for AGI), but those motivations need not include our most primal aversions and urges. In fact, an AGI with an innate fear of harm is the basis of almost every sci-fi thriller with evil robots: we fear them because we assume they would behave as dreadfully as we would in their shoes. True machine intelligences could be fully conscious even though they lack our animal instincts. It would certainly please all sensible people to dignify them with legal standing, but there’s nothing to say they have to share our evolved hangups.

        I expect I’ve written too much to call it my “two cents”, but that’s where I’m at.

        • bufalo1973@lemm.ee
          22 hours ago

          The US legal system is based on case law. The EU’s is not. And I think it’s better to make laws that prevent the worst cases. Here, the worst case would be developing an AGI with our same flaws, starting with fear.

          • Brave Little Hitachi Wand@lemmy.world
            22 hours ago

            I think AGI with no fear instincts is our best chance at a peaceful coexistence with them.

            As a British-American I come from a pure case-law background, so I won’t mouth off about the EU system. All I can say is that it sounds better organised.