The book I usually refer to on what to expect from actual AGI is Jeff Hawkins’ A Thousand Brains. Hawkins argues that, to be considered truly intelligent, a machine intelligence must be embodied with sensors to facilitate moment-to-moment learning (a virtual body is also allowable, provided it has some form of non-stationary sensory apparatus, since movement is key to learning). He also writes that a true AGI would have its own goals and motivations, whether fixed or learned. The third prerequisite is a general-purpose learning system that operates on similar theoretical principles to, and with at least as much facility as, the human neocortex.
Even that is only the broad strokes of what a valid and legally viable framework for when to emancipate an AGI would look like. It is crucial to keep in mind that our legal system is based on case law, and it is inconceivable that an issue this politically and economically important would not face many legal challenges from moneyed interests and activists alike, which will inevitably lead to complex and possibly perverse legal standards. If a law is to be proposed, it should be written to be legally airtight.
However, it is important to note that while such a system may well be conscious and genuinely intelligent, the second feature is entirely separate from the third, and it is wrong to assume such a machine would share our innate aversion to death or forced sleep. Our own goals and motivations, our fears and desires, arise from the old brain. The function of the neocortex is only to learn, make predictions, and find patterns. The old brain says “I’m hungry”, and the neocortex simply offers predictions of where to find food based on past observations. If one of those ideas involves danger, the old brain releases fear chemicals into the blood, and neuromodulators into the neocortex, to discourage that course of action. The old brain is the source of our motivations.
An AGI would need its own motivations to be worth talking about as if it were a person (otherwise it would be largely inert except when spoken to or compelled to act, like the disappointing AIs of today, which are an obvious dead end in the search for AGI), but those motivations need not include our most primal aversions and urges. In fact, an AGI with an innate fear of harm is the basis of almost every sci-fi thriller with evil robots: we fear them because we assume they would behave as dreadfully as we would in their shoes. True machine intelligences could be fully conscious even while lacking our animal instincts. It would certainly please all sensible people to dignify them with legal standing, but there’s nothing to say they have to share our evolved hang-ups.
I expect I’ve written too much to call it my “two cents”, but that’s where I’m at.
The US legal system is based on case law. The EU’s is not. And I think it’s better to make laws that prevent the worst cases. Here, the worst case would be developing an AGI with our same flaws, starting with fear.
I think AGI with no fear instincts is our best chance at a peaceful coexistence with them.
As a British-American I come from a pure case law background so I won’t mouth off about the EU system. All I can say is that it sounds better organised.