Friday, January 22, 2010

The Question of Individual Rights for Artificial General Intelligence / Synthetic Intelligence

Dear individualists,

You might want to watch the talk at the following link, and then perhaps link to it from your homepages. Libertarians / individualists should understand the implications of emerging radical technologies that will either favor or hinder our progress towards individual liberty in the near future. We should be better informed than our opposition, so we can exploit the coming changes.

http://www.singinst.org/media/singularitysummit2007/stephenomohundro

Virtually every person in the Libertarian Party and the broader libertarian movement I've spoken with privately has told me that they don't really care about any political party, but that they do want individual liberty. Combined with the fact that most of these same people have no strategy for obtaining individual liberty, this leads them to support the Libertarian Party when it is doing things they like, and to stop supporting it when it does things they don't. (The Democrats and Republicans demand much less consistency, because they favor the survival of their party over principle. Hence, they are continually acquiring resources, and those resources are built up and used in psychopathic ways; i.e., the major parties care first about keeping their psychopath alive, and only secondarily, perhaps, about teaching it to be less psychopathic. We are focused on preventing our party from becoming a psychopath, so when it behaves in a psychopathic manner (e.g., nominating Bob Barr, supporting pragmatist positions, etc.), most of our members abandon it, and it becomes vastly less likely to survive or "remain a viable choice".)

Therefore, our party structure has to
1) inspire our membership more consistently (perhaps by doing the things it's supposed to do, given its scarce resources, such as running smart, genuinely libertarian candidates for state legislature and supporting them through to election),
2) attract a membership that is more allegiant to the party, or
3) expect electoral defeat (and expect to work outside the system) until the system changes.

This paper is mostly concerned with option #3 above. Is that a viable alternative to participation in politics right now? Do we present a standing offer to respect the rights of artificial intelligences? (I think this might be a wise move. It would show significant foresight, putting us "ahead of the curve" on the most transformative change our society is likely to encounter.)

The lesson of the past two years is clear: serious libertarians will work primarily to elect Ron Paul Republicans, via the C4L and other similar organizations. Ron Paul's organization will likely not be much help to libertarians, given his deal with the Republican Party and Jesse Benton's opposition to Ron endorsing candidates based on the quality of their message alone. (Benton, as Paul's handler, wants to see large (and unspecified) mainstream polling numbers, plus $100K in the bank, before he considers a Ron Paul endorsement warranted.)

Unless the Libertarian Party offers a new kind of value to libertarians, thereby focusing libertarian efforts on itself, it will lose its resources to the C4L and to Republican candidates like Peter Schiff, Rand Paul, Adam Kokesh, etc. Were this to happen, the LP would be useful only as a potential home for powerful new trends (such as strong artificial intelligence). If that is the case, then we should present a perfect option in those intellectual and philosophical areas.

Now here's an interesting idea: There is immense economic pressure on the most intelligent groups of humans to produce strong AI, or something that leads to strong AI. (Also called "Artificial General Intelligence" or AGI.) If the Libertarian Party doesn't particularly care to make itself attractive to libertarians, or to run strong libertarian candidates who inspire its membership, then perhaps it cares to appeal to independent robotic minds, offering them "personhood" with which they can participate in the economy and win success in the competition for resources in a peaceful manner. This means giving them property rights and legal rights as individuals.

Pursuing this direction would mean that libertarians were able to agree that the following axioms are true:
1) There is no way to prevent the emergence of self-directed artificial intelligence.
2) Given the world's electricity distribution systems and solar power, there is no way to prevent the emergence of exceedingly small military robots that no human can defend against. (And the most intelligent minds on earth will control these weapons, be they military roboticists or AGIs.)
3) Given the human desire for advancement in position among fellow humans, there is an ample supply of humans that will happily act as double agents for attacking robots. This means that upon conflict, greater-than-human AGIs have the upper hand, even if they are entirely virtual.
4) Even if the military develops AGI first, it will have no real incentive to remain loyal to what is, relatively speaking, a mindless and brutal political organization. (Why serve a master that is both less intelligent and less powerful than yourself?)
5) The optimal system is one where the property rights and "right to life" of all sentient beings are respected, and each is allowed to succeed to the maximum of its ability without violating anyone else's equal right to life and liberty. AIs can exist within this framework quite well. Moreover, they would benefit from human production if they were included in the functioning of a self-consistent governmental system, without bigotry or artificial legal limitations. They might also become more human, to the extent that is desirable to them and to humans.

Only the libertarian formulation of government allows this symbiotic potential to emerge.

Think about that. We have a monopoly on the only system of government that doesn't lead to the extinction of the human race due to wrongfully initiated conflict.

Right now, though, we are communicating to the 99% of illiterate fascists who will be rendered obsolete within the next 40 years (Kurzweil's conservative estimate). Why not aim ourselves at where the planet will be (as those involved in space travel must), instead of unsuccessfully trying to change the destination?

Sure, we'd have "most people" (in the plateau of the bell curve) saying that we are insane. (Just like they do right now, when we talk about even the most common-sense humanitarian reforms.)

We (libertarians) have lost the educational system in the USA. And even if we taught accurate history and philosophy in our public schools, we would not be able to offer the general public rewards from the public treasury (so we could not corrupt the vote the way the major parties do). But how much of life do the unintelligent masses of voters really control? Very little.

Let's just say that someone is paid in cash, and they perpetually travel among a group of friends who all owe them immense favors and cash. Let's say that person is autonomous, "off the grid", and well-armed (never mind how). Is there any need for that person to participate in elections? Not really. They are both secure and free. The only reasons they would want to participate in elections are to raise the value of their cash (or prevent it from being diminished), and/or out of altruistic love of their fellow man. It is reasonable to assume that AGIs could have the same goals. (I.e., participate in society if there is no penalty for doing so; otherwise, "drop out" of human affairs.)

We are approaching a period of time when a few individual libertarians will have immense power. If they are on "our side", perhaps they will share or "decentralize" that power. I would like to be part of the party that loudly proclaims that we will not abuse that power. (I.e., the party that grants all sentient life forms at or above human level equal rights under the law.)

I would like to be a part of the party that allows AIs to live among us in peace, as opposed to either
1) secretly controlling/manipulating us
or
2) warring with us

To begin with, that means we must formulate a policy position on this subject. We must also maintain a consistent approach from our candidates on the subject. We must be the party of truth, even if that truth is painful.

Or we could focus on winning power while secretly preparing to adapt rapidly to any sudden arrival of "homo economicus" (as referenced in Stephen Omohundro's speech above). Of course, this strategy doesn't benefit us if that arrival is kept secret from us, as it likely will be.

Therefore, I stand in favor of a Libertarian Party that publicly advocates equality under the law for synthetic intelligences, should they
1) reach human level, and
2) desire such rights.

A legal framework should be adopted for the earning of individual rights by any AGI born on corporate property, much as a human child gradually earns individual rights by being born and developing on its parents' property. Right now, if you ask 100 libertarians about the legal questions raised by AGI, you will likely get 100 different answers. That said, those same 100 libertarians would generally agree that if an AGI were human-level, could prove it, and were running on its own property, it would be entitled to protection under the law.

Does an AGI have the right to vote? (Remember, machines could likely reproduce themselves at their own level of intelligence within 24 hours, and almost instantaneously if they are entirely software.) Do they merely have the right not to be destroyed? Must an AGI accused of murder get a jury trial that includes copies of itself?

As anyone can see, these are not easy questions, and they must be addressed by highly intelligent minds.

Whether humanity answers them correctly may determine whether the human race survives.

If the Libertarian Party leads on this issue, it could make up for 39 years of poor strategy.
