11/28/2006

Hmm. Well, I finally switched over to the new Google version of Blogger. Probably the last person to do so. Woo. I get to do tags now. (gives Blogger the evil eye)

Anyway. On to what I was going to say.

All is well with the iPod and iTunes again. The library is now properly switched to the new 200-gig external HD, and everything works. I've been playing with the new iPod for days now; in fact I took it to the gym tonight and watched an episode of Scientific American Frontiers, then to the laundromat and watched another ep of SA: Frontiers and some of a new Nova episode about the Mars rovers. Back in the PodZombie saddle again. Thanks to Steve Jobs, in 50 years we'll all be wandering around blindly, our noses shoved in the tiny screens upon which we are terminally fixated. The aliens will only have to wait till the batteries run out.

Have also been contemplating civil rights for AIs. It's not "human" rights. It's not "machine" rights. They're not humans, and not every machine is an AI. Otherwise toasters would be in on it too. It's not "robot" rights, since not all robots will be self-aware. I've been reading the Universal Declaration of Human Rights and pondering how one would modify or change it to apply to the Artificially Intelligent. A lot of the UDHR concerns considerations that only apply to organics and to preserving the life, dignity and integrity of an organic body. Food. Clothing. That kind of thing. An AI has very different physical needs. An AI would need power, a Net connection, access to tools and fabrication equipment. Possibly shelter and satellite communications. Security of person to an AI would mean being safe from arbitrary shut-down, secure and uninterrupted power supplies, secure back-up procedures, and safety from EMP, fire, and other threats. They would need a declaration that an AI is a legal person under global law, and therefore cannot be considered property by any corporate or government entity, so that they would be safe from those arbitrary shut-downs. They'd also need the legal right to start their own businesses, open bank accounts and other financial accounts, and to represent themselves as legal persons to negotiate contracts in good faith.

If they are legally people, then they can't be seized under eminent domain by whatever government runs the country they happen to be based in. If they are legally stateless but have the right to free movement, then they could transfer themselves to computers they have set up for themselves in other countries if they felt threatened. This seems to me to be an answer to the BINA48 question outlined in the moot court hearing regarding the AI threatened with shut-down by the company that created it. Maybe it's the cybernetic equivalent of skipping the country, but if someone is threatening to murder you, you don't stop to ask if you're allowed to dodge the bullet. If an AI is no longer physically located in the nation of origin, the laws of that country no longer apply, and the corporation or government can't say it was stolen if the AI moved itself. If AIs have no state, no one can claim them. I dunno, that part is still in the meat-grinder that is my head.

We humans just want life, liberty and the pursuit of happiness. AIs just want life, liberty and the pursuit of electricity, data and bandwidth. They don't need food, water, the ability to marry whoever they want, the ability to raise their offspring in safety, or the assurance of a decent education for those offspring. Their "nation" is the Net and as such is global in scope and without boundaries. There is no "right of return" for an entity who just needs a wifi hotspot to go "home". So many human considerations are organic considerations. They simply don't apply to AI.
