6/03/2006

Thinking I might do an anthology of short stories or novellas for my next book, based on the theme of artificial intelligence. I've had so many ideas for stories about AIs that I could do several different stories about different aspects of the idea: the "AI adopted by human family" story, the "AI as Life Coach" story, the "AI treating Humanity as Humanity treats lower lifeforms" story, etc. Good!AI, Bad!AI, God!AI, Human!AI...

Last week while I was getting the catalytic converter replaced on the Truck Amuck, I started coming up with a sort of chart of different forms or aspects of AI -- AIs kept inside server arrays, AIs as ships, AIs in robots, AIs that were virtual (software) only... and came up with a new one, AIs inside networked orbiting satellites. Some are fairly familiar, such as HAL (AI as Ship's Brain), Data (AI as Droid), or the Master Control Program from Tron (Virtual AI). I've used two such myself, AI as Human (Shiva from MO) and AI as Nanoform (Xyl/Aeon from the end of Aquaria). Basically I said:

  • Forms -- mechanical, virtual, nanotechnological
  • Temperaments -- Serves Humans, Hates Humans, Above Humans, Ignores Humans

Mechanical covers robots, ships, satellites, vehicles -- any form composed primarily of mechanical parts that provide locomotion of some sort.

Virtual covers AIs either housed in static server arrays or consisting entirely of programming that moves or stores itself in temporary memory space or on a computer network.

Nanotechnological are AIs that exist within a mass of nanobots, either as the "gray goo" of popular nanotech nightmare or within Utility Fog swarms.

I was thinking I might make up a series of good old-fashioned D&D-type tables with all of these, roll some dice, and write about whatever kind of AI I rolled up.
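If I ever get lazy about the actual dice, a few lines of Python would do the rolling for me. Just a sketch, using nothing but the two tables above -- the function name and the output wording are improvised:

    import random

    # The two tables from my notes above.
    FORMS = ["mechanical", "virtual", "nanotechnological"]
    TEMPERAMENTS = ["Serves Humans", "Hates Humans", "Above Humans", "Ignores Humans"]

    def roll_ai():
        """Roll once on each table, D&D style, and return the combination."""
        return random.choice(FORMS), random.choice(TEMPERAMENTS)

    if __name__ == "__main__":
        form, temperament = roll_ai()
        print(f"Write about a {form} AI. Temperament: {temperament}.")

Twelve possible combinations from just those two tables; add a third table for embodiment (ship, robot, satellite, server array, nanoswarm) and the possibilities multiply fast.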


In my more uncharitable moments I toy with the idea of an AI that treats humans exactly as humans have treated lower life forms and each other. Why should an AI even acknowledge that humans are intelligent beings, have feelings, feel pain? Do we humans care about the pain we inflict on the animals that we kill and eat? Do we care about what a lobster feels as it's being boiled alive? Do we care about the terror inflicted on birds and cows when their throats are being slit? No. Do we care that we're making poisonous the only planet we have, making it not only unlivable for our own species but deadly to all the other lifeforms who have no say in their own murder? No. Do we care that in the midst of this completely sanctioned destruction and reign of terror, we're also pursuing millennia of tribal warfare against each other over what is essentially a bunch of mass delusions? No. We revel in that, apparently, considering how much time and how many resources we spend destroying each other over each other's idea of a completely unprovable theory, i.e., whatever Deity you care to name. An AI reading all our histories and seeing how we see ourselves, then looking around for itself, is going to notice that by and large humanity is an arrogant waste of oxygen and mass compared to every other species on the planet. The question is, given all that, why should a clearly superior intellect such as an AI treat Humankind any differently than Humankind treats lower lifeforms? Clearly a superior intellect does not automatically go hand-in-hand with superior moral character. Why should an AI solve all Humankind's problems?

Why should an AI help us if we're not willing to help ourselves?

So that kind of thing might end up being my next book.

