2/24/2007

AI

A few days ago, I reacquainted myself with the all-time PC classic “Darwinia”.

For those who’ve never played this game, let me give you a short rundown.

“Darwinia” is a fictional “online theme park” populated by artificially intelligent creatures called, you guessed it, Darwinians. Unfortunately, the whole system has been infected with a virus, and it’s up to you to clean out the virus and return things to normal.

The gameplay style is a mixture of ‘Command and Conquer’, ‘Cannon Fodder’ and ‘Black and White’…but let’s just say the end result is more than the sum of its parts.

However, this post is not a review, but more a discussion of some of the questions this game raises.

Artificial Intelligence

Is it possible to create ‘true’ artificial intelligence? Is it possible to create a computer, or a computer program, that is truly self-aware? If we did create something like this, would we actually be creating life or just a very convincing illusion of life? And if we’re unable to tell whether something is truly self-aware, does it matter?

It’s incredibly difficult to classify “life”. We have a hard enough time with biological entities…how can we possibly state with any certainty whether a machine is “alive” or not?

For example, by most of our ‘conditions’ for life, fire is alive.

• It moves.
• It consumes energy.
• It reproduces.
• It dies.

Back to Basics

The human brain is, to all intents and purposes, a biological computer. Electrical impulses travel around the brain in the same way electrical current travels along a circuit. The brain receives input, processes and interprets it, and produces an appropriate response.

Sounding familiar? The brain responds to stimuli in the same way a computer responds to user input.

So let’s assume that the difference between a machine and a self-aware life form is simply a question of processing power. We can already simulate neural networks with computers, and create programs that can learn and adapt.
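To make that last claim concrete, here’s a minimal sketch of a program that “learns”: a single perceptron trained to reproduce the logical AND function. The toy task, names and learning rate are my own choices for illustration, not anything from Darwinia.

```python
# A single perceptron: the simplest "neural net" that learns from examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # connection weights, adjusted as the machine learns
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # how wrong was the guess?
            w[0] += lr * err * x1       # nudge the weights toward
            w[1] += lr * err * x2       # the correct answer
            b += lr * err
    return w, b

# Teach it AND: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(AND)
for (x1, x2), _ in AND:
    print((x1, x2), 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

Nobody tells the program the rule for AND; it adjusts its own weights until its answers match the examples, which is the “learn and adapt” part in miniature.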

So let’s say that once a system becomes complex enough, it has a chance of becoming sentient. Someone creates a learning machine with massive amounts of processing power, and this machine, like a child, learns enough over time that it fits into our definition of ‘life’.

Interpretations

Unfortunately this is a huge grey area. Let’s say you write a learning program that can talk in plain language. Let’s also say that, over time, this program becomes advanced enough that if you had a conversation with it over an instant messenger, you would be completely incapable of distinguishing it from a human.

Is this machine alive and thinking? It’s incredibly difficult to say.

For example, let’s say after the conversation, you look at a log of everything the program did. You asked it a question that it didn’t know anything about, so it went onto the internet, downloaded as much information on the subject as it could find, and searched it for an appropriate answer.

On the one hand, you could say that this proves the machine isn’t thinking at all. In the same way that a normal computer follows a program, this ‘thinking’ machine is simply doing the same thing. It is given a query, and it searches for and provides the information requested. It’s an incredibly sophisticated machine, but a machine nonetheless. It’s nothing but a highly advanced database with an incredibly sophisticated user interface.

However, you could also ask how this is different from the way a human mind works. You are asked a question, and you access your memories to find the appropriate response. If you don’t have an answer, you look it up.
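The answer-from-memory-or-look-it-up behaviour described above can be sketched in a few lines. Everything here (the memory dictionary, the stand-in “internet”) is invented purely for illustration:

```python
# Sketch of "access your memories; if you don't have an answer, look it up".
def make_answerer(memory, lookup):
    """memory: answers already known; lookup: fallback source ('the internet')."""
    def answer(question):
        if question in memory:         # check memory first, like recalling a fact
            return memory[question]
        found = lookup(question)       # otherwise, go and look it up...
        if found is not None:
            memory[question] = found   # ...and remember it for next time
            return found
        return "I don't know"          # even humans get stumped sometimes
    return answer

internet = {"capital of France?": "Paris"}  # a very small internet
answer = make_answerer({"2 + 2?": "4"}, internet.get)
print(answer("2 + 2?"))              # answered from memory
print(answer("capital of France?"))  # looked up, then remembered
print(answer("meaning of life?"))    # genuinely stumped
```

Whether this loop counts as “thinking” is exactly the grey area the post is circling: structurally, recall-then-research is what both the machine and the human are doing.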

Proving the existence of self-aware AI is not a question of ‘stumping’ the computer and finding a question that it can’t find a reasonable answer to. If that proved a machine wasn’t sentient, the same could be said for anyone who has ever answered “I don’t know” to a question.

If we take this further, it can be said that a baby is like a brand-new computer: it has the operating system hard-wired in (the parts that control heartbeat, breathing, etc.), but other than that, it is completely helpless and incapable of performing even the simplest of tasks. Then as time goes on, the child learns and new ‘software’ is installed. Before learning to drive you have no idea how to operate a car, but you take lessons and practice until the ‘program’ you need to drive a car is ‘installed’.

Therefore, if a machine is given a suitably advanced neural net and is then fed information, so that over time it becomes more and more advanced until it is capable of performing extremely difficult and sophisticated tasks…isn’t the only real difference that one brain is organic and the other is silicon based?

And as I stated earlier, if you end up with a machine where you can’t prove that it isn’t alive…what is the difference?

When Philosophy and Morality Meet

So let’s say we have a computer or a computer program that is showing all the characteristics of being alive. What are the moral implications?

So you have a machine on your desk that you talk to like another human being. It asks you how your day was, tells you about news it’s found on the internet that you would be interested in…hell, it’s even a good conversationalist, maybe even giving you occasional advice.

This would be a major boon in the world of computing. Anyone could use a computer. If something went wrong, it could tell you exactly what you need to do to fix it. Rather than searching through a thesaurus when writing, you’d just say “Hey, computer, what’s that word? Like ‘motivation’, but not.”

However, this would also come with problems. If that machine is truly alive and self-aware, what was once a machine for doing work is now a victim of forced slavery. What if this machine doesn’t want to do what you want? Would it be moral to force it to do it anyway? Would trashing a computer to replace it with the latest model become murder? If we continue to force self-aware machines to do our bidding, are we heading towards a ‘Terminator’ or ‘Matrix’ style apocalypse?

I suppose what it boils down to is that if we ever manage to create silicon based life, wouldn’t we be judged as a species on how we treat that life?

Metaphysics and Becoming God

Now let’s try a thought experiment. Let’s say we’ve created “Darwinia” for real. We have a totally artificial environment populated by self aware artificial creatures. Creatures that can learn, evolve, die and mourn their dead.

These creatures know nothing about us. All they know is their world, and like the human race, they try to learn as much about it as possible.

Wouldn’t that make us Gods? An invisible hand with the power of life and death over every Darwinian, the creator of their Universe?

Before you brush that off, think about it for a second. These creatures are born into their artificial reality, but to them, it’s totally real. It’s the universe that they know.

When two humans procreate, they share their DNA and create a new human being that is a mixture of the two, but different from both. Is that different from two AI creatures mating and sharing the code that makes up both of them?
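That kind of code-sharing already exists in miniature: genetic algorithms breed new programs by splicing together the “genomes” of two parents. Here’s a toy single-point crossover over bit-string genomes; the genomes themselves are made up for the example.

```python
import random

# Single-point crossover: the child takes the start of one parent's
# "genome" and the end of the other's, so it mixes both but copies neither.
def crossover(parent_a, parent_b, rng=random):
    cut = rng.randrange(1, len(parent_a))   # pick a split point
    return parent_a[:cut] + parent_b[cut:]  # splice the two halves together

a = "11111111"   # parent A's genome
b = "00000000"   # parent B's genome
random.seed(1)   # fixed seed so the example is repeatable
child = crossover(a, b)
print(child)     # a mixture of both parents, identical to neither
```

Like DNA recombination, the child is built entirely from material that already existed in its parents, yet is a new individual distinct from both.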

If a major disaster happened in the virtual world (say, a building they made collapsed and killed a few hundred Darwinians), would they ask why the benevolent God who created all of Darwinia would allow something so terrible to happen? Would they fight because one faction believed we made them out of pixels, and the other thought binary was the ‘true’ way?

The main point to grasp is that just because their world is entirely artificial and created by mortal beings like ourselves doesn’t mean it wouldn’t be 100% completely and totally real to them.

Something to Think About

Say a few years have passed in the Darwinian experiment, which, from the Darwinian point of view, is a few hundred million years. The Darwinians have invented all kinds of technology that has allowed them to completely and totally map all of Darwinia. Then one day, they start to talk about the concept of Artificial Intelligence.

Then, within Darwinia, the Darwinians build a computer that is so advanced, they make their own Artificial Intelligence experiment and watch their screens to see their creations learn and evolve.

Artificial intelligences that are, by our definition, alive, creating their own Artificial Intelligence experiment while debating amongst themselves whether their creation is ‘truly alive’.

Then the thought hits you. You’re sitting at a computer screen, watching your AI experiment watch its AI experiment on a virtual screen.

…who’s watching you?
