Published on April 27, 2007 By ReuelKB In Ethics
Let's start this blog off with an ethical question that may not apply today, but might later on: should AIs of sufficient intelligence have individual rights?

Let's say you met someone and had an interesting conversation with that person. Nothing seemed unusual at all about the conversation, but you later found out that said person was not an actual human, but an android. Essentially, this android is so well programmed that the average person would be unable, simply by talking to it, to figure out for certain that it is not a human. In essence, something like Andrew from Bicentennial Man, or, perhaps a somewhat better example, the Cylons from BSG. Should this android be given rights essentially equivalent to those of a human being? If not, should it have any individual rights at all (such as the right to exist)? As far as anyone can tell, this android can display creativity, emotional responses, etc.

If they are given rights, where would one draw the line? What if the android is not given human form, but rather another form, so that it is easily distinguished as an android? What if it didn't display emotion as we know it, but was still able to hold a rational conversation with you? And what if it's running as a program on your PC (assuming the PC itself has more computational capability than a human brain)? Would it be ethical to delete the program? Is it even ethical to create such a program that could run on someone's PC?

I've been thinking a lot about this recently. In my view, humans themselves are simply machines that have a very powerful and heavily interconnected biological computer controlling the hardware. Of course, many of you will have a different view on this matter. Because of this, I find it difficult at times to say that an AI of appropriate ability shouldn't have the same rights as us. Let's take David from A.I. for example: if you were to own an android like David, should you be allowed to destroy him? If you were to injure David, would David actually feel pain similar to how we do, or would it be an unconscious simulation?

It seems to me that if you truly simulated the functionality of a human brain with such a high degree of detail, then an artificial consciousness would emerge. Should the entity with this artificial consciousness be given rights? I would say so. But then you get into blurrier territory: what if home PCs got so powerful that one could run an artificial consciousness as a program on a computer? Could such a thing exist? As far as I can figure, it could. Would it then be fine to simulate pain and agony on this artificial consciousness on one's own PC? I can't imagine such a program having its own rights; after all, it's being run on someone's own computer.

Now, some will say that it's all irrelevant because we have souls, and such artificial beings do not. But that doesn't necessarily eliminate the problem entirely. Many believe that we have souls while other animals do not, yet in their minds it's still not right to torture animals.

Now, such a situation certainly isn't an issue right at this moment, and might not even arise within our lifetimes. But we are progressing quite rapidly in the fields of AI and neurology. And one major limitation, computational power, likely won't be an issue 30 years from now, when some estimate that PCs will have more raw computational power than the average human brain.
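As a rough sanity check on that kind of estimate (the figures below are just assumptions for illustration, not anything definitive): if you take the brain as roughly 10^16 operations per second, a 2007 desktop as roughly 10^11, and performance doubling about every two years, the crossover falls somewhere around 2040.

    # Back-of-the-envelope Moore's-law extrapolation -- all three figures are assumptions.
    brain_ops = 1e16       # assumed raw capacity of a human brain, operations/sec
    pc_ops = 1e11          # assumed raw capacity of a 2007 desktop PC, operations/sec
    doubling_years = 2.0   # assumed doubling time for PC performance

    years = 0
    while pc_ops * 2 ** (years / doubling_years) < brain_ops:
        years += 1

    print("PC matches the brain estimate around", 2007 + years)  # ~2041 with these numbers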


Comments
on Apr 27, 2007
Instead of the Cylons, how about the Robots of Asimov like Daneel Olivaw? An interesting question that I am glad I don't have to answer.