"I was a stranger on earth.
Stepping on the moon, I begin the gay pilgrimage
to new Jerusalems in foreign galaxies.
Heat. Cold. Craters of silence.
The Sea of Tranquillity rolling on the shores of entropy.
And, beyond, the intelligence of the stars."
Stanley Kunitz, "The Flight of Apollo", 1971
"This book is a mirror.
When a monkey looks in, no philosopher looks out."
Georg Christoph Lichtenberg (1742-1799)
When we think of 'self' we automatically assume that the term must be associated with a human being, but must this be so? I'd like to look here at some of the issues surrounding the possibility of non-human intelligent lifeforms and their conscious relationship to us.
There are four main areas I believe we must consider: Robots, Transfers, Alternative Biology and Extraterrestrials. I have no intention here of evaluating their respective likelihoods; my aim is simply to examine the philosophical problems we could encounter.
First, the Sci-Fi favourite: the robot. I am assuming here that a robot would be based on computer technology, which raises several questions:
a) Can a robot based on a computer appear human?
This is essentially the Turing test, whereby we question the robot (without seeing it) and try to decide whether it is a human or a machine. We would assume that such a robot would have to simulate all our human characteristics (emotions, irrationality, errors, etc.) to be thought human. But would testing language alone be enough? Some philosophers believe that behaviour, too, is needed to fully express our humanity. What test, then, would really prove the robot was human?
b) Is logic adequate for consciousness?
Computers currently work only via formal logic, but we humans have a number of other faculties: intuition, feelings, aesthetics and semantics (meanings), for example. Gödel showed that a consistent formal system rich enough to encode arithmetic cannot prove its own consistency, although that consistency can be obvious to a human observer. This may imply that our intuition, say, cannot be modelled by computer logic. We seem hard pressed as philosophers to explain what else need be added to make a computer 'human', or even whether this step is possible. This is an active topic in current philosophy.
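The Gödel result invoked here can be stated compactly. As a sketch of the standard formulation (where $\vdash$ means "proves" and $\mathrm{Con}(F)$ is the arithmetised sentence asserting that $F$ is consistent), the second incompleteness theorem reads:

```latex
% Goedel's second incompleteness theorem (standard statement, sketched):
% for any consistent, effectively axiomatised formal system F
% strong enough to encode elementary arithmetic,
F \nvdash \mathrm{Con}(F)
```

That is, $F$ itself cannot derive the sentence asserting its own consistency, even though an observer reasoning outside $F$ may accept that sentence as true.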
c) Would a 'human' robot be conscious?
Suppose it passed the Turing test. Would the robot then have to be conscious, to have a self? Could not such a machine behave indistinguishably from us without having consciousness? How, in fact, do we tell that any of our friends are conscious and not just such robots? This is a philosophical 'golden oldie' and still lacks, I think, a convincing answer.
d) Do 'human' rights apply to robots?
Would a known robot, emulating human characteristics perfectly, have a claim to be a member of the human race? We could not object that it lacks a human 'body', since people with artificial parts (perhaps, in time, whole-body replacements) would then also be rejected. But the robot can be switched on and off; is this significant?
Next, the transfers. Here we assume that we can extract a person's 'mind' and move it to another place, giving us some new questions:
e) Would a mind stored functionally in a machine still be human?
Is humanity just our mind, regardless of body? If we transfer the mind to a new body, are we the same person, or that of the new body? If a disembodied mind were used to run a factory (perceiving it as, say, a virtual-reality world), would it still be human?
f) If a copy of our mind were made, would we have two selves?
Would such a copy create life? Suppose we cloned this mind a thousand times; how would we persuade one of the copies philosophically that it wasn't an original? Are you, in fact, only the sum of your memories?
g) Could we switch off the machine without committing murder?
Continuity through time is often used as a definition of self; if we switch off, we interrupt that continuity. Doing so permanently is presumably equivalent to death, but how long could we leave it switched off without 'harming' the person: 100 years, a year, an hour, a second? Suppose we could move it to a replacement 'body' during the 'downtime'; does this then offer immortality?
Next, the strange biology. This idea may be unfamiliar, so let me explain. All life on Earth is based on carbon and mostly uses DNA in one form or another to propagate and, in our case, to create our conscious brain. But could other structures or elements be used, silicon perhaps? This suggests other questions:
h) Could rocks be conscious?
Rocks are largely silicon compounds, and silicon has chemical properties similar to carbon's. If we created a silicon version of DNA and the brain (a new form of microchip, perhaps), would it be conscious? Could there be silicon-based life already on Earth? Can there be any logical objection to this?
i) Is evolution necessary for intelligence?
Consciousness itself seems to be an emergent property of life. We presume it grows and develops gradually from embryo to adult. Does becoming conscious require the gradual developments of evolution and growth, or can it come into being in other ways? If consciousness is just a program running on our brain's hardware, could it be installed at, say, birth by a God?
j) Can consciousness be distributed?
We feel our consciousness is located at one point (the brain, perhaps), but do we know that it is really centralised, and does it need such a locality? Could a colony of communicating ants be conscious, or a collection of Internet-connected computers? Is there some critical complexity at which consciousness may arise naturally?
Finally, going beyond our planet and postulating meetings with unknown creatures, the possibilities increase further, giving a few additional questions:
k) Could we tell if a being was intelligent?
Suppose it didn't have the same senses as us; could we recognise it at all? Imagine, as a half-way house, the dolphin. We assume it has intelligence, consciousness and communication, yet we can relate to it only very poorly, although it has most of our senses in common. If we had no communication at all, could we even identify intelligence or consciousness in anything we met?
l) Is non-material intelligence possible?
Pure energy can be ordered (light in, say, laser communications). If intelligence/consciousness is just information processing, as computer theory sometimes suggests, then could an immaterial being in some way exist, with intelligence and consciousness?
m) Could we relate to intelligence on a different timescale?
We react to our environment within a very narrow timescale. We perceive changes that last from perhaps a tenth of a second to an hour or so; faster things we miss, slower ones seem static. If a creature operated much faster or slower than us, would we be aware of it? Imagine a creature of light speed, or one living for millennia. Perhaps we have thus already ignored any passing advanced intelligences, and they us.
Philosophy is to some extent the last bastion of questions like these, but do we have any answers? Well, I cannot claim to have any, and although some of these questions are occasionally debated in philosophical circles, I have seen little progress made. Nevertheless, they do give new viewpoints on some very old questions within our field, and that is always a good spur to fresh discussion.
But surely all of this is just idle speculation, isn't it, with no current bearing on reality? Unfortunately, it has. The field of Artificial Intelligence is busy trying to create a conscious robot, Medicine is starting to connect certain brains to machines, Artificial Life is deriving the basics for the creation of non-carbon lifeforms, and Astronomy, with the SETI project, is trying to locate the extraterrestrials. Soon these questions may require answers...