During a recent discussion with Tristan, the subject of the Turing Test arose. For those who are unfamiliar, the test is intended as a way to determine whether a machine has intelligence. You set things up so that the machine can converse with a human being – for instance, through a text-based, instant-message-style conversation – and if the person thinks they are talking with another human, that can be taken as evidence that the machine is intelligent.
Setting aside the question of how good an intelligence test this really is (a computer could pretty easily trawl a database of human conversations to produce convincing replies), it seems like there is another sort of test that would be demonstrative of a different kind of intelligence. Namely, it would be when a machine or a computer program first becomes aware of itself as being a machine or computer program.
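To make the "trawl a database" objection concrete, here is a minimal sketch of that kind of canned-reply machine. The corpus and the similarity measure are my own illustrative choices, not anything from a real chatbot: it just returns the reply paired with whichever stored prompt most resembles the incoming message.

```python
import difflib

# Toy corpus of (prompt, reply) pairs, standing in for a large
# database of logged human conversations.
CORPUS = [
    ("how are you", "I'm fine, thanks. How about you?"),
    ("what is your name", "People call me Sam."),
    ("do you like music", "I do -- mostly jazz, lately."),
]

def canned_reply(message: str) -> str:
    """Return the reply paired with the stored prompt most similar
    to the incoming message (simple string-similarity retrieval)."""
    best = max(
        CORPUS,
        key=lambda pair: difflib.SequenceMatcher(
            None, message.lower(), pair[0]
        ).ratio(),
    )
    return best[1]

print(canned_reply("How are you?"))
```

Nothing in this program understands anything, of course – which is exactly the worry about conversation as an intelligence test.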
It is possible that no machine made by humans will ever develop that level of self-awareness. Perhaps it is impossible to replicate whatever trick our brains use to turn flesh into consciousness. If it did happen, however, it seems like it could help to illuminate what self-understanding means, and what sort of mechanisms it requires.
How would you know that the machine is aware that it is a machine?
And if the machine thought it was a human being, couldn’t it still meet every definition of intelligence?
It is hard to know, because a machine could easily parrot the right sort of statements without understanding them. I suppose one demonstration would be the generation of novel ideas about the significance of being a machine or a program.
A machine or program that thought it was human would have to be both intelligent and insane, I suppose, like a human being who thinks they are a robot. Quite possibly, any entity that is capable of real intelligence is also capable of suffering from delusions.
So to determine that the machine knows it is a machine, it seems like you’re back to something like the Turing test.
In a technical sense, it is impossible to determine for sure whether any entity – biological or artificial – is actually self-aware, or just capable of putting on a convincing show of it.
Still, I think it may be more interesting to have a machine that is aware of its nature as a machine and able to reflect on the implications of that condition than it would be to have a machine that can successfully imitate a human being. It’s a bit of an introvert/extrovert distinction, perhaps.
Lie Like A Lady: The Profoundly Weird, Gender-Specific Roots Of The Turing Test