Introduction to Artificial Intelligence
Finally, we should like to return to our original question. We have discussed both the successes and the deficiencies of these network models, and in the light of what we have discovered we must now try to answer it. We first consider how we might recognize intelligence in a machine, and to what extent neural networks and other approaches to AI can be said to have met this criterion. We conclude by describing current research directions.
Consciousness, Nets and the Future

What do we mean when we say someone is intelligent? Is it that they are, for example, very good at mathematics or at translating foreign languages? Such people are certainly good at understanding and manipulating abstract concepts. But what about poets, novelists and musicians? They are clearly intelligent because they are creative. Indeed, intelligence is visible in almost every form of human activity - the ability to adapt, to learn new skills, and to form complex relationships and societies. Much of this appears to be unique to humans (at least on Earth) and differentiates us from all other species. We might say that all of these aspects of our lives and behavior can be attributed to the fact that we are conscious.
Unfortunately, there is no precise, widely agreed upon definition of the word consciousness. However, most of us have an intuitive sense of what is meant by the term. Consciousness, or cognition, is a sort of awareness - of self, of our interaction with the world, of the thought processes taking place within us, and of our ability at least partially to control these processes. We also associate consciousness with an inner voice that expresses our high-level, deliberate thoughts, as well as with intentionality and emotion. It seems doubtful whether true intelligence can ever arise in the absence of consciousness. One might take the view that intelligent behavior is the outward sign of a conscious being; if so, any machine that displayed human-like intelligence could be said to be conscious.
This point of view was taken by Alan Turing, who in 1950 proposed a test whose result could be used to decide whether, in any practical sense, a machine could be said to be conscious. The test is quite simple. You enter a room containing two terminals: one is connected to a computer, the other to a person who types the responses. Your goal is to determine which terminal is connected to the computer. You may ask questions, make assertions, and probe feelings and motivations for as long as you wish. If you cannot determine which terminal is communicating with the computer, or if you mistake the computer for the human, the computer has passed the test and can be said to be 'conscious'.
Turing invented his test at a time when it was thought that mind-like computers might be only fifty years away. A whole new science was born with the aim of producing such intelligent machines - the subject of artificial intelligence or AI.
In fact, this has not happened - initial efforts to create computers with mind-like reasoning failed miserably. Many researchers now believe that part of the reason for this failure was that traditional computers function in a way very different from the brain, and that the key to truly intelligent machines lies in understanding in detail the functioning of the brain and emulating it with artificial neural networks.
Needless to say, this view is not held by all - some philosophers maintain that the phenomenon of consciousness cannot be ascribed to purely physical processes (the cooperative firing of networks of neurons) and is in principle inaccessible to scientific investigation, however advanced. This is the traditional mind/matter split advocated by the seventeenth-century philosophers. There is a famous argument, due to John Searle in 1980, which attempts to rebut the Turing test as a way of assessing consciousness.
In his argument one imagines a person who speaks no Chinese sitting in a room with a long list of rules for transforming strings of Chinese characters into new strings of Chinese characters. When a string of characters is slipped under the door, the person consults the rules and slips back an appropriate response. If the incoming strings actually represented questions (as in a Turing test), then a sufficiently clever and exhaustive set of rules could conceivably allow the person in the room to produce outgoing strings that answered those questions.
From the point of view of a person outside, the room would seem to contain an intelligent person responding to the questions. And yet the person in the room has no understanding of the content of these questions - he or she is merely acting out a set of rules, transforming one set of meaningless symbols into another. In other words, while we could perhaps program a machine to mimic the effects of intelligence, it would never be truly conscious. While this criticism may apply to the old style of AI (rule-based systems rather similar to Searle's Chinese room exist and have met with some success - they are called expert systems), it is not clear that it truly applies to neural network based AI, since there is no real concept of a set of rules determining a response. Consciousness is not envisaged as arising out of a machine obeying a set of rules but as some as yet ill-defined property of the natural functioning of billions of neurons.
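The rule-following at the heart of both Searle's room and expert systems can be made concrete with a small sketch. Below is a minimal forward-chaining rule engine of the kind such systems are built around; the facts and rules shown are invented placeholders for illustration, not from any real expert system:

```python
# Minimal forward-chaining sketch of a rule-based system, in the spirit of
# the expert systems mentioned above. All facts and rules here are
# hypothetical examples.

rules = [
    ({"has_fever", "has_rash"}, "suspect_measles"),   # IF fever AND rash THEN ...
    ({"suspect_measles"}, "recommend_doctor_visit"),  # a chained conclusion
]

def forward_chain(facts, rules):
    """Repeatedly fire any rule whose conditions are all satisfied,
    adding its conclusion, until nothing new can be derived.
    As in Searle's room, this is pure symbol manipulation: the program
    attaches no meaning to the strings it shuffles."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain({"has_fever", "has_rash"}, rules))
```

The point of the sketch is Searle's: the machine produces sensible-looking conclusions, yet nowhere in the loop is there anything one could call understanding.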