Thinking Machines

This is the text of a talk which I delivered to the Objectivist Club of Michigan in September of 1995.
Intelligent machines are a basic device of science fiction. We've seen HAL in 2001, the robots in Asimov's stories, Mike in The Moon Is a Harsh Mistress, Anson Guthrie's mind in a neural net in Harvest of Stars, and many others. The idea has been brought up in serious study as well. Alan Turing, Marvin Minsky, Seymour Papert, and others have explored the idea of a machine that could think.

Is such a thing really possible? Can a machine think? To answer that, we have to define our terms.

The term "machine," as I use it here, means something constructed, which operates according to a design purposefully built or programmed into it. Normally when people talk about a machine thinking, they are referring to a computer: a digital device capable of manipulating symbols according to a stored program.

The term "thinking" refers to an activity of consciousness. It means the purposeful use of logic and concepts to achieve awareness of some conclusion. Without consciousness, we can have calculation, but not thinking.

Given those definitions, I can rephrase the question: Can a device be conscious as a result of its symbol-manipulation capability and programming? My answer to this is "no."

To explain my answer to this question, and to relate the issue to Objectivist philosophy, I'd like to start at the root: Rand's three axioms. In John Galt's speech, Rand wrote: "Existence exists--and the act of grasping that statement implies two corollary axioms: that something exists which one perceives and that one exists possessing consciousness, consciousness being the faculty of perceiving that which exists."

Consciousness is an aspect of existence -- there isn't anything else -- but it has a unique relationship to existence. It's not something that we can perceive directly; each of us is directly aware, by introspection, of one consciousness, and we deduce the existence of other minds from the similarity of their actions to our own.

Two views of the mind have been common in philosophy; these may be called the spiritual and materialist views. The spiritual view holds that consciousness is a separate substance which coexists with the body. In most versions of this view, it controls the body's operations and gains sensory information from it, but it isn't really part of it. Consciousness possesses free will and is exempt from the mechanistic causality of matter. The spiritual view is often associated with religion, but can be held independently of arguments for a God.

The materialist, mechanistic, or reductionist view holds that what is called "consciousness" is simply the organizational and informational capacity of a living being, or potentially even of a machine. It isn't a special substance, but simply the result of having complex information-processing mechanisms that allow specialized organs to store information about the world and to act in an effective way. It's completely determined by physical laws, and there's no room for free will.

The spiritual and mechanistic views of the mind are often treated as if they were the only alternatives. For example, Pamela McCorduck's book Machines Who Think states:

Presently no complete, coherent model exists that explains all aspects of mental behavior, but most researchers are agreed: there's no ghost in the machine. Everything from symphonies to simultaneous equations to situation ethics is finally produced by those electrochemical processes. This view can be considered mechanistic.

Effectively, she's saying there are two alternatives: either there's a "ghost in the machine" or our minds operate mechanistically. The idea of consciousness as being neither mechanistic nor mystical is simply not within the range of alternatives she considers.

Understanding the mind requires a method appropriate to the object of study. This method must be distinctive because the object of study is also that which does the studying. Putting it another way, the mind is that which observes rather than that which is observed. A person observes his own mind not through his senses, but through introspection. For some people, this makes the character of thought very troublesome, since it cannot be subjected to scientific tests that are independent of the observer.

The fact of one's own consciousness is impossible to deny without self-contradiction. But what exactly am I recognizing when I say that I think? Am I identifying a special substance that resides in my body, and which constitutes the essence of my awareness? Or am I referring to an information-processing mechanism which generates the actions of my body in response to the data gathered by my senses?

What I am recognizing is a relationship; the entity which is myself exists in a certain relationship to something outside myself, namely the relationship of being aware. The fact of consciousness is the fact of a relationship between an entity and that outside it, or between an entity and an aspect of its own existence. The relationship is not necessarily one of direct perception. It can include recollection, imagination, and introspection.

It would be an error to equate this relationship with a change in only the physical state of an entity. If consciousness is identified as a physical state, or as a change in such a state, then the observation of that state becomes a fact that has to be explained. Explaining it by reference to another physical state leads to circularity or infinite regress. When we move the observer into the realm of the observed, we still are left with something doing the observing. We can't escape the irreducible fact of what Gerald Edelman calls qualia, the experiences, feelings, and sensations of the individual.

When we take these considerations into account, we can see that the two traditional views of consciousness which I mentioned are flawed. The spiritual view introduces a new substance without justification. There is no reason to suppose that the relationship between the observer and the observed entails a separate entity, substance, or soul that does the observing. Occam's razor dictates that entities should not be multiplied beyond necessity; on this basis, all that may properly be assumed is that the body is built such that it (or a part of it) can be aware of the world.

The materialist view commits the error of reducing observation to an observable phenomenon. This approach treats consciousness as a stolen concept, since it relies on the fact that there is an observer distinct from observed phenomena, yet denies this distinction. Without the existence of an observer, there cannot be anything observable at all.

These two views can be characterized in another way: as intrinsic and subjective views of consciousness. The spiritual view regards consciousness as self-contained. This implies that not everything in consciousness is acquired from external reality, and provides a groundwork for the concept of innate ideas. The materialist view regards consciousness as a concept beyond the reach of meaningful discussion at best, and arbitrary and mystical at worst. The alternative to these, the objective view, presents consciousness as something which is grounded in reality because reality is what it perceives. The idea that there is anything mystical about consciousness, whether presented with hope by spiritualists or with horror by materialists, is unfounded.

One form of the materialist view is the idea that thinking or reasoning consists of the manipulation of data. According to this view, thought is the processing of data, and knowledge is stored information. This view commits the stolen-concept fallacy: it tacitly assumes that there is an observer, not just an information processor, in order to make its terms comprehensible. Specifically, it assumes that data and information possess some meaning, and that processing them results in the creation of other meaningful information. When you read words or view graphics on a computer screen or a printout, they have some meaning to you. Can they have that meaning by virtue of the fact that your brain processes them and transforms them into something else? If so, the question is only deferred; we have to ask how the output of that process can have meaning.

A monkey playing with a typewriter might type "F = ma"; but this would just be a chance arrangement of marks on paper, not an expression of the relationship among force, mass, and acceleration. It might equally well have typed "F = m/a". We could say that the monkey's brain doesn't contain the information which is contained in Newton's equation, so it can't transfer any information about it to the typewriter. The brain of a human who's studied basic physics does have this information.

But what makes it information? How do we distinguish "F = ma" as an informative equation from "F = ma" as a chance string of characters? We can do so only by making reference to the identification of facts. "F = ma" is information only when it ultimately originates from knowledge of reality. The source doesn't have to be direct. To give a simple example, an algebraic manipulation program could take the expression "F = ma" and derive other valid equations from it, such as "a = F/m", which tells us that the acceleration of an object is equal to the magnitude of the force acting on it divided by the object's mass. In this hypothetical example, no one had to think of this fact for the computer to generate the equation that expresses it; but before it could be produced as information, someone had to know that the original equation expressed a truth and that certain algebraic operations on such an equation are valid.
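
As a concrete illustration of what such an algebraic manipulation program does, here is a minimal sketch in Python, using the sympy library purely as an example of a symbol-manipulation tool (my choice for illustration, not anything from the talk). The program rearranges the equation by valid operations, but the symbols F, m, and a mean something only to the human who reads the result:

from sympy import Eq, solve, symbols

# A human supplies an equation already known to express a truth about reality.
F, m, a = symbols('F m a')
newtons_second_law = Eq(F, m * a)          # F = m*a

# The program applies valid algebraic operations with no grasp of their meaning.
solutions = solve(newtons_second_law, a)   # yields [F/m]
print(Eq(a, solutions[0]))                 # prints Eq(a, F/m), i.e. a = F/m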

So the view which equates knowledge with information, and thinking with information processing, is invalid. Information processing cannot by itself constitute thought. Information is meaningful only with respect to a being that observes and understands. Having covered this ground, I can now come back to the question which I asked earlier, and expand on my answer: "Can a device be conscious as a result of its symbol-manipulation capability and programming?"

Symbol manipulation does not equal thought, which is an activity of consciousness, and there is no other aspect of consciousness which symbol manipulation or programming can reasonably be considered equivalent to. Any symbols which a computer produces or manipulates are meaningful only with respect to a mind that is aware of them.

This point is very effectively illustrated by a thought experiment described by John Searle. He gives the example of a hypothetical computer program that provides replies to questions about a story which it has been given. The story, the questions, and the answers are all in Chinese. Now any operation which can be performed by a computer can be performed by hand, although it may take much longer. So let's suppose that a human being who knows no Chinese is put in a room, and that he carries out the same operations which the computer would perform, pushing counters around by hand. Doing this would, I should point out, be intolerably slow; but the point is that this is doable in principle.

If we grant that the computer "understands" Chinese, then we would have to say that the human doing these manipulations also understands the language. But in fact he doesn't. Understanding isn't simply the carrying out of a procedure.

In citing Searle's example, I'm getting slightly ahead of myself. Searle is relying on the reader's knowledge of an earlier article about having a computer imitate a human, and cleverly turning the idea around. The article which he's implicitly referring to is one by Alan Turing, published in 1950, titled "Computing Machinery and Intelligence." No discussion of the idea of thinking computers can be complete without talking about this article, which presents the famous "Turing test."

Turing starts off well:

I propose to consider the question "Can machines think?" This should begin with definition of the terms "machine" and "think."

In defining "machine" for purposes of this discussion, Turing has done a superlative job; his descriptions of the abstract nature of a computing machine are still definitive. But he fails to provide a definition of thought. Indeed, he denies the possibility of providing a satisfactory definition. He recognizes that if common usage is the sole guide to defining the word, then deciding whether machines can think is reduced to "a statistical survey such as a Gallup poll."

This type of problem is common in scientific discussions. Terms in normal usage are often fuzzy, and it's necessary to produce definitions which are clear and unambiguous in order to make precise statements. Producing good definitions is part of the business of science. Turing, however, makes no attempt to define thought. Instead, he proposes to "replace the question by another, which is closely related and is expressed in relatively unambiguous words." His new question is based on what he calls "the imitation game." The object of this game is to determine whether a computer can imitate a human being in a dialogue.

In modern terms, this test might be set up by providing an interrogator with two terminals. One is connected to a computer; the other is connected to another terminal operated by a human being. The interrogator doesn't know which terminal is connected to the computer; his task is to figure that out by conducting a dialogue with each of the "contestants." He can ask any questions he wants, on any subject. The new question, which replaces "Can machines think?", is whether the computer can get the interrogator to decide wrongly part of the time.
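
To make the structure of the test concrete, here is a minimal illustrative sketch in Python (the responder functions are hypothetical placeholders of my own, not anything Turing specified). Note that everything the interrogator ever sees is input-output behavior over the two terminals:

import random

def human_respond(question):
    return "Let me think about that for a moment."    # placeholder reply

def machine_respond(question):
    return "Let me think about that for a moment."    # placeholder reply

def imitation_game(questions, interrogator_guess):
    # Randomly assign the hidden contestants to terminals A and B.
    responders = [human_respond, machine_respond]
    random.shuffle(responders)
    terminals = {"A": responders[0], "B": responders[1]}
    # The interrogator sees only labeled questions and replies.
    transcript = [(label, q, terminals[label](q))
                  for q in questions for label in ("A", "B")]
    guess = interrogator_guess(transcript)             # "A" or "B"
    return terminals[guess] is machine_respond         # True if the computer was identified

# An interrogator who guesses at random identifies the computer only about
# half the time -- which already raises the question of how often the
# computer must "win" for the test to count as passed.
wins = sum(imitation_game(["What is 2 + 2?"], lambda t: random.choice("AB"))
           for _ in range(1000))
print(wins / 1000)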

Even at this point, Turing's criterion isn't precise. He first phrases the test in terms of a man and a woman rather than a man and a computer, then asks, "Will the interrogator decide wrongly as often when the game is played like this [meaning between a human and a computer] as he does when the game is played between a man and a woman?" How often does the computer have to succeed in order to meet this criterion? The success of the computer will depend greatly on the qualifications of the interrogator. A naïve interrogator might be fooled very easily by a mediocre computer program; a really good one might be able to respond to subtle differences that no one else would notice.

Not only is success in the test undefined, it is not presented as a test for anything. It is not a test for whether computers can think; Turing has not defined the terms for that question, but has only "substituted" the imitation game for it. Indeed, he says, "The original question, 'Can machines think?' I believe too meaningless to deserve discussion." Then in what sense is the imitation game a substitute for it? His answer is that words will, at some future time, be used in such a way that the two questions will be equivalent:

Nonetheless, I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.

Well, it's the fin de siècle now, and I haven't heard anyone talking about their PC's thinking except in an obviously metaphorical sense. In any case, deciding a question by linguistic usage is hardly scientific. It really amounts to the "Gallup poll" which Turing started out wanting to avoid.

Turing is engaging in an error which is common among people who want to be "scientific": his approach suggests the belief that concepts of consciousness cannot be dealt with in a meaningful way, and can be scientifically considered only by treating them as equivalent to a mechanistic model of their behavior. In order to avoid the problem of dealing with consciousness, he wants to treat people and computers as black boxes whose differences can be enumerated by comparing their input-output behavior.

But what happens inside the box is vitally important, especially when we live inside one of the boxes. Our concern with our own lives results from the fact of our self-awareness. Our concern with the lives of those we deal with and care about is heavily predicated on the fact that they don't just act, but experience the world just as we do. And experience is not the same as input-output behavior.

Another fallacy which is implicit in Turing's test is a form of rationalism. Rationalism, as Leonard Peikoff has described it, is the notion that knowledge can be achieved simply by deducing everything from some set of premises. Rationalism divorces reason from experience. It treats thinking as calculation, equates concepts with definitions, and tries to deduce the world from first premises. Turing's test is based on a kind of rationalism in that it assumes that thinking, or its effective equivalent, can be produced by taking a set of data and doing operations on it. This suggests an image of Man the Calculator, of the brain as a computer that operates on a set of stored data. The idea that the brain is simply a kind of computer has been largely taken for granted by the artificial intelligence community.

Turing wrote of his test that it "has the advantage of drawing a fairly sharp line between the physical and the intellectual capacities of a man." But this separation is impossible. Our consciousness grows out of our physical existence. Try to imagine yourself in a state of permanent sensory deprivation, cut off from everything in the world except a channel by which you can receive words and send other words out in response, unable to feel or move. You wouldn't be able to keep your sanity. Our physical being -- our senses, our perception of ourselves as being something and somewhere, our ability to move and act -- is intimately tied to our consciousness.

The rationalist view long predates Turing, of course. It had its strongest expression in Leibniz, who might be considered the philosophical father of artificial intelligence. Leibniz wrote:

It is manifest that if we could find characters or signs appropriate to the expression of all our thoughts as definitely and as exactly as numbers are expressed by arithmetic or lines by geometrical analysis, we could in all subjects, in so far as they are amenable to reasoning, accomplish what is done in arithmetic and geometry.
All inquiries which depend on reasoning would be performed by the transposition of characters and by a kind of calculus which would directly assist the discovery of elegant results. We should not have to puzzle our heads as much as we have today, and yet we should be sure of accomplishing everything the given facts allowed.

If Leibniz ever imagined a device like today's computers, no doubt he would have considered it the perfect tool for carrying out this plan. But Leibniz, like Turing, failed to realize that understanding isn't reducible to a calculus.

The idea of thought as information processing or as an input-output process suggests the idea that thought is some kind of abstract process which exists apart from any particular physical thing. In fact, some science-fiction writers have presented the idea of a human mind being transferred into a machine -- for instance, Poul Anderson's novel Harvest of Stars. One response which has been given to Searle's Chinese Room scenario is that the understanding of Chinese lies in the system as a whole -- the person and the calculation rules which he is using.

But if we accept this idea, then we're accepting the idea that awareness can exist apart from physical entities, that it's something which belongs to a system of rules. This is truly the idea of "the ghost in the machine": mind independent of physical embodiment, existing wherever there is sufficient computational power to carry out its rules of operation, transferable from one such computing system to another.

So we end, in perfect dialectic fashion, with the spiritual and the materialist views of consciousness meeting and becoming one. The information-processing version of materialism ends up agreeing with the spiritual view of consciousness: minds can exist in and of themselves, apart from physical bodies.

The alternative to both of these is the recognition of the unity -- though not identity -- of mind and body. Consciousness grows out of physical existence in a living being, and our minds have evolved along with our bodies. They exist only as that distinctive relationship between our bodies and the world which we call awareness.

Philosophical thought is gradually moving in the direction of recognizing this fact. When I was a college student, around 1970, only a few renegade thinkers such as Hubert Dreyfus challenged the idea that the brain is simply a powerful computer. Today writers such as Roger Penrose and Gerald Edelman are contributing to the challenge. Artificial intelligence work has been moving away from attempts to pass the Turing test, and becoming more a matter of devising computer heuristics to solve interesting problems.

This shift is encouraging because it allows us to understand human thought for what it is and computation for what it is, and to recognize that the two are not the same, even potentially. The knowledge that the human brain is not a computer -- and that we can recognize this without adopting any form of irrationalism -- is vital to understanding ourselves. The wrong turn of AI may prove instructive in the long run, since in discovering its errors, we can recognize more fully that our reasoning capability is a part of our overall identity as living beings, and not a separable piece of software.


Copyright 1995, 1998 by Gary McGath
Last revised January 2, 2003 (some typos fixed)

Return to "The Book of M'Gath"
Return to Gary McGath's home page