CMC Magazine December 1, 1995 / Page 13


BOOK REVIEW

The Apparent Convergence of Humans and Computers

by James Dalziel (jamesd@psych.su.oz.au)

Another View:
The Future Does Not Compute: Transcending the Machines in Our Midst
By Stephen L. Talbott
O'Reilly & Associates Inc., 1995
ISBN: 1-56592-085-6
$22.95 (USD)
502 pages

When I first read Stephen Talbott's book The Future Does Not Compute, I found myself thinking more deeply than ever before about the role of computers in modern life. Here was a book that said, "Hang on a second--we really need to think about this stuff!" What was all the more interesting was realizing how little I had considered the broader issues of the role of computers in my life. As Talbott argues, it is as if we are only partially awake when we use our computers--ignorant of the negative aspects of computer use not because we are not clever enough to recognize them, but simply because we have never really stopped to consider them. Talbott's is one of the first "voices crying in the wilderness" I have heard calling for deeper reflection.

While there are many ideas contained in The Future Does Not Compute, the central thesis of the book is about how we understand the relationship between humans and computers. The dilemma under investigation has two related parts, like the two sides of a coin. On one side, computers are becoming more humanlike as we impart our intelligence to them; and on the other, we are becoming less than fully human as we consider ourselves more and more as just biological computers. For example,

"Certainly if our computers are becoming ever more humanlike, then it goes without saying that we are becoming ever more computerlike. Who, we are well advised to ask, is doing the most changing?" (p. 339)
It is the above theme which runs throughout the book: in the analysis of on-line communication and communities, in the discussion of the rush to incorporate computers into education, in the exploration of how the electronic word differs from the written word, and in the way we use language to describe both computers and ourselves--and its reflection of our deepest, often subconscious, thoughts. Talbott implores us to recognize this process of "dehumanizing" ourselves and "humanizing" the computer, and to attempt to transcend it.

Kevin Hunt's review in the October issue of CMC Magazine presented many of the book's ideas for consideration, and it serves as a useful overview for those who have not read the book. In this review, however, I want to focus on the book's unifying theme (introduced above), because I believe it is a very important idea for us all to consider, but one which I want to restate in slightly different terms. There are essentially two issues that require further investigation: first, the concept of computers actually having intelligence, and second, the concept of human thought and consciousness as just the computations of some sort of "wet-wired" neural computer. Those of you who have read my letter to the editor in last month's issue will know that I think a deeper examination of the role of metaphor is essential to this investigation.

Do Computers Really Have Intelligence?

I would argue that computers and humans are fundamentally two radically different kinds of "things". Despite the early hopes of artificial intelligence (AI) theorists, to date no computer has demonstrated the sort of consciousness and understanding that is characteristic of people. The difficulties presented by Searle's "Chinese Room" argument have so far proved insurmountable, despite the optimism of some. Even the Turing Test, proposed by Alan Turing in 1950, has yet to be passed. Indeed, many researchers are genuinely skeptical about the possibility of artificial intelligence ever existing (a skepticism arising mainly from the problem of meaning).

When I previously mentioned that I was surprised by Talbott's unusually broad concept of intelligence, I was referring to this stricter definition of intelligence, because it is the potential "humanness" of computers proposed by "strong" AI that is precisely what is at stake. We can, and frequently do, use terms like "intelligence" when referring to computers, and even other machines. But this kind of language usage is implicitly metaphorical--the car engine that won't start isn't really giving us a hard time in some sort of conscious, intentional sense--as comforting as this thought can sometimes be! If we mean something more than just a metaphorical parallel, then we must consider carefully what exactly we mean by "intelligent machines", and be prepared to answer the problems of artificial intelligence.

These are questions which Talbott addresses to some extent, especially in chapters 18 and 23. However, half of the central thesis appears to run contrary to the above ideas, in that Talbott sees computers as genuinely becoming more like humans. It is because we impart a "shadow of our intelligence" to computers that they constitute a genuine threat. And it is here that the issue of metaphor versus actual states of affairs is crucial, because while it may appear that computers are becoming more humanlike, in the final analysis they are not--at least not until the problems of artificial intelligence are solved. However, computers are more and more able to appear humanlike: through more sophisticated imitation, through the greater complexity of functions they can perform, even, partly, through their greater prevalence as cultural icons in society. But the essential distinction is between the metaphor of computer intelligence and its actuality, and the book treads an uneasy line along the boundary between metaphor and reality. Consider, for a moment, the following quotes:

"Unless we can recollect ourselves in the presence of our intelligent artifacts, we have no future." (p. vii)

"Scholars and engineers hover like winged angels over a high-tech cradle, singing the algorithms and structures of their minds into silicon receptacles, and eagerly nurturing the first glimmers of intelligence in the machine-child." (p. xi)

and even,
"The technological Djinn, now loosed from all restraints, tempt us with visions of a surreal future. It is a future with robots who surpass their masters in dexterity and wit; intelligent agents who roam the Net on our behalf, seeking the informational elixir that will make us whole. . . . Not all of this is idle or fantastic speculation, even if it is the rather standard gush about our computerized future. Few observers can see any clear limits to what the networked computer might eventually accomplish. It is this stunning, wide-open potential that leads one to wonder what the Djinn will ask us in return for the gift." (Back cover)
These quotes strongly imply that computers and humans share (or will share) an equivalent sort of intelligence. As you may notice, though, these three examples come from the contents section and back cover of the book. While they are based on parts of the book itself, it appears that they are designed more to catch the reader's attention than to explore the complexity of Talbott's ideas. However, there are other examples from within more crucial sections of Talbott's argument that do not exhibit the same hyperbole, but nonetheless allow for a computer intelligence that is like human intelligence. For example:
"Yes, our artifacts gain a life of their own, but it is, in a very real sense, our life." (p. 60)

" . . . the computer runs by itself with an attitude." (p. 96)

"The more intelligence, the more independent life, the machine possesses, the more urgently I must strive with it in order to bend it to my own purposes." (p. 131)

"What we meet in the computer is . . . a consciousness that has contracted to a nullity . . . " (p. 243)

" . . . the computer's evolution towards unbounded intelligence can proceed on the strength of the programmer's continual effort to analyse meanings into rational end products." (p. 315)

To be fair, each of these quotes needs to be read in context to grasp the fullness of the ideas presented. But Talbott's general point here is that we do impart something of our intelligence to the computer, and that this is increasingly the case in our modern world. This is one side of the convergence of computers and humans that Talbott calls us to consider carefully.

However, I do not share quite the same view. I do not think that computers are evolving into something genuinely different from previous human creations, something actually comparable to us (regardless of the sheer number of networked computers on the Internet, or the sheer processing power of the latest supercomputer). I cannot see any fundamental shift in computers' ability to process information since the very earliest machines of the 1950s that would allow them to transcend "computation" in such a way as to become like us (neural networks, while perhaps promising, have so far failed to deliver, just as the early programs did). I think the problem of regarding computers as intelligent machines arises from within ourselves, rather than from some ability that computers have recently demonstrated which has forced us to reconsider our ideas about them. That is, we have been anthropomorphizing computers--metaphorically considering them to be human. A version of this idea appears within Talbott's book:

"On the other hand, we see an apparent compulsion to treat our machines as subjective crystal balls in which we can discern the human future. This is part of a broad willingness to anthropomorphize the machine -- to transfer everything human, including responsibility for the future, to our tools. It is easy to forget that such anthropomorphism is a two way street. If we experience our machines as increasingly humanlike, then we are experiencing ourselves as increasingly machinelike. The latter fact is much more likely to be decisive for our future than the former." (p. 128)
I believe it is a mistake to think that computers have actually become more humanlike. But it is extremely plausible, indeed almost certain, that many people are experiencing computers as increasingly humanlike. And it is this that Talbott warns us against, because viewing computer intelligence in this way involves both an elevation of computers and a lowering of ourselves. While this "lowering of ourselves" is the second part (or flip side of the coin) of the book's central ideas, I think that in the end it contains the entire problem.

Are We Really Just Like Computers?

The second part of Talbott's warning could be stated as follows: we have allowed our understanding of what it is to be human to be consumed by the computer metaphor. We, as human beings, are much more than merely biological computers, but this "computational self-image" has become so prevalent that, Talbott argues, it actually limits our understanding of the world, each other, and, most destructively, our own selves. In focusing only on those aspects of the mind most like computer processes, such as logical thinking and decision making, descriptions of what it is to be human are reduced to descriptions that mirror those of computers. Once we define humans in computer-like terms, it comes as no surprise that we appear to have converged with computers as much as they may seem to have converged with us.

However, despite the prevalence of "processing" in modern conceptions, it is clearly lacking as a full description of the human condition. Issues such as emotion and drives, awareness and self-reflection, the quest for meaning, even social life and perhaps spirituality all illustrate the shortcomings of a narrow cognitive approach. While some people would argue that these can be either consumed or subordinated by the computational paradigm, it is precisely this attitude of privileging computational metaphors over other modes of explanation that Talbott says we need to reconsider. The ideas he presents from Owen Barfield on the use of language and the importance of a careful consideration of the role of metaphor in our own thinking are a valuable and new contribution to this problem.

As Barfield would have pointed out, had he been around to see it, it is hard to miss just how extraordinarily prevalent computer metaphors have become in everyday language. It also would not have been surprising to Barfield that "Cognitive Psychology" is arguably the dominant paradigm of modern psychology, and that the concepts of information processing, logic-based analysis of problems, and so on have become exceedingly popular. It is here that Talbott argues that not only do we need to reconsider computer metaphors as an explanatory system of human consciousness, but indeed that we need to awaken from this kind of limited view of ourselves. To describe certain aspects of thinking as being computerlike is one thing, to reconceive the self in purely computational terms is another.

In all of this, I believe that Talbott makes a very important contribution, not just to the ideas of the human/computer debate, but to the very way we think about the issues. It is to these points that I wish to connect my earlier thoughts about what computers really are, compared to how we regard them. While I believe that computers have not shown evidence of any genuine humanlike intelligence (and certainly not those we are most familiar with--desktop PCs), that does not mean that we cannot slip into mistakenly thinking that they do. Indeed, I think it is this kind of projection from within ourselves that is at the very heart of the problem. And it is not just our "thinking" that we project onto computers, but desires and emotions as well. Thus we project the supporting or menacing appearance of computers from within ourselves. It is not that computers are "anti-human" or "pro-human", but rather that they are utterly indifferent. To say they don't care is almost misleading, because they can't care in the first place. But we project from deep within ourselves onto the computer our hopes, dreams, fears and loathings. No wonder the Internet is prone to both rampant optimism and pessimism--it has really become a focus for our own subconscious.

So the problem we began with--the "humanizing" of computers and the "dehumanizing" of ourselves--can be seen not as two independent processes in modern life which together impinge on our existence, but really as two forces both arising from within ourselves. I think that Talbott at times misses part of this problem by focusing on the external object--the computer--and making it responsible for many of our ills, rather than looking within to our own psyche and its different ways of dealing with the external object. This may be part of the reason Talbott feels at a loss to prescribe a remedy for our dilemma. But if we focus entirely on ourselves, and locate the work of change and reflection as internal, I think we locate the problem at its source, and it is a source we can do something about. This change involves two dimensions: seeing our projections for what they are--internal, not external--and also seeing ourselves not merely as computational, but as this and much, much more.

Conclusion

I believe Stephen Talbott's book The Future Does Not Compute is an important book for our times. It contains many other perspicacious insights into the role of computers in our thinking about community, education and language which I have not had space here to discuss. I also believe that the central ideas demand deep personal reflection by all those who regularly use computers. I hope that my comments may aid in this reflection by making clear where we should look if we are to address both aspects of the problem--within ourselves.

James Dalziel is an Associate Lecturer in the Department of Psychology at the University of Sydney, Australia. His interests include the psychological aspects of CMC that require more than just "cognitive" or "information processing" accounts to explain them--topics such as addiction, relationships, and projection.

Copyright © 1995 by James Dalziel. All Rights Reserved.

