Life and How to Make it
THE TURING TEST
Turing devised an experiment to judge whether a hypothetical computing device could fairly claim to be intelligent. In the test, an operator conducts two conversations by typing questions into a terminal. One conversation is with a human being and one is with the machine, but the operator is not told which is which. If the operator cannot tell which of the two is the human, the computer is deemed to have passed the test. Such a test has actually been carried out a number of times now and, perhaps surprisingly, some AI programs have been sufficiently convincing that the operator has been fooled, at least for a while.
Simple stored sentences, regurgitated automatically in response to certain key words in the question, can quite easily fool people for a short time. But this is like assuming that a book of multiplication tables can actually multiply. Ask the tables a question beyond their limits, or conduct a conversation with a computer program for long enough, and you can see that regurgitating stored knowledge on cue is not the same thing as intelligence.
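The keyword trick described above can be sketched in a few lines. This is a hypothetical toy of my own, not any specific historical program: canned sentences are regurgitated whenever a trigger word appears, and a stock evasion covers everything else.

```python
# A minimal sketch (hypothetical) of keyword-triggered canned responses:
# stored sentences regurgitated on cue, with no understanding behind them.
CANNED = {
    "mother": "Tell me more about your family.",
    "computer": "Do machines worry you?",
    "chess": "I enjoy a good game. Do you play often?",
}

def reply(question: str) -> str:
    words = question.lower().split()
    for keyword, response in CANNED.items():
        if keyword in words:
            return response          # first matching keyword wins
    # No keyword matched: fall back on a stock evasion.
    return "That is interesting. Please go on."

print(reply("My mother taught me chess."))
print(reply("What is the capital of Peru?"))
```

Ask it anything outside its table of triggers and the evasions soon give the game away, which is exactly why such programs fool an operator only "for a while".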
Here is another little test of intelligence that I find more appealing. Turing and the early AI pioneers frequently cited the ability to play chess as a test case for intelligence. So, imagine a high‑powered AI chess‑playing computer, like IBM's much famed Deep Blue. Also imagine a rabbit. Now try to visualize what happens if the rabbit is asked to play chess against the computer. It turns out that rabbits are really not very good at this ‑ the Queen's Gambit gets them every time, for example. On this reckoning, Deep Blue is very much smarter than a rabbit. But now imagine dropping them both into a pond. In my view, the one that is really the most intelligent will be the first to figure out how to avoid drowning!
Intelligence involves a great deal more than the ability to follow rules (which is what a chess‑playing program does). It is also the ability to make up the rules for oneself, when they are needed, or to learn new rules through trial and error.
It is true that chess computers are handicapped by their lack of any means of propulsion, so that in the above scenario drowning is, for them, the only option. Nevertheless, even if Deep Blue had been given flippers it could not save itself unless its designers had explicitly programmed it to swim and told it when to do so. The intelligence would thus belong in the minds of the programmers, and only the end result of that intelligence, encoded as a set of explicit rules, would reside within the computer. Rabbits, on the other hand, will recognize the warning signs of imminent doom, try an assortment of movements, and quickly learn to repeat and perfect any actions that seem to help. Life finds a way to survive. Computers simply drown, and they neither know nor care that they are doing it.
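The rabbit's strategy, trying an assortment of movements and repeating whatever seems to help, can be caricatured as a trial-and-error loop. This sketch is my own invention for illustration (the action names and the `helps` test are made up): the agent explores at random, keeps score of which actions pay off, and comes to favour the one that works.

```python
import random

# A toy trial-and-error learner (illustrative only): try actions at random,
# count how often each one helps, and increasingly repeat the best so far.
def swim_or_drown(actions, helps, trials=50, seed=0):
    rng = random.Random(seed)            # fixed seed for a repeatable run
    scores = {a: 0 for a in actions}     # how often each action has helped
    for _ in range(trials):
        if rng.random() < 0.3 or max(scores.values()) == 0:
            action = rng.choice(actions)           # explore
        else:
            action = max(scores, key=scores.get)   # exploit what has worked
        if helps(action):
            scores[action] += 1
    return max(scores, key=scores.get)

# Only kicking keeps this hypothetical animal afloat.
best = swim_or_drown(["kick", "freeze", "sink"], helps=lambda a: a == "kick")
print(best)
```

No rule for swimming was ever written down; the useful behaviour emerges from feedback, which is the point of contrast with Deep Blue's explicitly programmed rules.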
So modern computers, even when programmed by AI experts, are really not very bright. 'Smart' might be a better word, but I think to call them intelligent is just debasing the term. People don't yet routinely talk of computers thinking, except in a metaphorical sense. In a way, fifty years of AI research finally failed its own Turing test last night, on 31 December 1999, when Alan Turing's prediction ran out of time. Why was this? Well, it was certainly not Turing's own fault. He was a brilliant man, with thoughts far ahead of his time. Even many computer scientists don't know that he also experimented with other forms of computation that might, had he lived, have had a far greater influence than the digital computer. But Turing and his wonderful machine started people travelling down a path that led the wrong way.
Paradoxically, part of the reason that AI has failed so far is its very success. The problem with all fields of research is that people are impatient. If a particular line of enquiry seems to be making progress, we continue down that line, but if it seems to be getting nowhere, we abandon it. This is a problem, because the route to the future is often tortuous. Things seem to be moving towards the goal but then unexpectedly snake off in the wrong direction. Initially unproductive approaches can often turn out to be the only ones that lead to the desired destination. Turing, for example, had three brilliant ideas about computation. These three might be characterized as 'organized machines', 'unorganized machines' and 'self‑organizing machines'. My feeling is that the last two hold a great deal more promise than the first, but that first idea was so stupendously successful that it eclipsed the others more or less completely for nearly half a century. Turing's unorganized machines are what we nowadays call neural networks, while his self‑organizing machines explored one of the processes that may help to explain how a simple, undifferentiated egg cell grows into a complex adult organism. The organized machine, which set the tone for the study of lifelike and mind‑like processes for years to come, was the digital computer.

I really have nothing against computers. There are half a dozen personal computers in my house and countless microprocessors working behind the scenes and I adore every one of them. I've spent twenty‑five years programming them and can remember the microprocessor when it was but a mere lad in short (4‑bit) trousers. The very idea of computers ‑ the look of them, the culture associated with them and above all their amazing capacity to create whole other universes out of simple arithmetic ‑ all these are intoxicating. The digital computer is the most masterly invention of the twentieth century, if not the second millennium. But it is still an organized machine. It was a tractable idea that showed great success in the early stages (say the first forty years), but as far as making living, thinking beings is concerned, it is a bit of a dead end.
In essence, the problem is that the digital computer was modelled on the outward appearance of mental processes, rather than the structures that give rise to them. Even though we know our brains consist of vast numbers of neurones operating in parallel, we each appear to have only one mind. This mind seems to operate in a stepwise way, thinking about or carrying out sequences of actions one at a time. We also get a sense that our conscious thoughts are at the top of a chain of command ‑ we take the big decisions consciously, but then delegate the task of carrying them out to some lower, subconscious parts of our brains. The mind therefore gives us the impression that it is top‑down (employs a chain of command), serial (only one mind per brain, operating one step at a time) and procedural (works in terms of logical procedures to be followed, as in a recipe).
The digital computer is similarly a serial machine because it only carries out one operation at a time. It is procedural because the basic units of a program are actions to be carried out (such as 'add these two numbers and store the result here'). It is also top‑down, since computer programmers tend to design their programs as control hierarchies ‑ a central program carries out commands by issuing orders to subroutines, which in turn invoke subordinate routines to handle the finer details.
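The three properties just described can be seen in miniature in almost any conventional program. Here is a deliberately simple sketch of my own (the chess-flavoured names are invented for illustration): a central routine issues orders to subroutines, which delegate the finer details one level further down, and every rule is fixed in advance by the programmer.

```python
# A sketch of the top-down, serial, procedural style described in the text:
# a chain of command in which each routine delegates to subordinates.
def make_move(board):                 # top of the chain of command
    move = choose_move(board)         # delegate the decision...
    return apply_move(board, move)    # ...then delegate the execution

def choose_move(board):               # subordinate routine
    return legal_moves(board)[0]      # follow a fixed, programmed-in rule

def legal_moves(board):               # finer detail, one level further down
    return sorted(board["moves"])

def apply_move(board, move):          # explicit action: store the result
    board["history"].append(move)
    return board

board = {"moves": ["e2e4", "d2d4"], "history": []}
make_move(board)
print(board["history"])
```

Everything here happens one step at a time, every action is an explicit instruction, and all the "intelligence" in the move choice was put there by the programmer in advance.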
The computer was designed as a model of how the mind seems to work, and the operation of a computer program was assumed to be very similar to thinking. Yet there are flaws in this logic. For one thing, there is a potential pitfall with the top‑down approach. In any chain of command the buck stops with the person at the top, and with a computer program it is all too easy for the buck to stop, not with the top level of the program, but with the programmer. In other words, what seems like intelligent behaviour initiated by the machine (for example the ability to play chess) is often just the stored intelligence of its designer, regurgitated on cue. Also, it doesn't follow that copying the outward appearance of something is the same as recreating the thing itself. Statues are not people: they just look like them. Sooner or later, the mask is bound to slip and the deception will be exposed. Finally, it is really only philosophers and mathematical logicians who would believe that thinking amounts to the formal manipulation of symbols according to set rules. Most of the time the rest of us don't think in neat syllogisms or conduct formal arguments in our heads. More often than not the answers just occur to us in some mysterious way, and we use logic only in retrospect as a means of justifying our conclusions to others or to ourselves.
So the digital computer was in many ways the wrong tool, applied to the wrong job. Ironically, though, this most organized of machines is such a powerful concept that it can actually get around its own limitations, but only if one thinks about it in the right way. This book is very much about how to turn the prim, tightly organized digital computer into a disorganized, self‑organizing machine. We shall use the serial, procedural, top‑down computer as a tool to create new machines that are parallel, relational and bottom‑up. In this direction lies the goal that Turing sought, and I hope he would have approved.
THE ART OF STATING THE PAINFULLY OBVIOUS
I suspect that the early pioneers of lifelike artificial systems were really only reflecting the spirit of their time. They began their work immediately after the Second World War, in an environment dominated by huge, top‑down military organizations and a political environment based squarely on command and control. There was still a rather Victorian attitude to technology, in which machines were seen as a way to dominate and conquer nature. The science of the period was also very elemental, trying to pare the world down to its bare essentials and hoping to explain everything in terms of almost nothing. What is more, these people had grown up in the 1930s, a time of utopian dreams when a sterile and impersonal brave new world actually seemed like a good idea.
Today we are beginning to see things rather differently. Top‑down is giving way to bottom‑up as corporations downsize and outsource, totalitarian states crumble, and the Internet begins to make democracy and individual control a reality. Our fear that we are damaging the environment has changed the way we view our relationship with the natural world, too. We are starting to learn how to work alongside nature rather than against her. Even our science is changing, as chaos theory and complexity theory help us to understand whole systems in ways that we never could when we looked only at their parts. So top‑down is being replaced by bottom‑up, procedures are now seen as less important than relationships, and our ability to cope with things that are complex and parallel (paradoxically, thanks to the invention of the computer) reduces our desire to serialize and simplify them. In short, I believe we are undergoing what Thomas Kuhn called a paradigm shift.
We are starting to look at the world in a different way, and the consequences of such a change of viewpoint can be profound. Paradigm shifts take place when people start to question their previously unspoken basic assumptions. For example, Isaac Newton helped to precipitate a paradigm shift when he pointed out a few basic facts about how objects behave when they are moving. The resulting theory of mechanics changed the world immeasurably. These things appear blindingly obvious to us now, and it seems inexplicable that no one thought of them before. After all, Newton's world‑shattering laws of motion can be paraphrased simply enough:
One: if you push something it will keep moving until something stops it.
Two: the harder you push something, the faster it will accelerate.
Three: it hurts when you kick things because they kick you back just as hard.
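For the record, the three paraphrases above correspond to the standard textbook statements of Newton's laws (standard physics, not something from the text itself):

```latex
% 1. An object keeps its velocity unless acted on by a net force:
\sum \vec{F} = 0 \;\Rightarrow\; \vec{v} = \text{constant}
% 2. Acceleration is proportional to the applied force:
\vec{F} = m\vec{a}
% 3. Forces come in equal and opposite pairs:
\vec{F}_{AB} = -\vec{F}_{BA}
```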
Why it took centuries for anyone to work this out is a mystery. Yet once ideas like this had been stated explicitly, they revolutionized the whole of human thought and changed all our lives. But that's how it is with paradigms ‑ you don't realize when you're stuck in one. For example, if your whole world revolved around the idea of divine intervention and the notion that the way things behave is up to God to decide, then having someone like Galileo stand up and insist that things universally fall at the same rate, regardless of what they are made of or what God intended them for, was bound to be a bit hard to swallow.
In many ways, philosophy is the art of stating the obvious, but until someone actually stands up and says so, most of us continue to leave our assumptions unquestioned. The unquestioned assumptions that underlie the past fifty years of AI research, not to mention several centuries of thinking about the nature of life and mind, are many and varied. Yet things are starting to change.
I have listed below a few of the assumptions that I think some of us are at last beginning to call into question. I believe that our understanding of life, of what we are and of how to build artefacts that share these properties, will only progress when we abandon the old axioms and learn to look at the world in a new way. The rest of this book is a series of linked essays that I hope will help to shed a glimmer of light in the right direction. As I stand here on the first morning of a new millennium, the early light of a new way of thinking about things is already starting to break, not just in AI or neuroscience, but also in politics, economics, engineering and just about everywhere. The twilight stretches back into the past century, but in Turing's day the way forward still lay in shadow. We stand now on the verge of a new century and a new paradigm. But, like all paradigm shifts, first we have to learn to let go of some of our most cherished and unquestioned notions.
Conventional wisdom: Minds can be explained in terms of physics.
My contention: Physics does not even understand matter properly, and is ill‑equipped to understand mind.

Conventional wisdom: Computers can be intelligent.
My contention: Computers can create spaces in which intelligent things can be built.

Conventional wisdom: Control is synonymous with domination.
My contention: Control is as much an effect as a cause, and the idea that control is something you exert is a real handicap to progress.

Conventional wisdom: Intelligent systems must be designed from the top down.
My contention: Intelligent systems must be designed to emerge from the bottom up.

Conventional wisdom: Intellect and intelligence are roughly the same thing. The ability to reason can be separated out and implemented in isolation from other modes of thought.
My contention: Intelligence is first and foremost about common sense. Reasoning (which is only one of many aspects of intelligence) must be built upon a foundation of common sense.

Conventional wisdom: Intelligence is independent of life.
My contention: A system will not be intelligent unless it is also alive.

Conventional wisdom: Intelligence is a unified process and can be implemented directly as an algorithm ‑ a sequence of logical steps.
My contention: Intelligence is a property of populations. Although we seem to have a single stream of consciousness, it can be reproduced only through a parallel process.

Conventional wisdom: The best way to design a machine that thinks is to examine the structure of thought.
My contention: The best way to design a machine that thinks is to examine the processes of biological systems and the behaviour of mechanisms that lie much deeper than conscious thought.