Mark Ward
VIRTUAL ORGANISMS
Pan Books 1999
pg 146

Keywords: Mind-Body Problem - Descartes - Thought - Human Thought - top-down - Intelligence - grounding - thinking machines - symbol manipulation - world model - situatedness - embodiment - meaning

THE MIND-BODY PROBLEM

... the problem was, and still is, that no one could work out how the physical parts that make up our body could give rise to something as insubstantial and spiritual as thoughts. Even if, as some reasoned, our thoughts were something nonphysical such as the soul, there still remained the problem of explaining how such soul-stuff could interact with the physical body. If one was physical and the other not, how could they affect one another?

For a long time research into the qualitative differences between the solid, fleshy body and the spiritual mind involved a philosopher sitting in an armchair and thinking really hard. They did this because they assumed that we have a uniquely privileged access to our own thought processes. Because of this they thought that the best way to study the workings of the mind was to do a lot of thinking and see how it felt. This attitude still prevails among many philosophers and psychologists. They find it hard to believe that they can be deceived about the way our minds work. But there is a growing body of evidence showing that we are deluded about what we see at all times and that we often know nothing about the way our brain really works.

Back in the 17th century the thought that humans could be deceived by their own brains was heretical, and thinking seemed an appropriate method of research.

One of the best explorations of the mind-body problem came about this way. The Meditations of René Descartes were published in 1641 and are written from the perspective of a man, Descartes, sitting and thinking about what he can believe, what he can be sure about. He eventually concludes that he can be sure of nothing but the fact that he is thinking, and therefore that he exists: "cogito ergo sum".

For himself, Descartes was convinced that the soul or mind was the one thing that set man apart from the animals. All those living things without a soul he regarded as mere machines, on a par with clocks. For him the screaming and yowling of an injured dog, cat or cow was akin to the grinding of gears in an engine.

In the three centuries between Descartes (...and today) opinions about bodies and minds changed as our knowledge of the workings of the brain grew... but one thing has remained the same. With very few exceptions everyone studying minds and brains has begun with the human brain and worked from the top down.

There were good reasons for doing this. For 17th century philosophers it made sense because humans were the only creatures with souls and therefore the only ones with the mind-body problem. In the 20th century interest still centred on human brains because the problem remains, albeit in a different form. Few people are now looking for ways to explain how the soul can interact with the brain; now it is a quest to "explain cleverness..., in terms of suitably orchestrated throngs of stupid things". The human cortex is much deeper and has far more folds than that of any other animal. By beginning at the top and working down it was thought that it would be easier to expose and extract the essence of human thinking and then replicate it elsewhere.

Early AI researchers were not naive enough to study the brain as one single entity. The billions of neurons it is made of are daunting enough today and were much more so in the 1940s and 50s. The technology of the time was not up to making a robot see or hear, so they concerned themselves with abstract realms of thought such as chess playing or geometry. They did this partly because an ability to play chess well was seen as a considerable intellectual challenge and partly because they thought that success was likely to come quicker if they did not have to engage the world when building thinking machines. Recognising the scale of the task they were setting themselves, they decided to tackle it in piecemeal fashion. This fitted with their conception of how the brain worked. They conceived different functions such as language and vision to be situated in separate areas of the brain. Because of this, trying to replicate the thought processes in isolation did not seem too great a crime. To make it even easier they decided to tackle those aspects of intelligence that were entirely internal and needed no connection with the outside world.

... the brain was primarily a really good searching mechanism. (AI researchers) thought that human cognition, in the abstract realms they decided to study, was a matter of searching through all the possible explanations for an event and choosing the one that fitted best or made the most sense in the circumstances.

With chess this would be a move that took the player making it closer to victory, in geometry one closer to producing a proof. The history of AI for the 30 or so years following the Dartmouth conference (1956) can be summed up as a quest to find out how humans choose between possible explanations for an event.
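A minimal sketch of that kind of search, assuming hypothetical `legal_moves` and `score` helpers: generate every candidate, estimate how much closer each one takes the player to victory (or to a proof), and keep the best.

```python
def choose_best_move(position, legal_moves, score):
    """Pick the candidate that scores best against the goal.

    `position` is the current state (e.g. a chess position),
    `legal_moves(position)` yields the candidate moves and
    `score(position, move)` estimates how much closer a move takes
    the player to victory. Both helpers are assumptions for the sketch.
    """
    best_move, best_score = None, float("-inf")
    for move in legal_moves(position):     # enumerate every possibility
        value = score(position, move)      # how much closer to the goal?
        if value > best_score:
            best_move, best_score = move, value
    return best_move
```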

If it proved impossible to emulate human search mechanisms then these researchers had a second goal, which was to create methods of scanning through a huge list of alternatives that approached human-like levels of performance.

Often this searching for answers involved comparing what the computer was being told with some internal memory store made up of symbols. Once the computer knew what it was dealing with it would work out a plan of action and then carry it out. Then it would take more input from its sensors, to see how the world had changed or how its opponent had responded, and the cycle would begin again.

The early AI researchers had good reasons for doing this. They thought the brain worked in the same way. It was widely believed that inside the head of every person was a miniature model of the world. In this model world everything was represented symbolically. Memory was believed to be made up of a huge collection of symbols, each one representing something: anything and everything that we encounter during our daily lives. They reasoned that we manipulate the symbols and the model when deciding what to do. Once we have proved the plan works in our heads we pass the plan to our limbs and carry it out.
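Sketched in code, the cycle might look like this; the `sensors`, `world_model` and `actuators` objects and their methods are illustrative assumptions, not anything described in the book.

```python
class SymbolicAgent:
    """A sketch of the sense-model-plan-act cycle described above."""

    def __init__(self, sensors, world_model, actuators):
        self.sensors = sensors            # hypothetical sensor interface
        self.world_model = world_model    # hypothetical symbolic memory store
        self.actuators = actuators        # hypothetical motor interface

    def step(self):
        observation = self.sensors.read()        # take input from the sensors
        self.world_model.update(observation)     # match it against the symbolic model
        plan = self.world_model.make_plan()      # work out a plan on the internal model
        for action in plan:
            self.actuators.execute(action)       # carry the plan out in the world
        # then sense again to see how the world, or an opponent, has responded
```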

Hand in hand with this belief in the importance of search mechanisms went a faith in computer power... The belief was that as computers became more powerful it would become easier to mimic the brain. When a suitably powerful computer was coupled with a fast search programme an intelligent machine would be the result.


pg 154

INTELLIGENCE

... many of the people working in AI today are trying to solve problems that have nothing to do with intelligence or intelligent behaviour. Even if these problems are solved he (Rodney Brooks) doubts they will bring us any closer to understanding what intelligence is, how it emerges and how to emulate it. Brooks points out that during the early years of AI there were signs that people were willing to tackle real-world issues... When they considered what it would take to produce an intelligent robot, they assumed that it would need some way to interact directly with the world.

pg 155

SITUATEDNESS

...what sets Brooks apart from the GOFAI (Good Old-Fashioned AI) community is this insistence on dealing with the real world. For him this "situatedness" is key; no simulations will do. There is a real advantage to building a robot that walks or drives around the lab because this removes the need for any internal model building. The robot responds to what it finds rather than trying to remember what the world looks like. The world can simply be taken for what it is because it "really is a rather good model of itself". This concept of using the world as your memory bank is used widely in the animal kingdom; ants in particular exploit the world in this way.
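A purely reactive controller in this spirit keeps no map at all and simply couples what the sensors report right now to an action; the `robot` object and its methods below are hypothetical, a sketch rather than Brooks's own code.

```python
def reactive_step(robot):
    """One control step with no internal world model: the robot
    consults the world itself instead of a stored representation."""
    if robot.bumper_pressed():     # an obstacle is there, in the real world, now
        robot.turn(degrees=90)     # react to it directly
    else:
        robot.drive(speed=0.2)     # otherwise keep wandering
```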


EMBODIMENT AND AUTONOMY

Hand in hand with the emphasis on situatedness and being in the world is a similar insistence on embodiment and autonomy. By this Brooks means that the robot must have a body and be able to wander around the world at will. There should be no cords to computers that do all the robot's thinking for it.


MEANING

This is important because the real world is where the meanings we share are grounded. Connecting with and acting on the world gives meaning to what a robot does. The physical world is where our mental abstractions "bottom out". Everything animals do has to take account of the facts of existence. They, and we, learn from the physical world what it means to be hot, hungry or frustrated.

A brick in the path of a robot finding its way across a room cannot be wished away; it demands attention. Before humans ever enter a school building they will have spent years being educated by the world. In exactly the same way, when a robot learns to cope with stubborn objects, the interaction gives meaning to what it does. Without an ongoing participation in and perception of the world there is no meaning for an agent. Everything is random symbols.

pg 158

SHORT TERM MEMORY

... once a robot had been given low-level abilities, Brooks started working on other layers of behaviour. In stark contrast to the GOFAI approach, where the outputs of one level of control become the inputs to another, Brooks isolated every level. There was no interaction between them. Once something was proved to work it was left alone and never tinkered with again. Brooks had a good reason for doing this: the same thing happens in evolution. Once an animal has acquired an adaptation it is rarely discarded.

This is not to say that the outputs from separate behaviours never conflict; they do. The outputs from all the sensors and higher-level behaviours are brought together into a common short-term memory store. As the result of one sensing-and-processing loop arrives it overwrites any data that was already there. Once the robot has finished doing what was taking up its attention it moves on to the next instruction it finds in this memory store. Computationally the robot is behaving like a finite state machine. As it carries out different behaviours it is kicked into a separate state. The rules combine to produce novelty, innovation and surprise, just as in the cellular automaton, and in real life.
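A rough sketch of that arrangement, with all names assumed rather than taken from Brooks's own system: isolated behaviour layers write into one short-term store, newer results overwrite older ones, and the robot acts on whatever it finds there next.

```python
class ShortTermMemory:
    """One shared store: a newer result for a slot overwrites the older one."""

    def __init__(self):
        self.slots = {}

    def write(self, slot, value):
        self.slots[slot] = value        # later results overwrite earlier ones

    def next_instruction(self):
        # hand back whichever instruction is currently waiting, if any
        return self.slots.popitem() if self.slots else None


def run(robot, layers, memory):
    """Each isolated behaviour layer senses and writes into the shared store;
    the robot then acts on what it finds there, like a finite state machine
    being kicked from one state to the next."""
    while True:
        for layer in layers:                   # the layers never talk to each other
            layer.sense_and_write(memory)
        instruction = memory.next_instruction()
        if instruction:
            robot.execute(instruction)         # acting moves the robot into a new state
```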

... one of the key ideas in ALife is the recognition that the essence of any machine or computational system can be abstracted. What a machine does can be isolated from the physical device and the way it does it, and nothing will be lost in the process. This is the key point to take away from Turing's work. Consequently these abstract specifications can be transferred to any number of other formally equivalent systems.

If we accept that living things are collections of physical finite state machines then we have to grant that there are properties of them that can be abstracted and captured in other systems, such as robots. As Chris Langton puts it: "the principal assumption made in Artificial Life is that the 'logical form' of an organism can be separated from its material basis of construction, and that 'aliveness' will be found to be a property of the former, not the latter".
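As a toy illustration of that separation (the states and inputs here are invented), the "logical form" of a simple machine can be written as a bare transition table that any formally equivalent system could run:

```python
# The "logical form" of a machine is just its transition table,
# independent of whatever physically runs it.
TRANSITIONS = {                       # (state, input) -> next state
    ("resting", "touch"): "fleeing",
    ("fleeing", "clear"): "resting",
}

def step(state, stimulus):
    """Advance the abstract finite state machine by one input."""
    return TRANSITIONS.get((state, stimulus), state)

# The same table could drive a simulated ant, a wheeled robot or a
# program on any formally equivalent computer; the logic is unchanged.
print(step("resting", "touch"))       # -> "fleeing"
```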

