James Kennedy

Russell C. Eberhart

SWARM INTELLIGENCE

Academic Press 2001

Preface
Homo sapiens - literally, "intelligent man" - has adapted to nearly every environment on the face of the earth, below it, and as far above it as we can propel ourselves. We must be doing something right.

In this book we argue that what we do right is related to our sociality. We will investigate that elusive quality known as intelligence, which is considered first of all as a trait of humans and second as something that might be created in a computer, and our conclusion will be that whatever this "intelligence" is, it arises from interactions among individuals.

We humans are the most social animals: we live together in families, tribes, cities, nations, behaving and thinking according to the rules and norms of our communities, adopting the customs of our fellows, including the facts they believe and the explanations they use to tie those facts together. Even when we are alone, we think about other people, and even when we think about inanimate things, we think using language – the medium of interpersonal communication.

Almost as soon as the electronic computer was invented, philosophers and scientists began to ask questions about the similarities between computer programs and minds. Computers can process symbolic information, can derive conclusions from premises, can store information and recall it when it is appropriate, and so on - all things that minds do. If minds can be intelligent, those thinkers reasoned, there was no reason that computers could not be. And thus was born the great experiment of artificial intelligence.

The early AI researchers made an important assumption, so fundamental that it was never explicitly stated or consciously acknowledged. They assumed that cognition is something inside an individual's head. An AI program was modeled on the vision of a single disconnected person, processing information inside his or her brain, turning the problem this way or that, rationally and coolly. Indeed, this is the way we experience our own thinking, as if we hear private voices and see private visions.


But this experience can lead us to overlook what should be our most noticeable quality as a species: our tendency to associate with one another, to socialize. If you want to model human intelligence, we argue here, then you should do it by modeling individuals in a social context, interacting with one another.

We do not mean the kinds of interaction typically seen in multiagent systems, where autonomous subroutines perform specialized functions. Agent subroutines may pass information back and forth, but the subroutines are not changed as a result of the interaction, as people are.

In real social interaction, information is exchanged, and so is something else, perhaps more important: individuals exchange rules, tips, and beliefs about how to process that information. Thus a social interaction typically results in a change in the thinking process - not just the contents - of the participants.

Social behavior helps individual species members adapt to their environment, especially by providing individuals with more information than their own senses can gather. You sniff the air and detect the scent of a predator; I, seeing you tense in anticipation, tense also and grow suspicious. Numerous other advantages give social animals a survival edge as well, making social behavior the norm throughout the animal kingdom.

We argue here against the view, widely held in cognitive science, of the individual as an isolated information-processing entity. We wish to write computer programs that simulate societies of individuals, each working on a problem while perceiving the problem-solving endeavors of its neighbors and being influenced by those neighbors' successes. What would such programs look like?
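As a rough, purely illustrative sketch (not the particular algorithm developed later in this book), consider a population of candidate solutions in which each individual repeatedly imitates the most successful solution found in its immediate neighborhood. The objective function, ring neighborhood, and parameter values below are assumptions chosen only for the example.

import random

def sphere(x):
    # Toy objective for the sketch: minimize the sum of squares.
    return sum(v * v for v in x)

def social_search(f, dim=2, pop_size=10, steps=100, influence=0.5, noise=0.1):
    # A "society" of random candidate solutions.
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(steps):
        scores = [f(ind) for ind in pop]
        for i, ind in enumerate(pop):
            # Ring neighborhood: each individual observes itself and two neighbors.
            neighbors = [(i - 1) % pop_size, i, (i + 1) % pop_size]
            best = min(neighbors, key=lambda j: scores[j])
            # Move partway toward the best neighbor, plus a little random exploration.
            for d in range(dim):
                ind[d] += influence * (pop[best][d] - ind[d]) + random.gauss(0, noise)
    return min(pop, key=f)

print(social_search(sphere))  # drifts toward the optimum at the origin

Even this crude imitation rule tends to draw a scattered population toward better regions of the problem space; no individual solves the problem alone.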

We explore ideas about intelligence arising in social contexts. Sometimes we talk about people and other living - carbon-based - organisms, and at other times we talk about silicon-based entities existing in computer programs. To us, a mind is a mind, whether embodied in protoplasm or in semiconductors, and intelligence is intelligence. The important thing is that minds arise from interaction with other minds.


MIND

Mind is a term we use in the ordinary sense, which is of course not very well defined. Generally, mind is "that which thinks." The colloquial concept of mind contains two aspects: the phenomenological and the psychological.

The phenomenological aspect of mind has to do with the conscious experience of thinking, what it is like to think, while the psychological aspect has to do with the function of thinking, the information processing that results in observable behavior.

The connection between conscious experience and cognitive function is neither simple nor obvious.

Because consciousness is not observable, falsifiable, or provable, and because this book is about computer programs that simulate human behavior, we mostly ignore the phenomenology of mind, except where it is relevant to explaining function. Sometimes the experience of being human makes it harder to perceive functional cognition objectively, and we feel a responsibility to note where first-person subjectivity steers the folk psychologist away from a scientific view.

 
