Francisco Varela
Ethical Know-How
Action, Wisdom, and Cognition
Stanford University Press 1999

pp. 52-60
Keywords: cognitive self; identity as emergence through a distributed process; emergent properties of interneural networks; insect colonies as superorganisms; emergent properties of complex systems; representation of the external world; artificial neural networks; situated cognition; perspective; sensorimotor viability


The nature of the identity of the cognitive self just discussed is one of emergence through a distributed process. 

The emergent properties of an interneural network are enormously rich and merit further discussion at this point. What I wish to underscore here is the relatively recent (and stunning!) conclusion that lots of simple agents having simple properties may be brought together, even in a haphazard way, to give rise to what appears to an observer as a purposeful and integrated whole, without the need for central supervision. We have already touched on this theme when discussing the constant arising and subsiding of neuronal ensembles underlying behavior.

I wish at this point to address this issue more generally. I base my conclusions on contemporary studies of various complex systems inspired by biological examples.

One of the most compelling of these examples is the social insect colony. The beehive and the ants' nest have long been considered "superorganisms," but this was little more than a metaphor until recently. It was not until the 1970s that detailed experiments were made whose results could not be explained without taking into account the entire colony.

In one particularly elegant experiment, the most efficient nurses in a Neoponera apicalis colony were removed to form a subcolony. These nurses radically changed social status, foraging more and nursing less. The contrary happened in the main colony: formerly low-level nurses increased their nursing activity. The whole colony, however, showed evidence of both configurational identity and memory. When the efficient nurses were returned to the main colony, they resumed their previous status.
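The qualitative shape of this experiment can be captured in a few lines of simulation. Below is a minimal sketch, not from Varela's text, of the standard response-threshold model of insect task allocation (in the spirit of Bonabeau and colleagues); every name and parameter is an illustrative assumption. Each ant engages in nursing with a probability set by a shared brood stimulus and its own fixed threshold, and nothing else; yet removing the most responsive ants lets the stimulus climb until formerly reluctant ants take over, and returning them restores the original division of labor.

```python
# Minimal response-threshold sketch of colony task allocation.
# All parameters are illustrative assumptions, not data from the study.
import random

random.seed(1)

N = 20
# Sorted so the first ants have the lowest thresholds: the "eager" nurses.
thresholds = sorted(random.uniform(1, 10) for _ in range(N))

def step(active_ids, stimulus, demand=2.0, work=0.5):
    """One time step: brood demand raises the stimulus; each active
    nurse lowers it. No ant sees anything but the shared stimulus."""
    nursing = []
    for i in active_ids:
        p = stimulus**2 / (stimulus**2 + thresholds[i]**2)
        if random.random() < p:
            nursing.append(i)
    stimulus = max(0.0, stimulus + demand - work * len(nursing))
    return nursing, stimulus

def run(active_ids, steps=300):
    counts = {i: 0 for i in active_ids}
    stimulus = 1.0
    for _ in range(steps):
        nursing, stimulus = step(active_ids, stimulus)
        for i in nursing:
            counts[i] += 1
    return counts

everyone = list(range(N))
eager, rest = everyone[:5], everyone[5:]

def summary(counts):
    return (sum(counts.get(i, 0) for i in eager),
            sum(counts.get(i, 0) for i in rest))

print("full colony (eager, rest):", summary(run(everyone)))
print("eager removed (eager, rest):", summary(run(rest)))   # rest nurse more
print("reunited (eager, rest):", summary(run(everyone)))    # status resumed
```

No individual stores the colony's "memory" here; the configuration re-forms because the same local thresholds meet the same global stimulus dynamics.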

What is particularly striking about the insect colony is that we readily admit that its separate components are individuals and that it has no center or localized "self." Yet the whole does behave as a unit and as if there were a coordinating agent present at its center. This corresponds exactly to what I mean by a selfless (or virtual) self: a coherent global pattern that emerges from the activity of simple local components, which seems to be centrally located, but is nowhere to be found, and yet is essential as a level of interaction for the behavior of the whole.

The import of this model of how complex systems exhibit emergent properties through the coordinated activity of simple elements is, in my eyes, quite profound for our understanding of cognitive properties. It introduces an explicit alternative to the dominant computationalist tradition, which postulates that sensory inputs are successively elaborated to reconstitute a centralized and internal representation of the external world.

Applied to the brain, this new model explains why we find networks and subnetworks interacting promiscuously without any real hierarchy of the sort typical of computer algorithms. To put this differently, in the brain there is no principled distinction between software and hardware or, more precisely, between symbols and nonsymbols.

I raise this point to help the reader break the hold that computationalism has had on our discourse in the area for so many years and resist the consequent tendency to conceptualize the cognitive self as some computer program or high-level computational description, for it is not that sort of thing at all. 

The cognitive self is its own implementation: its history and its action are of one piece. In fact, all we find in modern artificial neural network machines underlying the regularities we call their behavior or performance are interactions between ensembles. We may see that some of these ensembles recur regularly enough to describe them as being program-like, but this is another matter. Although artificially constructed, such emerging ensembles are not "computations" in the sense that their dynamics are formally specifiable as implementations of high-level algorithms.
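This can be made concrete with a small attractor network. The sketch below is my own illustration, not Varela's: a Hopfield-style network whose only ingredients are pairwise interactions among units. What an observer might describe as a stored "program" is just an attractor, a recurring ensemble that re-forms out of local activity.

```python
# Hopfield-style attractor network: recurring ensembles from pure
# pairwise interactions, with no symbolic level being "implemented."
import numpy as np

rng = np.random.default_rng(0)
n = 64
patterns = rng.choice([-1, 1], size=(3, n))      # three candidate ensembles

# Hebbian weights: each connection knows only its two endpoints.
W = sum(np.outer(p, p) for p in patterns) / n
np.fill_diagonal(W, 0)

def settle(state, sweeps=10):
    """Asynchronous updates; every unit follows only its local input."""
    state = state.copy()
    for _ in range(sweeps):
        for i in rng.permutation(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# Corrupt one pattern, then watch the ensemble re-form by itself.
noisy = patterns[0] * np.where(rng.random(n) < 0.2, -1, 1)
recovered = settle(noisy)
print("overlap with original:", (recovered @ patterns[0]) / n)  # close to 1.0
```

There is no line of this network one could point to as the "rule" that restores the pattern; the regularity lives only in the ensemble of interactions, which is the sense in which the self is its own implementation.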

Neural networks even in their fine detail are not like a machine language, since there is simply no transition from an elemental operational level with a semantics to a higher, emergent level where behavior occurs. If there were, the classical computer wisdom would immediately apply: we could ignore the hardware since it adds nothing of significance to the actual computation (other than constraints of time and space).

In contrast, in distributed network models these "details" are precisely what makes a global effect possible, and that is why they mark a sharp break with the tradition in AI. Naturally, this reinforces the parallel conclusions that apply to natural neural networks in the brain, as we discussed before.

Now this demands that we clarify the second aspect of the self to be addressed: its mode of relation with the environment. 

Ordinary life is necessarily one of situated agents, continually coming up with what to do in the face of ongoing parallel activities in their various perceptuo-motor systems. This continual redefinition of what to do is not at all like a plan selected from a repertoire of potential alternatives; it is enormously dependent on contingency and improvisation, and is more flexible than any plan can be.

A situated cognitive entity has - by definition - a perspective.

This means that it isn't related to its environment "objectively," independently of the system's location, heading, attitudes, and history. Instead, it relates to it in relation to the perspective established by the constantly emerging properties of the agent itself and in terms of the role such running redefinition plays in the coherence of the entire system.

Here we must sharply differentiate between "environment" and "world," for the cognitive subject is "in" both, but not in the same way. On the one hand, a body interacts with its environment in a straightforward way. These interactions are of the nature of macrophysical encounters - sensory transduction, mechanical performance, and so on - nothing surprising about them. However, this coupling is possible only if the encounters are embraced from the perspective of the system itself. This embrace requires the elaboration of a surplus signification based on this perspective; it is the origin of the cognitive agent's world.

Whatever is encountered in the environment must be valued or not and interacted with or not. This basic assessment of surplus signification cannot be divorced from the way in which the coupling event encounters a functioning perceptuo-motor unit; indeed, such encounters give rise to intentions (I am tempted to say "desires"), and intentions are unique to living cognition.

To put this another way, the nature of the environment for a cognitive self acquires a curious status: it is that which lends itself to a surplus of signification. Like a jam session, the environment inspires the neural "music" of the cognitive system. Indeed, the cognitive system cannot live without this constant coupling with its environment and the constantly emerging regularities it provides; without the possibility of coupled activity the system would become a mere solipsistic ghost.

For instance, light and reflectance (among many other macrophysical parameters such as edges and textures, but let us simplify for the argument's sake) lend themselves to a wide variety of color spaces, depending on the nervous system involved in that encounter. During their respective evolutionary paths, fishes, birds, mammals, and insects have brought forth a variety of different color spaces, not only with quite distinct behavioral significance, but with different dimensionalities. Thus differences in color vision from one animal to the next are not a matter of a greater or lesser ability to resolve colors.

Color is demonstrably not a property that is to be "recovered" from environmental "inputs" in some unique way. Color is a dimension that shows up only in the phylogenetic dialogue between an environment and the history of an active autonomous self that partly defines what counts as an environment. Light and reflectances provide a mode of coupling, a perturbation that triggers, that provides an occasion for, the enormous informative capacity of neural networks to constitute sensorimotor correlations and hence put into action their capacity for imagining and presenting.
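A toy computation, entirely my own and with made-up Gaussian sensitivities standing in for real photoreceptors, illustrates the dimensionality point: one and the same light-times-reflectance signal, projected through two, three, or four receptor types, yields color "points" of different dimensionality for different nervous systems.

```python
# One environmental encounter, three hypothetical nervous systems:
# the dimensionality of "color" depends on the receptors brought to it.
import numpy as np

wavelengths = np.linspace(400, 700, 61)                 # visible range, nm

def receptor(peak, width=40.0):
    """Made-up Gaussian sensitivity curve; a stand-in for a cone type."""
    return np.exp(-0.5 * ((wavelengths - peak) / width) ** 2)

eyes = {
    "dichromat":    np.stack([receptor(p) for p in (440, 560)]),
    "trichromat":   np.stack([receptor(p) for p in (440, 530, 560)]),
    "tetrachromat": np.stack([receptor(p) for p in (370, 445, 508, 565)]),
}

illuminant = np.ones_like(wavelengths)                  # flat daylight stand-in
reflectance = np.exp(-0.5 * ((wavelengths - 620) / 30) ** 2)  # "reddish" surface
light = illuminant * reflectance                        # the shared encounter

for name, sensitivities in eyes.items():
    code = sensitivities @ light                        # receptor catches
    print(f"{name}: {len(code)}-dimensional color point {np.round(code, 1)}")
```

Nothing in the light itself fixes whether the encounter becomes a two-, three-, or four-dimensional color; that is settled by the history of the perceiving system.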

It is only after all this has happened, after a mode of coupling becomes regular and repetitive, like colors in our - and others' - worlds, that we observers, for ease of language, say that color corresponds to or represents an aspect of the world.

A dramatic and recent example of this surplus signification and the dazzling performance of the brain as a generator of neural "narratives" is provided by the technology of the so-called virtual realities. 

A helmet fitted with cameras over the eyes and a glove or suit with electrical transducers for motions are linked, not through the usual coupling with the environment, but through a computer. Thus each movement of the hand or body corresponds to images according to principles entirely under the control of the programmer.

For example, each time my hand, which appears as a "virtual" iconic hand in my image, points to a place, the image that follows simulates flying to the place pointed at. Visual perception and motions thus give rise to regularities that are proper to this new manner of perceptuo-motor coupling.
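A minimal sketch of such a coupling, with hypothetical interfaces rather than any real VR system's API, might look as follows: the glove reading is remapped so that a pointing gesture flies the virtual viewpoint instead of merely moving the hand image.

```python
# Hypothetical sensorimotor remapping: pointing becomes flying.
# None of these interfaces correspond to a real VR system.
from dataclasses import dataclass, field

@dataclass
class Viewpoint:
    position: list = field(default_factory=lambda: [0.0, 0.0, 0.0])

def glove_reading():
    """Stand-in for the glove transducers: a gesture plus a unit
    vector giving the direction the virtual hand points."""
    return "point", [0.0, 0.0, -1.0]

def couple(view, gesture, direction, speed=0.5):
    """The coupling rule is whatever the programmer decrees: here a
    pointing gesture translates the whole viewpoint along the ray."""
    if gesture == "point":
        view.position = [p + speed * d for p, d in zip(view.position, direction)]

view = Viewpoint()
for _ in range(30):                     # thirty frames of sustained pointing
    gesture, direction = glove_reading()
    couple(view, gesture, direction)
print(view.position)                    # the body has "flown" 15 units forward
```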

What is most significant for me here is how quickly this "virtual" world comes to seem real: we inhabit a body within this new world after about fifteen minutes or so "under the headset." And as far as this world is concerned, the experience of flying through walls or diving into fractal galaxies seems perfectly "real."

This "reality shift" occurs despite the poor quality of the image, the low sensitivity of the sensors, and the limited bandwidth of the interface between sensory and image surfaces available through a program running on a personal computer. The nervous system is such a gifted synthesizer of regularities that any basic mate-rial suff~ces as an environment to bring forth a compelling world.

Even the very pragmatically oriented field of artificial intelligence is beginning to study the situatedness of agents endowed with progressively richer internal self-organizing modules.

When the synthesis of intelligent behavior is approached in such an incremental manner, with strict adherence to the sensorimotor viability of an agent, the notion that the world is a source of information to be represented simply disappears. 

The autonomy of the cognitive self comes fully into focus. Thus in Rodney Brooks's proposal for a new robotics (or, as he says, for a nouvelle AI) his minimal creatures join together in various activities through a rule of cohabitation between them. This engineering strategy is homologous to an evolutionary pathway through which modular subnetworks intertwine with one another in the brain. This new approach to artificial intelligence should result in the creation of devices that are more truly intelligent, autonomous, and sense-giving than the brittle information processors constructed to date, which depend on a pre-given environment or an optimal plan.

It is interesting to note that in this paper Brooks also traces the origin of what he describes as the "deception of AI" to the tendency in AI (and in the rest of cognitive science as well) to abstraction, especially for the purpose of factoring out situated perceptual and motor skills. As I have argued here (and as Brooks argues for his own reasons), such abstraction misses the essence of cognitive intelligence, which resides only in its embodiment.

It is as if one could separate cognitive problems into two types: those which can be solved through abstraction and those which cannot. 

Those of the second type typically involve perceptual and motor skills of agents in unspecified environments. When cognitive intelligence is approached from this self-situated perspective, it quickly becomes obvious that there is no place where perception could deliver a representation of the world in the traditional sense. The world shows up through the enactment of the perceptuo-motor regularities. As Brooks puts it:

Just as there is no central representation there is no central system. Each activity layer connects perception to action directly. It is only the observer of the creature who imputes a central representation or central control. The creature itself has none: it is a collection of competing behaviors. Out of the local chaos of their interactions there emerges, in the eye of the observer, a coherent pattern of behavior.
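In the spirit of Brooks's architecture, and as my own simplification of it, such a creature can be sketched in a few lines: each layer connects percepts to an action directly, an active layer subsumes those beneath it, and nowhere does a central model of the world appear.

```python
# Subsumption-style sketch (my simplification of Brooks; sensor names
# and thresholds are hypothetical). No layer consults a world model.

def avoid(percepts):
    """Reflex layer: turn away from anything too close."""
    if percepts["obstacle_distance"] < 0.3:
        return "turn_away"

def seek_light(percepts):
    """Higher layer: approach a strong light when one is sensed."""
    if percepts["light_level"] > 0.8:
        return "approach_light"

def wander(percepts):
    """Default layer: always has an opinion."""
    return "wander"

# Highest priority first: an active layer subsumes everything below it.
LAYERS = [avoid, seek_light, wander]

def act(percepts):
    for layer in LAYERS:
        action = layer(percepts)
        if action is not None:
            return action

print(act({"obstacle_distance": 0.1, "light_level": 0.9}))  # turn_away
print(act({"obstacle_distance": 2.0, "light_level": 0.9}))  # approach_light
print(act({"obstacle_distance": 2.0, "light_level": 0.1}))  # wander
```

Even in so small a sketch, the coherent pattern of behavior is visible only to the observer reading the printed actions; the creature itself is nothing but three competing layers.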
