Steven Johnson
EMERGENCE
The Connected Lives of Ants, Brains, Cities, and Software
Scribner 2004


p. 195
The Mind Readers

What are you thinking about right now? Because my words are being communicated to you via the one-way medium of the printed page, this is a difficult question for me to answer. But if I were presenting this argument while sitting across a table from you, I'd already have an answer, or at least an educated guess—even if you'd been silent the entire time. Your facial gestures, eye movements, and body language would all be sending a steady stream of information about your internal state—signals that I would intuitively pick up and interpret. I'd see your eyelids droop during the more contorted arguments, note the chuckle at one of my attempts at humor, register the way you sit upright in the chair when my words get your attention. I could no more prohibit my mind from making those assessments than you could stop your mind from interpreting my spoken words as language. (Assuming you're an English speaker, of course.) We are both locked in a communicational dance of extraordinary depth—and yet, amazingly, we're barely aware of the process at all.


Human beings are innate mind readers. Our skill at imagining other people's mental states ranks up there with our knack for language and our opposable thumbs. It comes so naturally to us and has engendered so many corollary effects that it's hard for us to think of it as a special skill at all. And yet most animals lack the mind-reading skills of a four-year-old child. We come into the world with a genetic aptitude for building "theories of other minds," and adjusting those theories on the fly, in response to various forms of social feedback.

In the mid-eighties, the UK psychologists Simon Baron-Cohen, Alan Leslie, and Uta Frith conducted a landmark experiment to test the mind-reading skills of young children. They concealed a set of pencils within a box of Smarties, the British candy. They asked a series of four-year-olds to open the box and make the unhappy discovery of the pencils within. The researchers then closed the box up and ushered a grown-up into the room. The children were then asked what the grown-up was expecting to find within the Smarties box—not what the grown-up would actually find, mind you, but what he or she was expecting to find.
Across the board, the four-year-olds gave the right answer: the clueless grown-up was expecting to find Smarties, not pencils. The children were able to separate their own knowledge about the contents of the Smarties box from the knowledge of another person. They grasped the distinction between the external world as they perceived it, and the world as perceived by others.
The psychologists then conducted the same experiment with three-year-olds, and the exact opposite result came back. The children consistently assumed that the grown-up would expect to find pencils in the box, not candy. They had not yet developed the faculty for building models of other people's mental states—they were trapped in a kind of infantile omniscience, where the knowledge you possess is shared by the entire world. The idea of two radically distinct mental states, each containing different information about the world, exceeded the faculties of the three-year-old mind, but it came naturally to the four-year-olds.

Our closest evolutionary cousins, the chimpanzees, share our aptitude for mind reading. The Dutch primatologist Frans de Waal tells a story of calculating sexual intrigue in his engaging, novel-like study, Chimpanzee Politics. A young, low-ranking male (named, appropriately enough, Dandy) decides to make a play for one of the females in the group. Being a chimpanzee, he opts for the usual chimpanzee method of expressing sexual attraction, which is to sit with your legs apart within eyeshot of your "objet de désir" and reveal your erection. (Try that approach in human society, of course, and you'll usually end up with a restraining order.) During this particular frisky display, Luit, one of the high-ranking males, happens upon the "courtship" scene. Dandy deftly uses his hands to conceal his erection so that Luit can't see it, but the female chimp can. It's the chimp equivalent of the adulterer saying, "This is just our little secret, right?"

De Waal's story—one of many comparable instances of primate intrigue—showcases our close cousins' ability to model the mental states of other chimps. As in the Smarties study, Dandy is performing a complicated social calculus in his concealment strategy: he wants the female chimp to know that he's enamored of her, but wants to hide that information from Luit. That kind of thinking seems natural to us (because it is!), but to think like that you have to be capable of modeling the contents of other primate minds. If Dandy could speak, his summary of the situation might read something like this: she knows what I'm thinking; he doesn't know what I'm thinking; she knows that I don't want him to know what I'm thinking. In that crude act of concealment, Dandy demonstrates that he possesses a gift for social imagination missing in 99.99 percent of the world's living creatures. To make that gesture, he must somehow be aware that the world is full of imperfectly shared information, and that other individuals may have a perspective on the world that differs from his. Most important (and most conniving), he's capable of exploiting that difference for his own benefit. That exploitation—a furtive pass concealed from the alpha male—is only possible because he is capable of building theories of other minds.

Is it conceivable that this skill simply derives from a general increase in intelligence? Could it be that humans and their close cousins are just smarter than all those other species who flunk the mind-reading test? In other words, is there something specific to our social intelligence, something akin to a module hardwired into the brain's CPU—or is the theory of other minds just an idea that inevitably occurs to animals who reach a certain threshold of general intelligence?

We are only now beginning to build useful maps of the brain's functional topography, but already we see signs that "mind reading" is more than just a by-product of general intelligence. Several years ago, the Italian neuroscientist Giacomo Rizzolatti discovered a region of the brain that may well prove to be integral to the theory of other minds. Rizzolatti was studying a section of the ventral premotor area of the monkey brain, a region of the frontal lobe usually associated with muscular control. Certain neurons in this field fired when the monkey performed specific activities, like reaching for an object or putting food in its mouth. Different neurons would fire in response to different activities. At first, this level of coordination suggested that these neurons were commanding the appropriate muscles to perform certain tasks. But then Rizzolatti noticed a bizarre phenomenon. The same neurons would fire when the monkey observed another monkey performing the task. The pound-your-fist-on-the-floor neurons would fire every time the monkey saw his cellmate pounding his fist on the floor.

Rizzolatti called these unusual cells "mirror neurons," and since his announcement of the discovery, the neuroscience community has been abuzz with speculation about the significance of the "monkey see, monkey do" phenomenon. It's conceivable that mirror neurons exist for more subtle, introspective mental states—such as desire or rage or tedium—and that those neurons fire when we detect signs of those states in others. That synchronization may well be the neurological root of mind reading, which would mean that our skills are more than just an offshoot of general intelligence and rely instead on our brains being wired a specific way. We know already that specific regions are devoted to visual processing, speech, and other cognitive skills. Rizzolatti's discovery suggests that we may also have a module for mind reading.

The modular theory is also supported by evidence of what happens when that wiring is damaged. Many neuroscientists now believe that autistics suffer from a specific neurological disorder that inhibits their ability to build theories of other minds—a notion that will instantly ring true for anyone who has experienced the strange emotional distance, the radical introversion, that one finds in interacting with an autistic person. Autism, the argument goes, stems from an inability to project outside one's own head and imagine the mental life of others. And yet autistics regularly fare well on many tests of general intelligence and often display exceptional talents at math and pattern recognition. Their disorder is not a disorder of lowered intellect. Rather, autistics lack a particular skill, the way others lack the faculty of sight or hearing. They are mind blind.

Still, it can be hard to appreciate how rare a gift our mind reading truly is. For most of us, awareness of other minds seems at first blush like a relatively simple achievement—certainly not something you'd need a special cognitive tool for. I know what it's like inside my head, after all—it's only logical that I should imagine what's inside someone else's. If we're already self-aware, how big a leap is it to start keeping track of other selves?

This is a legitimate question, and like almost any important question that has to do with human consciousness, the jury is still out on it. (To put it bluntly, the jury hasn't even been convened yet.) But some recent research suggests that the question has it exactly backward—at least as far as the evolution of the brain goes. We're conscious of our own thoughts, the argument suggests, only because we first evolved the capacity to imagine the thoughts of others. A mind that can't imagine external mental states is like that of a three-year-old who projects his or her own knowledge onto everyone in the room: it's all pencils, no Smarties. But as philosophers have long noted, to be self-aware means recognizing the limits of selfhood. You can't step back and reflect on your own thoughts without recognizing that your thoughts are finite, and that other combinations of thoughts are possible. We know both that the pencils are in the box, and that newcomers will still expect Smarties. Without those limits, we'd certainly be aware of the world in some basic sense—it's just that we wouldn't be aware of ourselves, because there'd be nothing to compare ourselves to. The self and the world would be indistinguishable.

The notion of being aware of the world and yet not somehow self-aware seems like a logical impossibility. It feels as if our own selfhood would scream out at us after a while, "Hey, look at me! Forget about those Smarties—I'm thinking here! Pay attention to me!" But without any recognition of other thoughts to measure our own thoughts against, our own mental state wouldn't even register as something to think about. It may well be that self-awareness only jumps out to us because we're naturally inclined to project into the minds of others. But in a mind incapable of imagining the contents of other minds, that self-reflection wouldn't be missed. It would be like being raised on a planet without satellites, and missing the moon.

We all have a region of the retina where the optic nerve attaches to the back of the eye, connecting it to the visual cortex. No rods or cones lie within this area, so the corresponding area of our visual field is incapable of registering light. While this blind spot has a surprisingly large diameter (about six degrees across), its effects are minimal because of our stereo vision: the blind spots in each eye don't overlap, and so information from one eye fills in the information lacking in the other. But you can detect the existence of the blind spot by closing one eye and focusing the other on a specific word in this sentence. Place your index finger over the word, and then slowly move your finger to the right, while keeping your gaze locked on the word. After a few inches, you'll notice that the tip of your finger fades from view. It's an uncanny feeling, but what's even more uncanny is that your visual field suffers from this strange disappearing act anytime you close one eye. And yet you don't notice the absence at all—there's no sense of information being lost, no dark patch, no blurriness. You have to do an elaborate trick with your finger to notice that something's missing. It's not the lack of visual information that should startle us; it's that we have such a hard time noticing the lack.

The blind spot doesn't jump out at us because the brain isn't expecting information from that zone, and there's no other signal struggling to fill in the blanks for us, or pointing out that there is a blank in the first place. As the philosopher Daniel Dennett describes it, there are no centers of the visual cortex responsible for receiving reports from this area, so when no reports arrive, there is no one to complain. "An absence of information is not the same as information about an absence." We're blind to our blindness.

Perhaps the same goes for the theory of other minds. Without that awareness of other mental states reminding us of our own limitations, we might well be aware of the world, yet unaware of our own mental life. The lack of self-awareness wouldn't jump out at us for the same reason that the blind spot remains invisible: there's no feedback mechanism to sound the alarm that something's missing. Only when we begin to speculate on the mental life of others do we discover that we have a mental life ourselves.

If self-awareness is a by-product of our mind-reading skills, what propelled us to start building those theories of other minds in the first place? That answer comes more easily. The battle of nature-versus-nurture may have many skirmishes to come, but by now only the most blinkered anti-essentialist disagrees with the premise that we are social animals by nature. The great preponderance of human populations worldwide—both modern and "primitive"—live in extended bands and form complex social systems. Among the apes, we are an anomaly in this respect: only the chimps share our compulsive mixed-sex socializing. (Orangutans live mostly solitary lives; gibbons as isolated couples; gorillas travel in harems dominated by a single male.) That social complexity demands formidable mental skills: instead of outfoxing a single predator, or caring for a single infant, humans mentally track the behavior of dozens of individuals, altering their own behavior based on that information. Some evolutionary psychologists believe that the extraordinary expansion of brain size between Homo habilis and Homo sapiens (brain mass trebled over the 2-million-year period that separates the two species) was at least in part triggered by an arms race between Pleistocene-era extroverts. If successfully passing on your genes to another generation depended on a nuanced social intelligence that competed with other social intellects for reproductive privileges, then it's not hard to imagine natural selection generating a Machiavellian mental toolbox in a surprisingly short period.

The group element may even explain the explosion in sheer cranial size: social complexity is a problem that scales well—build a module that can analyze one person's mind, and all you need to do is throw more resources at the problem, and you can analyze a dozen minds with the same tools. The brain didn't need to invent any complicated new routines once it figured out how to read a single mind—it just needed to devote more processing power. That power came in the form of brain mass: more neurons to model the behavior of other brains, which themselves contained more neurons, for the same reason. It's a classic case of positive feedback, only it seems to have run into a ceiling of 150 people, according to the latest anthropological studies. We have a natural gift for building theories of other minds, so long as there aren't too many of them.
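
That positive-feedback loop is easy to caricature in code. The toy model below is my own illustration, not anything from the book: every number and name in it (social_arms_race, the growth rates, the starting values) is invented, and only the shape of the dynamic matters. Modeling capacity and trackable group size amplify each other until the loop hits a Dunbar-style ceiling of 150.

```python
# Toy model of the social "arms race" sketched above. All parameters are
# invented for illustration; only the qualitative shape matters: capacity
# and group size amplify each other (positive feedback) until the group
# size runs into a hard ceiling of 150.

def social_arms_race(generations=40, ceiling=150.0):
    group_size = 5.0   # individuals each mind must track (assumed start)
    capacity = 1.0     # arbitrary units of mind-modeling power
    history = []
    for _ in range(generations):
        # Selection favors more capacity the larger the group to track...
        capacity *= 1.0 + 0.02 * (group_size / ceiling)
        # ...and more capacity makes larger stable groups viable, up to the ceiling.
        group_size = min(ceiling, group_size * (1.0 + 0.1 * capacity))
        history.append((group_size, capacity))
    return history

if __name__ == "__main__":
    for gen, (size, cap) in enumerate(social_arms_race()):
        print(f"generation {gen:2d}: group {size:6.1f}  capacity {cap:5.2f}")
```

Run it and the curve is the story: roughly geometric growth that flattens abruptly once the ceiling is reached.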

Perhaps if human evolution had continued on for another million years or so, we'd all be mentally modeling the behavior of entire cities. But for whatever reason, we stopped short at 150, and that's where we remained—until the new technologies of urban living pushed our collectivities beyond the magic number. Those oversize communities appeared too quickly for our minds to adapt to them using the tools of natural selection, and so we hit upon another solution, one engineered by the community itself, and not by its genes. We started building neighborhoods, groups within groups. When our lived communities extended beyond the ceiling of human comprehension, we started building new floors.

Mirror neurons and mind reading have an immense amount to teach us about our talents and limitations as a species, and there's no doubt we'll be untangling the "theory of other minds" for years to come. Whatever the underlying mechanism turns out to be, the faculty of mind reading - and its close relation, self-awareness - is clearly an emergent property of the brain's neural networks. We don't know precisely how that higher-level behavior comes into being, but we do know that it is conjured up by the local, feedback-heavy interactions of unwitting agents, by the complex adaptive system that we call the human mind. No individual neuron is sentient, and yet somehow the union of billions of neurons creates self-awareness. It may turn out that the brain gets to that self-awareness by first predicting the behavior of neurons residing in other brains—the way, for instance, our brains are hardwired to predict the behavior of light particles and sound waves. But whichever one came first - the extroverted chicken or the self-aware egg - those faculties are prime examples of emergence at work. You wouldn't be able to read these words, or speculate about the inner workings of your mind, were it not for the protean force of emergence.

But there are limits to that force, and to its handiwork. Natural selection endowed us with cognitive tools uniquely equipped to handle the social complexity of Stone Age groups on the savannas of Africa, but once the agricultural revolution introduced the first cities along the banks of the Tigris-Euphrates valley, the Homo sapiens mind naturally recoiled from the sheer scale of those populations. A mind designed to handle the maneuverings of fewer than two hundred individuals suddenly found itself immersed in a community of ten or twenty thousand. To solve that problem, we once again leaned on the powers of emergence, although the solution resided one level up from the individual human brain: instead of looking to swarms of neurons to deal with social complexity, we looked to swarms of individual humans. Instead of reverberating neuronal circuits, neighborhoods emerged out of traffic patterns. By following one another's footprints, and learning from one another's behavior, we built another floor on top of the ceiling imposed on us by our frontal lobes. Managing complexity became a problem to be solved on the level of the city itself.
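
That "neighborhoods out of traffic patterns" idea can be made concrete with a toy simulation. The sketch below follows Thomas Schelling's classic relocation game rather than anything in the book, and its grid size, threshold, and names are all invented: each agent consults only its eight immediate neighbors and relocates if too few resemble it, yet recognizable districts appear at the global scale.

```python
import random

# Schelling-style toy model of neighborhoods emerging from purely local
# decisions (an illustration, not a model from the book). Agents of two
# kinds ("A" and "B") live on a wrap-around grid with some empty cells.

SIZE = 20            # grid is SIZE x SIZE
EMPTY_FRACTION = 0.1
THRESHOLD = 0.4      # minimum fraction of like neighbors an agent tolerates

def make_grid():
    n_empty = int(SIZE * SIZE * EMPTY_FRACTION)
    half = (SIZE * SIZE - n_empty) // 2
    cells = [None] * n_empty + ["A"] * half + ["B"] * (SIZE * SIZE - n_empty - half)
    random.shuffle(cells)
    return [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def unhappy(grid, r, c):
    """An agent is unhappy if too few of its eight neighbors are like it."""
    me = grid[r][c]
    if me is None:
        return False
    like = total = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                neighbor = grid[(r + dr) % SIZE][(c + dc) % SIZE]
                if neighbor is not None:
                    total += 1
                    like += neighbor == me
    return total > 0 and like / total < THRESHOLD

def step(grid):
    """Move each unhappy agent to a random empty cell; return how many moved."""
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(grid, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(empties)
    for (r, c), (er, ec) in zip(movers, empties):
        grid[er][ec], grid[r][c] = grid[r][c], None
    return len(movers)

grid = make_grid()
for _ in range(100):
    if step(grid) == 0:   # everyone satisfied: the districts are stable
        break
print("\n".join("".join(cell or "." for cell in row) for row in grid))
```

No agent in the model intends a neighborhood; the districts are a property of the grid as a whole, which is exactly the sense of local interactions producing global order at work here.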

Over the last decade we have run up against another ceiling. We are now connected to hundreds of millions of people via the vast labyrinth of the World Wide Web. A community of that scale requires a new solution, one beyond our brains or our sidewalks, but once again we look to self-organization for the tools, this time built out of the instruction sets of software: Alexa, Slashdot, Epinions, Everything2, Freenet.
Our brains first helped us navigate larger groups of fellow humans by allowing us to peer into the minds of other individuals and to recognize patterns in their behavior.
The city allowed us to see patterns of group behavior by recording and displaying those patterns in the form of neighborhoods.
Now the latest software scours the Web for patterns of online activity, using feedback and pattern-matching tools to find neighbors in an impossibly oversize population. At first glance, these three solutions - brains, cities, and software - would seem to belong to completely different orders of experience. But as we have seen over the preceding pages, they are all instances of self-organization at work, local interactions leading to global order. They exist on a continuum of sorts. The materials change as you jump from the scale of a hundred humans to a million to 100 million. But the system remains the same.

Amazingly, this process has come full circle. Hundreds of thousands - if not millions - of years ago, our brains developed a feedback mechanism that enabled them to construct theories of other minds. Today, we are beginning to create software applications that are capable of developing a theory of our minds. All those fluid, self-organizing programs tracking our tastes and interests, and measuring them against the behavior of larger populations - these programs are the beginning of a progression that will, in a matter of years, lead to a world where we regularly interact with media that seems to know us in some fundamental way. Software will recognize our habits, anticipate our needs, adapt to our changing moods. The first generation of emergent software - programs like SimCity and StarLogo - displayed a captivatingly organic quality; they seemed more like life-forms than the sterile instruction sets and command lines of early code. The next generation will take that organic feel one step further: the new software will use the tools of self-organization to build models of our own mental states. These programs won't be self-aware, and they won't pass any Turing tests, but they will make the media experiences we've grown accustomed to seem autistic in comparison. They will be mind readers.

From a certain angle, this is an old story. The great software revolution of the seventies and eighties - the invention of the graphic interface - was itself predicated on a theory of other minds. The design principles behind the graphic interface were based on predictions about the general faculties of the human perceptual and cognitive systems. Our spatial memory, for instance, is more powerful than our textual memory, so graphic interfaces emphasize icons over commands. We have a natural gift for associative thinking, thanks to the formidable pattern-matching skills of the brain's distributed network, so the graphic interface borrowed visual metaphors from the real world: desktops, folders, trash cans. Just as certain drugs are designed specifically as keys to unlock the neurochemistry of our gray matter, the graphic interface was designed to exploit the innate talents of the human mind and to rely as little as possible on our shortcomings. If the ants had been the first species to invent personal computers, they would have no doubt built pheromone interfaces, but because we inherited the exceptional visual skills of the primate family, we have adopted spatial metaphors on our computer screens.

To be sure, the graphic interface's mind-reading talents are ruthlessly generic. Scrolling windows and desktop metaphors are based on predictions about a human mind, not your mind. They're one-size-fits-all theories, and they lack any real feedback mechanism to grow more familiar with your particular aptitudes. What's more, their predictions are decidedly the product of top-down engineering. The software didn't learn on its own that we're a visual species; researchers at Xerox PARC and MIT already knew about our visual memory, and they used that knowledge to create the first generation of spatial metaphors. But these limitations will soon go the way of vacuum tubes and punch cards. Our software will develop nuanced and evolving models of our individual mental states, and that learning will emerge out of a bottom-up system. And while this software will deliver information tailored to our interests and appetites, its mind-reading skills will be far less insular than today's critics would have us believe. You may read something like the "Daily Me" in the near future, but that digital newspaper will be compiled by tracking the interests and reading habits of millions of other humans. Interacting with emergent software is already more like growing a garden than driving a car or reading a book. In the near future, though, you'll be working alongside a million other gardeners. We will have more powerful personalization tools than we ever thought possible - but those tools will be created by massive groups scattered all across the world. When Pattie Maes first began developing recommendation software at MIT in the early nineties, she called it collaborative filtering. The term has only grown more resonant. In the next few years, we will have personalized filters beyond our wildest dreams. But we will also be collaborating on a scale rivaled only by the cities we first started building six thousand years ago.
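
Collaborative filtering of the kind Maes pioneered is simple to sketch. The toy version below is an invented illustration, not a description of her software or of any real recommender: the ratings data, the cosine-similarity measure, and names like recommend are all my assumptions. It finds a reader's nearest neighbors in taste and borrows their ratings for items the reader hasn't seen, the "million other gardeners" mechanism in miniature.

```python
from math import sqrt

# Toy user-based collaborative filter (illustrative only; data and names
# are invented). Each user's tastes are a sparse dict of item -> rating;
# we recommend what a user's most similar "neighbors" liked.

ratings = {
    "ann":   {"jazz": 5, "ants": 4, "cities": 2},
    "bob":   {"jazz": 4, "ants": 5, "brains": 4},
    "carol": {"cities": 5, "brains": 2, "jazz": 1},
}

def similarity(u, v):
    """Cosine similarity over the items two users have both rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm

def recommend(user, k=2):
    """Score unseen items by similarity-weighted ratings of the k nearest users."""
    mine = ratings[user]
    neighbors = sorted(
        ((similarity(mine, other), name)
         for name, other in ratings.items() if name != user),
        reverse=True)[:k]
    scores = {}
    for sim, name in neighbors:
        for item, rating in ratings[name].items():
            if item not in mine:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores.items(), key=lambda kv: -kv[1])

print(recommend("ann"))   # suggests "brains": ann's nearest neighbors liked it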

Those collaborations will build more than just music-recommendation tools and personalized newspapers. Our new ability to capture the power of emergence in code will be closer to the revolution unleashed when we figured out how to distribute electricity a century ago. Almost every region of our cultural life was transformed by the power grid; the power of self-organization—coupled with the connective technology of the Internet—will usher in a revolution every bit as significant. Applied emergence will go far beyond simply building more user-friendly applications. It will transform our very definition of a media experience and challenge many of our habitual assumptions about the separation between public and private life. A few decades from now, the forces unleashed by the bottom-up revolution may well dictate that we redefine intelligence itself, as computers begin to convincingly simulate the human capacity for open-ended learning. But in the next five years alone, we'll have plenty of changes to keep us busy. Our computers and television sets and refrigerators won't be thinking themselves, but they'll have a pretty good idea what we're thinking about.

