Human beings are innate mind readers. Our skill at imagining other people's mental states ranks up there with our knack for language and our opposable thumbs. It comes so naturally to us and has engendered so many corollary effects that it's hard for us to think of it as a special skill at all. And yet most animals lack the mind-reading skills of a four-year-old child. We come into the world with a genetic aptitude for building "theories of other minds," and adjusting those theories on the fly, in response to various forms of social feedback.
In the mid-eighties, the UK psychologists Simon Baron-Cohen, Alan Leslie, and Uta Frith conducted a landmark experiment to test the mind-reading skills of young children. They concealed a set of pencils within a box of Smarties, the British candy. They asked a series of four-year-olds to open the box and make the unhappy discovery of the pencils within. The researchers then closed the box up and ushered a grown-up into the room. The children were then asked what the grown-up was expecting to find within the Smarties box - not what the children themselves had found, mind you, but what the newcomer was expecting to find.
Our closest evolutionary cousins, the chimpanzees, share our aptitude for mind reading. The Dutch primatologist Frans de Waal tells a story of calculating sexual intrigue in his engaging, novel-like study, Chimpanzee Politics. A young, low-ranking male (named, appropriately enough, Dandy) decides to make a play for one of the females in the group. Being a chimpanzee, he opts for the usual chimpanzee method of expressing sexual attraction, which is to sit with your legs apart within eyeshot of your "objet de désir" and reveal your erection. (Try that approach in human society, of course, and you'll usually end up with a restraining order.) During this particular frisky display, Luit, one of the high-ranking males, happens upon the "courtship" scene. Dandy deftly uses his hands to conceal his erection so that Luit can't see it, but the female chimp can. It's the chimp equivalent of the adulterer saying, "This is just our little secret, right?"
De Waal's story - one of many comparable instances of primate intrigue - showcases our close cousins' ability to model the mental states of other chimps. As in the Smarties study, Dandy is performing a complicated social calculus in his concealment strategy: he wants the female chimp to know that he's enamored of her, but wants to hide that information from Luit. That kind of thinking seems natural to us (because it is!), but to think like that you have to be capable of modeling the contents of other primate minds. If Dandy could speak, his summary of the situation might read something like this: she knows what I'm thinking; he doesn't know what I'm thinking; she knows that I don't want him to know what I'm thinking. In that crude act of concealment, Dandy demonstrates that he possesses a gift for social imagination missing in 99.99 percent of the world's living creatures. To make that gesture, he must at some level be aware that the world is full of imperfectly shared information, and that other individuals may have a perspective on the world that differs from his. Most important (and most conniving), he's capable of exploiting that difference for his own benefit. That exploitation - a furtive pass concealed from the alpha male - is only possible because he is capable of building theories of other minds.
Is it conceivable that this skill simply derives from a general increase in intelligence? Could it be that humans and their close cousins are just smarter than all those other species who flunk the mind-reading test? In other words, is there something specific to our social intelligence, something akin to a module hardwired into the brain's CPU - or is the theory of other minds just an idea that inevitably occurs to animals who reach a certain threshold of general intelligence?
In the early nineties, the Italian neuroscientist Giacomo Rizzolatti and his colleagues discovered cells in the premotor cortex of macaque monkeys that fired both when a monkey performed an action and when it merely watched another individual perform the same action. Rizzolatti called these unusual cells "mirror neurons," and since his announcement of the discovery, the neuroscience community has been abuzz with speculation about the significance of the "monkey see, monkey do" phenomenon. It's conceivable that mirror neurons exist for more subtle, introspective mental states - such as desire or rage or tedium - and that those neurons fire when we detect signs of those states in others. That synchronization may well be the neurological root of mind reading, which would mean that our skills were more than just an offshoot of general intelligence, but relied instead on our brains being wired a specific way. We know already that specific regions are devoted to visual processing, speech, and other cognitive skills. Rizzolatti's discovery suggests that we may also have a module for mind reading.
The modular theory is also supported by evidence of what happens when that wiring is damaged. Many neuroscientists now believe that autistics suffer from a specific neurological disorder that inhibits their ability to build theories of other minds - a notion that will instantly ring true for anyone who has experienced the strange emotional distance, the radical introversion, that one finds in interacting with an autistic person. Autism, the argument goes, stems from an inability to project outside one's own head and imagine the mental life of others. And yet autistics regularly fare well on many tests of general intelligence and often display exceptional talents at math and pattern recognition. Their disorder is not a disorder of lowered intellect. Rather, autistics lack a particular skill, the way others lack the faculty of sight or hearing. They are mind blind.
This is a legitimate question, and like almost any important question that has to do with human consciousness, the jury is still out on it. (To put it bluntly, the jury hasn't even been convened yet.) But some recent research suggests that the question has it exactly backward - at least as far as the evolution of the brain goes. We're conscious of our own thoughts, the argument suggests, only because we first evolved the capacity to imagine the thoughts of others. A mind that can't imagine external mental states is like that of a three-year-old who projects his or her own knowledge onto everyone in the room: it's all pencils, no Smarties. But as philosophers have long noted, to be self-aware means recognizing the limits of selfhood. You can't step back and reflect on your own thoughts without recognizing that your thoughts are finite, and that other combinations of thoughts are possible. We know both that the pencils are in the box, and that newcomers will still expect Smarties. Without those limits, we'd certainly be aware of the world in some basic sense - it's just that we wouldn't be aware of ourselves, because there'd be nothing to compare ourselves to. The self and the world would be indistinguishable.
The notion of being aware of the world and yet not somehow self-aware seems like a logical impossibility. It feels as if our own selfhood would scream out at us after a while, "Hey, look at me! Forget about those Smarties - I'm thinking here! Pay attention to me!" But without any recognition of other thoughts to measure our own thoughts against, our own mental state wouldn't even register as something to think about. It may well be that self-awareness only jumps out to us because we're naturally inclined to project into the minds of others. But in a mind incapable of imagining the contents of other minds, that self-reflection wouldn't be missed. It would be like being raised on a planet without satellites, and missing the moon.
We all have a blind spot: the region of the retina where the optic nerve exits the eye on its way to the visual cortex. No rods or cones lie within this area, so the corresponding area of our visual field is incapable of registering light. While this blind spot has a surprisingly large diameter (about six degrees across), its effects are minimal because of our stereo vision: the blind spots in each eye don't overlap, and so information from one eye fills in the information lacking in the other. But you can detect the existence of the blind spot by closing one eye and focusing the other on a specific word in this sentence. Place your index finger over the word, and then slowly move your finger to the right, while keeping your gaze locked on the word. After a few inches, you'll notice that the tip of your finger fades from view. It's an uncanny feeling, but what's even more uncanny is that your visual field suffers from this strange disappearing act anytime you close one eye. And yet you don't notice the absence at all - there's no sense of information being lost, no dark patch, no blurriness. You have to do an elaborate trick with your finger to notice that something's missing. It's not the lack of visual information that should startle us; it's that we have such a hard time noticing the lack.
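For a sense of scale, the six-degree figure can be converted into an on-page width with a little trigonometry. The 40-centimeter reading distance below is an assumed typical value, not something stated in the text; it exists only to make the arithmetic concrete.

```python
# Back-of-envelope check of how wide a six-degree blind spot is on the page.
# Width = 2 * distance * tan(angle / 2), the chord subtended at the eye.
from math import tan, radians

def blind_spot_width(distance_cm, angle_deg=6.0):
    """Width on the page subtended by the blind spot at a given viewing distance."""
    return 2 * distance_cm * tan(radians(angle_deg) / 2)

print(round(blind_spot_width(40), 1))  # roughly 4.2 cm at an assumed 40 cm reading distance
```

At ordinary reading distance, then, the blind spot swallows a patch of the page several centimeters wide - large enough to hide a fingertip, as the demonstration above shows.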
The blind spot doesn't jump out at us because the brain isn't expecting information from that zone, and there's no other signal struggling to fill in the blanks for us, or pointing out that there is a blank in the first place. As the philosopher Daniel Dennett describes it, there are no centers of the visual cortex responsible for receiving reports from this area, so when no reports arrive, there is no one to complain. "An absence of information is not the same as information about an absence." We're blind to our blindness.
Perhaps the same goes with the theory of other minds. Without that awareness of other mental states reminding us of our own limitations, we might well be aware of the world, yet unaware of our own mental life. The lack of self-awareness wouldn't jump out at us for the same reason that the blind spot remains invisible: there's no feedback mechanism to sound the alarm that something's missing. Only when we begin to speculate on the mental life of others do we discover that we have a mental life ourselves.
If self-awareness is a by-product of our mind-reading skills, what propelled us to start building those theories of other minds in the first place? That answer comes more easily. The battle of nature-versus-nurture may have many skirmishes to come, but by now only the most blinkered anti-essentialist disagrees with the premise that we are social animals by nature. The great preponderance of human populations worldwide - both modern and "primitive" - live in extended bands and form complex social systems. Among the apes, we are an anomaly in this respect: only the chimps share our compulsive mixed-sex socializing. (Orangutans live mostly solitary lives; gibbons as isolated couples; gorillas travel in harems dominated by a single male.) That social complexity demands formidable mental skills: instead of outfoxing a single predator, or caring for a single infant, humans mentally track the behavior of dozens of individuals, altering their own behavior based on that information. Some evolutionary psychologists believe that the extraordinary expansion of brain size between Homo habilis and Homo sapiens (brain mass trebled over the 2-million-year period that separates the two species) was at least in part triggered by an arms race between Pleistocene-era extroverts. If successfully passing on your genes to another generation depended on a nuanced social intelligence that competed with other social intellects for reproductive privileges, then it's not hard to imagine natural selection generating a Machiavellian mental toolbox in a surprisingly short period.
Mirror neurons and mind reading have an immense amount to teach us about our talents and limitations as a species, and there's no doubt we'll be untangling the "theory of other minds" for years to come. Whatever the underlying mechanism turns out to be, the faculty of mind reading - and its close relation, self-awareness - is clearly an emergent property of the brain's neural networks. We don't know precisely how that higher-level behavior comes into being, but we do know that it is conjured up by the local, feedback-heavy interactions of unwitting agents, by the complex adaptive system that we call the human mind. No individual neuron is sentient, and yet somehow the union of billions of neurons creates self-awareness. It may turn out that the brain gets to that self-awareness by first predicting the behavior of neurons residing in other brains - the way, for instance, our brains are hardwired to predict the behavior of light particles and sound waves. But whichever one came first - the extroverted chicken or the self-aware egg - those faculties are prime examples of emergence at work. You wouldn't be able to read these words, or speculate about the inner workings of your mind, were it not for the protean force of emergence.
But there are limits to that force, and to its handiwork. Natural selection endowed us with cognitive tools uniquely equipped to handle the social complexity of Stone Age groups on the savannas of Africa, but once the agricultural revolution introduced the first cities along the banks of the Tigris-Euphrates valley, the Homo sapiens mind naturally recoiled from the sheer scale of those populations. A mind designed to handle the maneuverings of fewer than two hundred individuals suddenly found itself immersed in a community of ten or twenty thousand. To solve that problem, we once again leaned on the powers of emergence, although the solution resided one level up from the individual human brain: instead of looking to swarms of neurons to deal with social complexity, we looked to swarms of individual humans. Instead of reverberating neuronal circuits, neighborhoods emerged out of traffic patterns. By following one another's footprints, and learning from one another's behavior, we built another ceiling on top of the one imposed on us by our frontal lobes. Managing complexity became a problem to be solved on the level of the city itself.
Over the last decade we have run up against another ceiling. We are now connected to hundreds of millions of people via the vast labyrinth of the World Wide Web. A community of that scale requires a new solution, one beyond our brains or our sidewalks, but once again we look to self-organization for the tools, this time built out of the instruction sets of software: Alexa, Slashdot, Epinions, Everything2, Freenet.
From a certain angle, this is an old story. The great software revolution of the seventies and eighties - the invention of the graphic interface - was itself predicated on a theory of other minds. The design principles behind the graphic interface were based on predictions about the general faculties of the human perceptual and cognitive systems. Our spatial memory, for instance, is more powerful than our textual memory, so graphic interfaces emphasize icons over commands. We have a natural gift for associative thinking, thanks to the formidable pattern-matching skills of the brain's distributed network, so the graphic interface borrowed visual metaphors from the real world: desktops, folders, trash cans. Just as certain drugs are designed specifically as keys to unlock the neurochemistry of our gray matter, the graphic interface was designed to exploit the innate talents of the human mind and to rely as little as possible on our shortcomings. If the ants had been the first species to invent personal computers, they would have no doubt built pheromone interfaces, but because we inherited the exceptional visual skills of the primate family, we have adopted spatial metaphors on our computer screens.
To be sure, the graphic interface's mind-reading talents are ruthlessly generic. Scrolling windows and desktop metaphors are based on predictions about a human mind, not your mind. They're one-size-fits-all theories, and they lack any real feedback mechanism to grow more familiar with your particular aptitudes. What's more, their predictions are decidedly the product of top-down engineering. The software didn't learn on its own that we're a visual species; researchers at Xerox PARC and MIT already knew about our visual memory, and they used that knowledge to create the first generation of spatial metaphors. But these limitations will soon go the way of vacuum tubes and punch cards. Our software will develop nuanced and evolving models of our individual mental states, and that learning will emerge out of a bottom-up system. And while this software will deliver information tailored to our interests and appetites, its mind-reading skills will be far less insular than today's critics would have us believe. You may read something like the "Daily Me" in the near future, but that digital newspaper will be compiled by tracking the interests and reading habits of millions of other humans. Interacting with emergent software is already more like growing a garden than driving a car or reading a book. In the near future, though, you'll be working alongside a million other gardeners. We will have more powerful personalization tools than we ever thought possible - but those tools will be created by massive groups scattered all across the world. When Pattie Maes first began developing recommendation software at MIT in the early nineties, she called it collaborative filtering. The term has only grown more resonant. In the next few years, we will have personalized filters beyond our wildest dreams. But we will also be collaborating on a scale rivaled only by the cities we first started building six thousand years ago.
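Maes's actual systems are not described here, but the core idea of collaborative filtering - scoring items for you by the ratings of people whose tastes resemble yours - can be sketched in a few lines. All of the users, items, and ratings below are invented for illustration; real systems apply the same arithmetic to millions of people.

```python
# A minimal sketch of user-based collaborative filtering.
# Every name and rating here is hypothetical illustration data.
from math import sqrt

# Each user's ratings of items, on a 1-5 scale.
ratings = {
    "ada":   {"jazz": 5, "blues": 4, "techno": 1},
    "ben":   {"jazz": 4, "blues": 5, "folk": 4},
    "carol": {"techno": 5, "house": 4, "jazz": 1},
}

def similarity(a, b):
    """Cosine similarity over the items two users have both rated."""
    shared = set(ratings[a]) & set(ratings[b])
    if not shared:
        return 0.0
    dot = sum(ratings[a][i] * ratings[b][i] for i in shared)
    norm_a = sqrt(sum(ratings[a][i] ** 2 for i in shared))
    norm_b = sqrt(sum(ratings[b][i] ** 2 for i in shared))
    return dot / (norm_a * norm_b)

def recommend(user):
    """Rank items the user hasn't seen, weighted by like-minded users' ratings."""
    scores = {}
    for other in ratings:
        if other == user:
            continue
        sim = similarity(user, other)
        for item, rating in ratings[other].items():
            if item not in ratings[user]:
                scores[item] = scores.get(item, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("ada"))  # ada's tastes resemble ben's, so "folk" outranks "house"
```

No single rating in the table "knows" anything about ada; the recommendation emerges from the aggregate pattern of everyone's tastes - which is precisely the bottom-up, million-gardener dynamic the passage describes.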
Those collaborations will build more than just music-recommendation tools and personalized newspapers. Our new ability to capture the power of emergence in code will be closer to the revolution unleashed when we figured out how to distribute electricity a century ago. Almost every region of our cultural life was transformed by the power grid; the power of self-organization - coupled with the connective technology of the Internet - will usher in a revolution every bit as significant. Applied emergence will go far beyond simply building more user-friendly applications. It will transform our very definition of a media experience and challenge many of our habitual assumptions about the separation between public and private life. A few decades from now, the forces unleashed by the bottom-up revolution may well dictate that we redefine intelligence itself, as computers begin to convincingly simulate the human capacity for open-ended learning. But in the next five years alone, we'll have plenty of changes to keep us busy. Our computers and television sets and refrigerators won't be thinking themselves, but they'll have a pretty good idea what we're thinking about.