Carl Zimmer
Soul Made Flesh
The Discovery of the Brain - and How It Changed the World
Heinemann 2004

pg 276

While the brain may be a collection of modules, none of them can work alone. During any mental task, information reverberates in a far-flung network of regions that light up like constellations on an MRI scan. These networks are constantly changing over scales of seconds, minutes, and decades. Connections grow strong or weak; old nodes disappear and new ones take over.

This flexibility is crucial, because fueling the brain costs us dearly. The activity of a single neuron uses up so much energy that less than 1 percent of the neurons in the cortex can be active at any moment. With such a limited budget of energy, the brain simply cannot take in all the information available to its senses. It must make up for its shortcomings with elegant strategies for picking out only what matters. But what is important one second may become unimportant the next, and so the brain also needs to continually rearrange its networks, refocusing its attention on perceptions that do the best job of predicting the future. A driver may focus his attention on a driveway because he knows that his neighbor has a habit of lurching out into traffic. His brain makes the neurons in that part of his field of vision more sensitive and boosts their signal by as much as 30 percent. Most of the time the brain refocuses itself automatically without our awareness. Willis's sensitive soul is at work, altering the pathways that the animal spirits travel through the brain.

Thomas Willis (1621-1675, anatomist and physician) believed that these pathways were altered not through the influence of humors or the stars, but by chemicals. Today neuroscientists are figuring out exactly what those chemicals are. One of the most important is dopamine, which is produced by a few thousand neurons buried deep in the brain stem. When an animal unexpectedly finds a reward - be it food, water, or sex - these neurons release a surge of dopamine from their thousands of branches. Suddenly many parts of the brain become focused on the reward and how the animal can find it again. With more practice, an animal begins to associate certain cues with the reward. Before long the dopamine is triggered by the cues, rather than the reward. It gives the animal an invigorating feeling of anticipation, a notion of cause and effect.
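
This migration of the dopamine surge from the reward itself to the cue that predicts it is exactly the behavior produced by temporal-difference learning, a standard reinforcement-learning algorithm often used to model these neurons. The Python sketch below is a minimal illustration, not a model from the book; the learning rate and reward size are assumed values.

```python
# Minimal temporal-difference sketch: the prediction error plays the
# role of the dopamine surge. Learning rate and reward size are
# illustrative assumptions.

alpha = 0.3       # learning rate (assumed)
reward = 1.0      # the unexpected reward (food, water, sex)
v_cue = 0.0       # how much reward the cue currently predicts

for trial in range(1, 21):
    # The cue arrives unannounced, so its learned value is itself
    # reported as a surprise signal:
    surge_at_cue = v_cue
    # At the reward, the surprise is whatever the cue failed to predict:
    surge_at_reward = reward - v_cue
    # The error signal trains the cue's predictive value:
    v_cue += alpha * surge_at_reward
    if trial in (1, 5, 20):
        print(f"trial {trial:2d}: surge at cue {surge_at_cue:.2f}, "
              f"at reward {surge_at_reward:.2f}")

# Trial 1: the surge fires at the reward. Trial 20: the reward is
# fully predicted, and the surge has moved to the cue - the
# "invigorating feeling of anticipation" described above.
```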

Like other animals, we humans rely on dopamine to do anything new: to learn how to walk, to shoot a basket, to get a can of iced tea from vending machines in Tokyo even when we can't read Japanese. And as we gain more expertise, dopamine gives us its feeling of exhilaration and happiness not with the reward itself but earlier, with the things we know presage the reward. A gambler shouts for joy not when he is cashing in his chips, but at the craps table, when the dice turn up seven.

Most of the branches of dopamine-producing neurons reach into the prefrontal cortex, the outermost layer of the front third of the human brain. This region is crucial for figuring out cause and effect and using that information to reach a goal. When the prefrontal cortex is damaged, the fabric of reality begins to unravel. People with prefrontal cortex damage may still be able to add cream to their coffee and stir it with a spoon; they may just stir first and pour later.

Simple games can reveal how the prefrontal cortex brings order to our lives. One involves dealing a deck of cards decorated with diamonds and circles and other shapes. Volunteers look at the cards and match pairs of them without being told what the rule of the game is. The rule might be to match cards according to color. The scientists tell the volunteers if their match is right or wrong, and they try again. It doesn't take long for most people to figure out the rule, and brain scans show that they use their prefrontal cortex to do it. Once volunteers figure out what the rule is, the prefrontal cortex calms down. If the scientists change the rule without telling the volunteers - perhaps now cards have to be matched by shape - the volunteers suddenly find themselves making wrong choices. Their prefrontal cortex switches on again.

Model of how the prefrontal cortex works

Earl Miller of the Massachusetts Institute of Technology and Jonathan Cohen of Princeton University have used this game and others like it to put together a particularly persuasive model of how the prefrontal cortex works. As a card-sorting game begins, information starts to flow from the senses to the prefrontal cortex. The neurons in the prefrontal cortex influence the choice a volunteer makes. Pairing cards correctly triggers a surge of dopamine, which strengthens the connections between the neurons. With each trial, more neurons in the prefrontal cortex join into the card-sorting network, turning the simple association between pairs of cards into a general rule. As this rule proves reliable, a correct guess no longer brings a jolt of dopamine. The prefrontal cortex becomes quiet, and the brain now automatically follows the newly constructed pathway from cue to response.
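
As a sketch of the idea (not Miller and Cohen's published model), the toy program below lets two candidate rules, "match by color" and "match by shape", compete for control of the choice. A correct pairing acts like the dopamine surge and strengthens whichever rule was used; the weights and learning rate are assumed for illustration.

```python
# Toy dopamine-gated rule learning for the card-sorting game.
# An illustrative sketch only, not Miller and Cohen's actual model.
import random

random.seed(0)
ALPHA = 0.25                               # assumed learning rate
weights = {"color": 0.5, "shape": 0.5}     # strength of each candidate rule

def play_block(true_rule, trials=30):
    for _ in range(trials):
        # Choose a rule in proportion to its current strength.
        p_color = weights["color"] / (weights["color"] + weights["shape"])
        chosen = "color" if random.random() < p_color else "shape"
        # Reward signal: +1 for a correct match, -1 for an error.
        outcome = 1.0 if chosen == true_rule else -1.0
        # Strengthen (or weaken) the connection that drove the choice.
        weights[chosen] = min(1.0, max(0.05, weights[chosen] + ALPHA * outcome))

play_block("color")
print("after the color block:", weights)   # the color rule dominates
play_block("shape")                        # unannounced rule switch
print("after the shape block:", weights)   # errors rebuild the shape rule
```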

When the rules switch and volunteers start getting wrong answers, a special part of the brain detects the conflict. A band of neurons in the cleft of the brain's hemispheres becomes active. Known as the anterior cingulate cortex, it sends out signals to the prefrontal cortex to call it back into service again. The prefrontal neurons learn the new rules and send signals to the rest of the brain that reorganize it from its old response to a new one. The signals might downplay the importance of colors while highlighting shapes instead. By favoring certain incoming signals, the prefrontal cortex shuts out distractions that interfere with learning a new rule. Once the brain's new rules start producing good results and there is no conflict anymore, the anterior cingulate cortex quiets down again.

Rules for a game of cards can be easily learned and unlearned. But rules we follow for years aren't so easy to overcome. Americans in England often look the wrong way as they step off a curb, even if they know full well that cars in England drive on the left side of the road. A psychological experiment known as the Stroop test can reveal the nature of this mistake. People are asked to look at a word printed in colored letters and name the color they see. The word "red" in green letters slows down people's response enormously, because the two competing responses vie to be our choice. Brain scans show that a Stroop test summons the anterior cingulate cortex into action. As it senses a conflict between responses, it recruits the prefrontal cortex. Boosting the weaker response of naming the color over the habitual response of naming the word, the prefrontal cortex lets a person get the answer right. The same thing happens when we struggle with a name on the tip of our tongue, as a swarm of similar names competes to emerge from our memory. The same network even becomes active when people lie, because a lying brain has to struggle between the strong habit of telling the truth and its new goal of deception.
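
The competition the Stroop test sets up can be caricatured in a few lines of code: two pathways vote on the response, the habitual word-reading pathway is stronger, and a top-down "prefrontal" boost is what lets the weaker, task-relevant pathway win. The strengths and boost factor below are illustrative assumptions, not measured values.

```python
# Toy Stroop conflict: two pathways vote on the spoken response.
# All numbers are illustrative assumptions.

WORD_STRENGTH = 1.0    # habitual pathway: read the printed word
COLOR_STRENGTH = 0.6   # weaker task pathway: name the ink color

def respond(word, ink, prefrontal_boost=1.0):
    votes = {word: WORD_STRENGTH}
    votes[ink] = votes.get(ink, 0.0) + COLOR_STRENGTH * prefrontal_boost
    return max(votes, key=votes.get)   # the strongest pathway wins

# The word "red" printed in green letters:
print(respond("red", "green"))                        # habit wins: "red"
print(respond("red", "green", prefrontal_boost=2.0))  # boosted: "green"
```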

Miller and Cohen's model does a good job of explaining how the brain makes up its mind, but it doesn't capture the full story. Dopamine is only a simple signal, a flag raised at an unexpected reward. It can work only if the brain has already set up its scale of rewards, so that it can decide that some things are more valuable than others. Why should rolling seven at the craps table feel rewarding, while the sight of a starving child doesn't?

Emotions are the reason why. Human emotions are descended from ancient programs that guide animals away from things that might harm them and toward the things they need to survive. The chemicals produced in our bodies at the sight of an oncoming truck aren't profoundly different from the ones produced in a mouse by the sight of an oncoming cat. In both human and mouse, a surge of hormones speeds up the heart and creates an urge to flee or hide. If a mammal is frustrated in a search for sex or territory, it may become enraged. If it is separated from its family, it may feel anxiety.

In the Renaissance many philosophers and physicians believed that the mind's emotions made themselves felt in the body by mystical sympathies. But Thomas Willis reduced the connection to a network of nerves - specifically, the nerves that sprout from the brain above the spinal cord and send their branches to the face, heart, lungs, bowels, and groin. Today, these nerves still carry a name redolent of Renaissance mysticism: the sympathetic and parasympathetic divisions of the nervous system. But thanks to Willis, cosmic sympathies have given way to the mechanical motion of spirits carrying emotional signals from the brain to the body.

Today, neuroscientists are mapping the pathways of the emotions within the brain itself. Fear, for example, depends on the amygdala, an almond-shaped bundle buried deep in the temporal lobe. The amygdala encodes the primal fears we are born with and cements our association with new terrors. Fear strikes us suddenly because the amygdala doesn't have to wait for the higher regions of the brain to work over sensory information or run it through some abstract set of rules. Neuroscientists have been able to activate the amygdala by flashing pictures of angry faces at people for only forty milliseconds - too fast for them to become consciously aware of them. In that brief time, the amygdala may be able to take a rough measure of a situation and detect anything that looks or sounds particularly dangerous. It then sends out a signal that makes hormones race through the body to prepare it to react. In other words, the amygdala acts almost like a little brain unto itself.

As the prefrontal cortex evolved in our ancestors, it became intimately wired into these primitive circuits, transforming simple emotions into nuanced feelings. One region, known as the orbitofrontal cortex, came to play a particularly crucial role. It takes in signals from many emotion-linked regions of the brain and then crunches them like a hedge fund manager, making calculations about the relative value of things. It makes us savor the taste of chocolate when we're hungry and recoil in disgust when we've had too much to eat. It puts emotional value on abstract things such as money by associating with them all the things they signify. While other parts of the prefrontal cortex handle the "how" and "what" questions in life, the orbitofrontal cortex appears to handle the "why." Discoveries like these show how foolish it is to try to dig a deep trench between emotions and rational thought. Emotions sharpen our senses, focus our brains, and help us remember things more clearly. The prefrontal cortex returns the favor by moderating the emotions.

pg 287
The Self and the Brain
...When GlaxoSmithKline launched an ad campaign in 2002 for the antidepressant drug Paxil CR, their slogan was, "I'm back to being me." Me, in other words, is something distinct from the vagaries of the brain itself.

If that were true, these psychiatric drugs wouldn't alter a person who is healthy and free of mental disorders. But that's not the case, as a recent experiment showed. A group of healthy people who were given antidepressants for a few weeks became friendlier and more socially dominant. When were they their real selves - before the drug, or after? Perhaps, if the self is actually encoded in the brain's synapses, the answer is both. Perhaps the gulf between a brain scan and the person looking at it isn't all that wide.

This paradox is not new. We are still wrestling with the contradictions of Thomas Willis's neurology. Willis believed that the sensitive soul was a material system that encompassed the brain, nerves, and spirits, and that it coexisted with a rational soul that was both immortal and immaterial. Yet he was such a good neurologist that he ended up betraying his own claims. If we do have an immaterial soul, scientists today have no hope of finding it, because that which does not obey the laws of nature is beyond science's scope. And yet just about everything that Willis claimed the rational soul does has fallen within its scope. The human brain uses distinctive networks of neurons to carry them out, networks no different in kind from the sort that carry out the business of Willis's sensitive soul.

Reasoning, for example, leaves a mark on an MRI scan. Actually, it leaves many marks, because particular networks in the brain specialize in particular kinds of reasoning, such as deduction, induction, and analogy. These circuits did not appear in our brains out of nowhere. We can see their evolutionary precursors in other primates, who can discover abstract rules with their prefrontal cortex, just as humans do. Mathematics has an evolutionary heritage as well. The network we humans use to solve math problems includes regions of the brain that have other uses - for example, a region that also processes the meaning of words. But the network also encompasses one special strip of the cortex just over the left ear. This math zone is designed for creating an abstract "number line" on which we array numbers that we compute in our heads. Monkeys have rudimentary math skills - they can tell the difference between eight apples and nine, for instance - and they use a smaller version of our number line.

global workspace

The anatomies of reasoning and mathematics have been easier for neuroscientists to map than another faculty of the rational soul: its consciousness of itself. Progress has been slow in part because the words "consciousness" and "self" have a way of slipping around in the semantic mud. But neuroscientists are now taking little steps forward with some basic experiments. One of the most promising theories to come out of these experiments is that consciousness consists of a brain-wide synchrony. When we become aware that we are seeing or feeling something, a lot of neurons start producing synchronized pulses together between thirty and fifty times a second. It's possible that this synchronization joins together many parts of the brain at once, turning them into a giant global workspace where all our perceptions can assemble into a conscious whole.
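
The synchrony in question can be quantified. A common measure is the phase-locking value (PLV), which is near 1 when two signals keep a fixed phase relationship and near 0 when they drift independently. The sketch below applies it to two artificial 40 Hz phase traces (within the thirty-to-fifty-cycles-per-second band described above); it is an illustration of the measure, not an experiment from the book.

```python
# Phase-locking value (PLV) on simulated 40 Hz "neural" phase traces.
# The signals and sampling rate are artificial assumptions.
import math, random

random.seed(1)
N = 1000                                                    # one second at 1 kHz
base = [2 * math.pi * 40 * i / 1000 for i in range(N)]      # phase of region A
locked = [p + 0.5 for p in base]                            # constant lag: in sync
noisy = [p + random.uniform(0, 2 * math.pi) for p in base]  # no consistent relation

def plv(a, b):
    """Phase-locking value: 1.0 = perfect synchrony, near 0 = none."""
    re = sum(math.cos(x - y) for x, y in zip(a, b)) / len(a)
    im = sum(math.sin(x - y) for x, y in zip(a, b)) / len(a)
    return math.hypot(re, im)

print(f"synchronized pair: PLV = {plv(base, locked):.2f}")  # ~1.00
print(f"unrelated pair:    PLV = {plv(base, noisy):.2f}")   # near 0
```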

Finding the mechanisms of consciousness will not mean that we lack a true self. It's just that this self looks less and less like what most of us picture in our heads - an autonomous, unchanging being that has a will all its own, that is the sole, conscious source of our actions, and that distinguishes humans from animals. All animals probably create some kind of representation of their bodies in their brains, and humans simply create a particularly complicated model. We infuse it with memories, embellish it with autobiographies, and project it into the future as we ponder our hopes and goals.

Zoon politicon (Aristotle)

The human self did not reach this complicated state on its own. It is more like a node in the social network of our species. All primates are remarkably social creatures, and our ancestors ten million years ago were no different, depending on one another to escape leopards and to fight other primate bands for fruit trees. Under these conditions, our ancestors evolved into political animals, capable of creating coalitions and settling conflicts. They squabbled over food, competed for sex, and made their way up and down the social hierarchy. By five million years ago our ancestors had become upright walkers who probably traveled in bands a few dozen strong. They evolved an ability to understand the minds of other people and to predict what other people would do. They found happiness in cooperation and trust, which helped them search for food and shelter together.

The result of this evolution was an awesome social computer. The human brain can make a series of unconscious judgments about people - recognizing their faces, judging their emotions, and analyzing their movements - in a fraction of a second. In recent years, neuroscientists have been mapping out the networks that make this social intelligence possible, and one of their most astonishing discoveries is that a picture of the brain thinking about others is not all that different from a picture of the brain thinking about oneself. Some neuroscientists think the best explanation for this overlap is that early hominids were able to understand others before they could understand themselves.

As strange as this might sound, it makes evolutionary sense. There could have been a huge advantage to a hominid in understanding the intentions, feelings, and knowledge in the brains of others. Only later did a full-blown human self emerge from the same neural circuitry, like a mental parasite. This theory might help explain the way our brains sometimes blur the line between ourselves and others. Our overlapping circuitry may make some people prone to projecting what they can't reconcile with themselves onto someone else. Our own thoughts become communications from aliens transmitted through the fillings in our teeth. A ghost nudges the pointer on a Ouija board. A divining rod dips.

The self, neuroscientists are finding, has an ancestry, a physical wiring, and biological weaknesses. So do consciousness, reasoning, mathematics, and the other faculties that Thomas Willis believed were the business of the rational soul. And the same holds true for what Thomas Willis believed was the highest calling of the soul and the ultimate purpose of all his anatomizing - understanding what is good and bad.

For Thomas Willis, morality was a straightforward matter. God endowed man with a rational soul, which determines right and wrong through reason. Willis founded the science of neurology on this belief, convinced that only with a healthy brain could a rational soul exercise right reason. The delusions of a fever and the rantings of a false religion were equally dangerous threats to a person's moral judgments. Ultimately, a clouded brain could deprive a soul of salvation.

Thanks to Willis's wayward student John Locke, philosophers in the eighteenth century stopped looking to the physical workings of the brain to understand morality. An Enlightenment philosopher looked instead to the realm of ideas and reason. Immanuel Kant argued that reason alone showed that morality boiled down to a few rules: that we must not use other people purely as a means to our own ends, and that one should personally follow a maxim only if it could be turned into a universal law. In later years, other philosophers, such as John Stuart Mill, found another explanation for right and wrong: they are measures of the happiness brought to the greatest number of people. While Mill and Kant might disagree about the foundations of morality, they agreed on one thing: we make moral judgments by reasoning about right and wrong, which are part of the real world that lies outside the mind - a school of thought known as moral realism.

In recent years, a growing number of philosophers have become skeptical about moral realism. No matter how moral realists try to prove the objective reality of moral judgments, sooner or later they all end up sounding like the parents of little children, driven to saying, "Just because!" Why is setting a cat on fire wrong? Because it causes unnecessary suffering. Why is unnecessary suffering wrong? Because a person who is fully informed and fully rational would say that it is wrong. Why would such a person say it is wrong? Just because!

The Enlightenment philosopher David Hume was the first to declare that we do not approve of good acts because we rationally recognize them as good, but because they just feel good. Likewise, we call things wrong because we have a feeling of disgust for them. Moral knowledge, Hume wrote, comes from an "immediate feeling and finer internal sense," not by a "chain of argument and induction."

Hume's ideas were promptly buried in Kant's avalanche of reasoned morality and would not be dug up again for a century, when Charles Darwin realized that evolution shaped not only bodies but thought as well. If philosophers really wanted to answer many of their biggest questions, they should get back to natural philosophy, to Thomas Willis's approach to the brain. "Origin of man now proved," Darwin wrote in his notebook in 1838. "Metaphysics must flourish. He who understands baboon would do more toward metaphysics than Locke."

Social Intuition

Inspired by Hume and Darwin, today's opponents of moral realism have created a new theory of moral judgment. Calling themselves social intuitionists, they argue that when people decide what is right or wrong, reasoning plays a minor role. Most of the time, moral judgments occur in the hidden world of unconscious emotional intuitions. These intuitions have a long evolutionary history in our primate ancestors.
Groups of chimpanzees, for example, will punish misbehaving individuals. A zookeeper once witnessed this proto-morality when he began to feed his chimpanzees only after they all had come into an enclosure. Sometimes a few young chimps dallied outside for hours. The other chimps would remember their misdeed, and attack the stragglers the following day.

Chimps may be smart, but they don't read Kant. The stragglers were punished not because the chimps reasoned about their behavior, but because they got angry. According to the social intuition model, similar emotional responses underlie human moral judgments as well.

Social intuitionists don't claim that humans are hardwired with one type of morality. That would be like saying we are all hardwired to speak Hindi. All people are born with an instinct for learning the rules of grammar, but depending on where they grow up, they become fluent in Hindi or English or Farsi or Xhosa. As children are learning languages, they are also picking up the particular morality of their culture. They end up with both a mother tongue and a mother morality. These intuitions make us judge other people in certain ways, and they also influence how we conduct our personal lives. But if the brain's circuitry is damaged, these intuitions may not form, and a child may not develop into a moral adult. There's evidence of this sort of damage in the brains of psychopathic criminals. They fail to respond to the sight of a crying child the way other people do - even nonpsychopathic murderers feel a twinge.

Social intuitionists do not ban reason from moral judgments altogether. We use reason to sort through a complicated dilemma, but it's a slow operation that runs awkwardly compared to our swift intuitions. More often, reasoning brings up the rear, creating after-the-fact justifications for our snap judgments.



