Jerome A. Feldman
From Molecule to Metaphor
A Neural Theory of Language
MIT Press 2008

A Brief Guide to the Book

The book is designed to be read in order; each chapter provides some of the underpinnings for later ideas. But it should also be possible to look first at the parts that interest you most and then decide how much effort you wish to exert. There are many forward and backward pointers that may help integrate the material.
Information processing is the organizing theme of the book. Language and thought are inherently about how information is acquired, used, and transmitted. Chapter 1 lays out some of the richness of language and its relation to experience. The central mechanism in my approach to the Neural Language Problem is neural computation. Chapters 2-3 provide a general introduction to neural computation. Chapters 4-6 provide the minimal biological background on neurons, neural circuits and how they develop. We focus on those properties of molecules, cells, and brain circuits that determine the character of our thinking and language.
Chapters 7 and 8 consider thought from the external perspective and look at the brain/mind as a behaving system. With all of this background, Chapter 9 introduces the technical tools that will be used to model how various components of language and thought are realized in the brain. A fair amount of mechanism is required for my approach, which involves building computational models that actually exhibit the required behavior while remaining consistent with the findings from all disciplines. I refer to such systems as adequate computational models and believe that such models are the only hope for scientifically linking brain and behavior. There is no guarantee that an adequate model is correct, but any correct model must be adequate in the sense defined above.
The specific demonstrations begin with a study of how children learn their first words. This involves some general review (Chapter 10) and a more thorough study of conceptual structure (Chapter 11) that is needed for word learning. The first detailed model is presented in Chapter 12, which describes Terry Regier's program that learns words for spatial relation concepts across languages. This theme of concrete word learning is then extended to cover words for simple actions in Chapters 13 and 14, which describes David Bailey's demonstration system.
The next section extends the discussion to words for abstract and metaphorical concepts. In Chapters 15 and 16, we look further at the structure of conceptual systems and how they arise through metaphorical mappings from direct experience. Chapter 17 takes the informal idea of understanding as imaginative simulation and shows how it can be made the basis for a concrete theory. This theory is shown in Chapter 18 to be sufficiently rich to describe linguistic aspect - the shape of events. This is enough to capture the direct effects of hearing a sentence, but for the indirect consequences, we need one more computational abstraction of neural activity - belief networks, described in Chapter 19. All of these ideas are brought together in Srinivas Narayanan's program for understanding news stories, discussed in Chapter 20.
Chapters 21-25 are about language form, i.e., grammar - how grammar is learned and how grammatical processing works. Chapter 21 lays out the basic facts about the form of language that any theory must explain. Chapter 22 is partly a digression; it discusses the hot-button issues surrounding how much of human grammar is innate. We see that classical questions become much different in an explicitly embodied neural theory of language and that such theories can be expressed in standard formalisms (Chapter 23).
Chapter 24 shows how the formalized version of neural grammar can be used scientifically and to build software systems for understanding natural language. The poster child for the entire theory is Nancy Chang's program (Chapter 25) that models how children learn their early grammar - as explicit mappings (constructions) relating linguistic form to meaning. Chapter 26 discusses two questions that are not currently answerable: the evolution of language and the nature of subjective experience. Finally, Chapter 27 summarizes the book, and suggests that further progress will require a broadly based Unified Cognitive Science. But the scientific progress to date does support a range of practical and intellectual applications and should allow us to understand ourselves a bit better.

A version of the material in this book has been taught to hundreds of undergraduate students at UC Berkeley over the years. There were weekly assignments and most of the students actually did them. The course did not work for all the students, but a significant number of them came out of the class with the basic insights of a neural theory of language. If you want to understand how our brains create thought and language, there is a fair chance that the book can help.

Excerpts:
...Much of molecular biology is concerned with how genetic material yields the various proteins and resultant organisms. Higher levels of biology also try to develop bridging theories. We can see the search for a neural theory of language as one such attempt, albeit an unusually ambitious one. These bridging theories are often developed as computer simulations, and this book follows this tradition.
I treat the mind as a biological question - language and thought as adaptations that extend abilities we share with other animals. For well over a century, this has been the standard scientific approach to other mental capacities such as vision and motor control. But language and thought, even now, are usually studied as abstract formal systems that just happen to be implemented in our brains... We pursue four questions that must be asked of any biological ability:

How does it work?
How does it improve fitness?
How does it develop and adapt?
How did it evolve?

I. Embodied Information Processing page 1

The Embodied mind
Human language and thought are crucially shaped by the properties of our bodies and the structure of our physical and social environment. Language and thought are not best studied as formal mathematics and logic, but as adaptations that enable creatures like us to thrive in a wide range of situations.
Our systems of abstract and metaphorical thought and language arise from everyday experiences and the basic neural learning mechanism.
Grammar consists of neural circuitry pairing embodied concepts with sound or sign. Grammar is not a separate faculty, but depends on embodied conceptual and phonological systems.
9 The rules or patterns of language are called constructions. Constructions integrate different facets of language - for example, phonology, pragmatics, semantics, and syntax. The request construction might specify a grammatical form, an intonation pattern, pragmatic constraints, and the intended meaning.
This integrated, multifaceted nature of language is hard to express in traditional theories, which focus on the separate levels and sometimes view each level as autonomous. But constructions can provide a natural description of the links between form and meaning that characterise the neural circuitry underlying a real human language. They offer a high-level computational description of a neural theory of language (NTL).
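The idea of a construction as a pairing of form constraints with meaning can be sketched in code. The sketch below is purely illustrative and not Feldman's (or anyone's) formalism: the `Construction` class, its field names, and the request example are all made-up assumptions to show how phonological, syntactic, and pragmatic facets might be bundled into one unit.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a construction: a pairing of constraints on
# linguistic form with a schematic meaning. All names are illustrative.
@dataclass
class Construction:
    name: str
    form: dict                      # e.g. word-order pattern, intonation
    meaning: dict                   # e.g. intended speech act and roles
    pragmatics: dict = field(default_factory=dict)

request = Construction(
    name="Request",
    form={"pattern": ["Could", "you", "VP"], "intonation": "rising"},
    meaning={"speech_act": "request", "desired_action": "VP"},
    pragmatics={"politeness": "high"},
)

def matches(construction, tokens):
    """Crude form check: literal slots must match; uppercase slots
    (like VP) are variables that match anything."""
    pattern = construction.form["pattern"]
    if len(tokens) < len(pattern):
        return False
    return all(slot == tok or slot.isupper()
               for slot, tok in zip(pattern, tokens))

print(matches(request, ["Could", "you", "open the window"]))  # True
print(matches(request, ["Can", "you", "open the window"]))    # False
```

The point of the sketch is only that form and meaning live in one object, so recognising the form directly activates the associated meaning, rather than passing through autonomous levels.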

Feldman 15-38
The information processing perspective: Neuroscientists speak of neurons as processing information and communicating by sending and receiving signals. They also talk of neurons as performing computations. In fact, neural computation has become the standard way of thinking about how the brain works. But neurons are just cells, tiny biological entities that are alive and function by means of chemistry. Why can we say that neurons process information and perform computations?
Communication and co-ordinated evolution:
Communication between cells is a major evolutionary advance and a prerequisite for the appearance of multicelled creatures like ourselves. Individual cells survive by carefully controlling their internal chemistry, and it goes against their nature to allow in outside agitators. Of the 4 billion years since life began, about two thirds was required to evolve the simplest multicellular organisms and their coordination mechanisms. The basic mechanism of this communication is molecular matching… the emission and subsequent recognition of a signal molecule is the simplest form of communication among living things.

33 Functionalism
No one believes that the computer simulation of a weather pattern is itself anything like the weather: the ever-changing clouds, the heat, the rain in your face, the oppressive humidity, and so on. The model is clearly distinct from the reality being modelled.
But when we use a computer programme to simulate some function of the brain, we get into some delicate philosophical questions about exactly what is being done. One possibility is that, like the weather simulation, the computer is simply being used to carry out a formal, computational description of some process of the mind. The process is understood as being carried out by the physical brain, which is quite different from the programme used to model it.
A second possibility, however, has become a major intellectual position within Anglo-American philosophy, generative linguistics, cognitive psychology, and artificial intelligence. This position is called functionalism. In its strong form, it claims that the way the mind is physically embodied in the brain is irrelevant to the study of mind. Functionalism as a principle is the opposite of an embodied theory, which suggests that everything important about language depends on the brain and body…
As we saw in previous chapters, scientists all study nature using various perspectives, and a functional analysis is usually involved. Almost everyone agrees that a functional level of description is needed for language and thought. Philosophical functionalism holds that everything important about language and thought can be understood completely using information processing models, without looking at the brain at all. An even stronger position claims that any information processing system of sufficient complexity will automatically have all the mental powers of the mind, including consciousness. This stance is also called strong artificial intelligence.
37 Neural information processing
Neural information processing systems are sufficiently different from their electronic counterparts that it has proved necessary to develop special theories and simulation techniques for the neural case… neurons are a million times slower than electronic components. But each neuron is connected to thousands of others, most of which are active most of the time. In contrast, electronic computers are extremely fast, but have only local effects and only a tiny fraction of the elements are simultaneously active.

II. How the Brain Computes page 41
III. How the Mind Computes page 83

Feldman 95
Embodied Concepts: The first seven chapters summarised our magnificent neural machinery, how it develops, and how it can be studied as an information processing system. Almost all of that discussion applies to animals in general, and there is much more to be learned by studying animals as information processing systems, adapting to their environment and goals. But this book is about one special adaptation, language, that is unique to humans. Human conceptual systems are inextricably linked to language.
96 The basis for concepts is categorisation. Categorisation occurs whenever a lot of data is boiled down to a few values. This happens in the retina and everywhere else in the brain, whenever a number of neurons signal to another neuron. Categorisation is not just a function of language. All living systems categorise.
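This sense of categorisation - many graded inputs boiled down to one discrete output - can be illustrated with a single model neuron. The sketch below is a generic threshold unit, not a model from the book; the weights and threshold are made-up illustrative numbers.

```python
import random

# Minimal sketch of categorisation by one model neuron: a thousand
# graded inputs are reduced to a single categorical decision.
# The weights and threshold are arbitrary illustrative values.

def threshold_unit(inputs, weights, threshold):
    """Fire (1) if the weighted sum of inputs crosses the threshold,
    otherwise stay silent (0)."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

random.seed(0)
inputs = [random.random() for _ in range(1000)]   # noisy graded signals
weights = [0.002] * 1000                          # uniform small weights
print(threshold_unit(inputs, weights, threshold=0.5))  # prints 1
```

However many inputs arrive, the downstream neuron reports only "category present" or "category absent" - which is why categorisation happens everywhere neurons signal other neurons, not only in language.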
Some philosophical traditions ask us to rise above our human categorisations and see the world as it really is, assuming some basic structure of nature that is independent of people. However, this is impossible for neural beings who evolved to do best-fit matching of input to the current context and goals. We have good reason to believe that there is a real physical world, but not that there is a privileged way of categorising it. People evolved to develop categories that match their situation and needs. These must be consistent with the facts about the physical and social environment or they wouldn't be of any use.

IV. Learning Concrete Words page 123

Feldman 125
129 There is an even more basic problem in learning how words refer to things in the external world, which is usually ignored, but is important for our neural understanding of language. If, as our theory suggests, the child's experience is the product of her neural and hormonal activity, why should she believe an entity is in the external world?
A simple and traditional, but inadequate, answer is that the world is inherently made up of fixed entities and our brains evolved to recognize and deal with these entities. But humans categorise experience in various ways according to their situation and needs.
One part of an adequate explanation for our belief in an external world involves this general human tendency to categorise inputs.

Feldman 135
Conceptual Schemas and Cultural Frames: The child's first words are labels for his or her experience, but not all experiences can be described with a single word. Actions such as "grasp" or spatial relations such as "support" inherently have multiple participants, or roles. Grasping requires roles for at least the grasper and the thing being grasped. Coordinated motor activities such as grasping are called motor schemas. The same term, schemas, is used to describe relational information, as in the concept of "support". Many of these cognitive structures are universal across all languages and cultures, and I refer to all of these as conceptual schemas or sometimes just schemas.

V. Learning Words for Actions page 161
VI. Abstract and Metaphorical Words page 163

Feldman 185
Conceptual Systems: …We discovered how children learn to talk about their experience of spatial relations and motor actions. For perception, action, emotions, and so on, human experience and the social relations shared by all people provide the basis for learning words. We explored how children around the world learn their first words for colours, for naming things, for spatial relations, and for their own actions. The same basic labelling processes apply to many other aspects of direct experience, including the properties of objects and actions, personal desires, and family relations. The depth and breadth of the child's experience is remarkably rich and provides the source for all advanced concepts. This universal shared experience of children is still only a small part of what comprises adult conceptual systems and language. The three chapters of this section outline a theory of how abstract, cultural, and technical words and concepts arise from the substrate of direct experience. Neural embodiment remains central to the story - people, as neural systems, understand abstract ideas because these concepts are mapped to and activate brain circuits involved in embodied experience.
This interplay between direct experience and language-driven learning is the primary basis for the transmission of culture to children. As a child learns to deal with the world, family and community point out and label features of the physical and social environment they consider important. This inevitably controls the way the child perceives the world and organises knowledge and behaviour - it determines the child's conceptual system. One of the most heated controversies involving the brain and language is whether the language a person speaks limits what he or she can think about - often called linguistic determinism.

The Sapir-Whorf Hypothesis: A number of results from various labs now show language-related differences on some tasks. All of these results are tendencies - no absolute differences are known to arise from variations in the grammatical form of one's language. However, we now know that the language people speak does have a measurable effect on how they think.

Feldman 194
How do people learn the concepts and language covering the rich array of cultural frames? In particular, what does the embodied neural theory of language have to say about learning and using the language of cultural discourse?
The answer is metaphor. Metaphor in general refers to understanding one domain in terms of another. The NTL approach suggests that all of our cultural frames derive their meanings from metaphorical mappings to the embodied experience represented in primary conceptual schemas. The next few chapters elaborate on the related ideas of meaning as metaphor and simulation.

196 Metaphors for language and thought:
Thinking is perceiving
Thinking is moving
Thinking is eating
in which ideas are food, communicating is feeding, accepting is swallowing, understanding is digesting, and so on.
These examples are all clearly metaphorical. They are systematic. They involve applying the reasoning of the embodied (source) domains to the abstract (target) domain. They define a large proportion of our modes of comprehension of what ideas, thought, understanding, and communication are. Try having a conversation about thinking, communicating, and understanding for 10 minutes without using any of these metaphors or any of the reasoning that arises from their use. You probably won't notice unless you pay close attention, but you will be using some of these metaphors.
Thought is language.
Thought is mathematical calculation.
The mind is a machine.
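The structure of a conceptual metaphor - a systematic mapping from an embodied source domain to an abstract target domain - can be made concrete with a small sketch. The pairs below follow the book's "thinking is eating" example; the dictionary representation and the `project` function are my own illustrative assumptions, not NTL machinery.

```python
# Illustrative sketch of a conceptual metaphor as a source-to-target
# mapping. The pairs come from the "thinking is eating" example:
# ideas are food, communicating is feeding, and so on.
IDEAS_ARE_FOOD = {
    "food": "ideas",
    "feeding": "communicating",
    "swallowing": "accepting",
    "digesting": "understanding",
}

def project(terms, mapping):
    """Replace source-domain terms with their target-domain meanings,
    leaving unmapped terms unchanged."""
    return [mapping.get(term, term) for term in terms]

print(project(["digesting", "food"], IDEAS_ARE_FOOD))
# ['understanding', 'ideas']
```

The key property the sketch captures is systematicity: the mapping carries whole patterns of source-domain reasoning (if you swallow something without digesting it, you accepted it without understanding it) over to the target domain, rather than translating isolated words.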

VII. Understanding Stories page 225
VIII. Combining Form and Meaning page 257

271 The Language Wars: Chomsky - Lakoff
283 Combining Meanings - Embodied Construction Grammar
295 Embodied Language
325 Remaining Mysteries:

Feldman 325
The mystery of language origins:
How did we come to develop a communication system that is so much richer than that of other animals? This hotly debated topic is related to the "language gene" controversy and is also popular in the media. Every few months, we get another story about how some new finding has solved the mystery of language origins.

328 ...the mystery of the origins of human language is not likely to be solved any time soon. But it is not a profound mystery. Everyone agrees that expressive language confers very significant evolutionary advantages on groups that can use it.
Biological evolution moves too slowly to explain the rise of language (and modern civilisation) in just some thousands of years, but cultural evolution is easily fast enough. In a general way, it must be true that the genesis of language was neither a biological event nor individual learning, but a social phenomenon. The biological precursors, whether specific to language or more general, were almost certainly evolving well before the rapid rise of language. The mathematics of this kind of rapid change from a slowly evolving base is well understood as part of dynamical systems theory.
329 ...We discuss this ability in terms of mental spaces.
I believe there is a plausible story about how a discrete evolutionary change could have given early hominids a simulation capability that helped start the process leading to our current linguistic abilities. Mammals in general exhibit at least two kinds of involuntary simulation behaviour - dreams and play. While a cat is dreaming, a centre in the brain stem blocks the motor nerves so that the cat's dream thoughts are not translated into action. If this brainstem centre is destroyed, the sleeping cat may walk around the room, lick itself, catch imaginary mice, and otherwise appear to be acting out its dreams. There is a general belief that dreaming is important for memory consolidation in people, and this would also be valuable for other mammals. Similarly, it is obvious that play behaviours in cats and other animals have significant adaptive value.
Given that mammals do exhibit involuntary displacement in dreams, it seems that only one evolutionary adaptation is needed to achieve our ability to imagine situations of our choosing. Suppose that the mammalian involuntary simulation mechanisms were augmented by brain circuits that could explicitly control what was being imagined, as we routinely do. This kind of overlaying a less flexible brain system with one that is more amenable to control is the hallmark of brain evolution, and no one would be surprised to find another instance of this mechanism. Now, hominids who could do detached simulations could relive the past, plan for the future, and be well on their way to simulating other minds. Understanding other minds would then provide a substrate for richer communication and all the benefits that accrue from the use of mental spaces.
One crucial component of mental space reasoning is the ability to map ideas from one mental space to another. This is how we draw lessons from the past or change our plans after thinking about the consequences. People can predict what someone is likely to do based on what she says. So, our general simulation faculty must include the ability to maintain and exploit relational mappings. The learning of grammar could be very nicely modelled as learning relational mappings between regularities of linguistic form and the underlying meaning they convey, and some such mapping ability seems to be required under any theory of grammar. Even more speculatively, the combined ability to imagine separate scenarios and to map them together is perhaps one of the foundations of many human capabilities, including grammar. This is close to the proposal of cognitive scientists Gilles Fauconnier and Mark Turner in a recent book, "The Way We Think" (2002). Whatever combination of biological and cultural evolution gave rise to early human language, it is no mystery that it developed rapidly and, in all cultures, has a vast array of uses in human communication and thought.
We would love to know more about how language evolved, but it is unlikely that any theory of language origins would change our basic ideas of who we are and how the world works.

Trying to understand Jerome Feldman's book, I have attempted to condense his complex line of thought. You can find this short version under: Feldman Ideas

Jerome A. Feldman - Papers

Cognitive Linguistics