The Emergence of Silicon-Based Consciousness

Christopher Altman


Pierre Laclede Honors College

christaltman@artilect.org



"Any sufficiently advanced technology is indistinguishable from magic."

Arthur C. Clarke’s Third Law



The study of consciousness and its origins has experienced radical breakthroughs with the advent of emerging technologies in neuroscience. Once shrouded in mystery, the inner workings of the brain have been progressively illuminated by rapid advances in the computing industry, coupled with insights gleaned from medical science. New technologies such as MRI scanning enable scientists to observe the neurological correlates of mental processes in ever finer detail, offering a glimpse into the interior of a fully functioning brain in real time. Yet even with these rapid advances in consciousness research, laying a theoretical foundation that explains precisely how consciousness arises remains a formidable task. Its resolution may prove to be the most important scientific discovery of our time.

PROPOSED THEORIES OF CONSCIOUSNESS

The scientific community has generally accepted that consciousness is an emergent, system-level feature of neurophysiological processes. Exactly how our individual subjective experiences arise has been a matter of long-standing debate in both scientific and philosophical circles, but a number of currently proposed theories attempt to resolve the mystery of consciousness. Among these theories: (1) consciousness is a feature of synchronized resonance among neurons in the frontal cortex, (2) consciousness results from quantum coherence in neuron microtubules, and (3) consciousness is an emergent property of complex systems.


RESONANCE

The neural resonance hypothesis was first explored by Francis Crick of the Salk Institute and Christof Koch of the California Institute of Technology. They observed that certain areas of the brain critical for awareness fire in complex, organized patterns. Large groups of neurons in the frontal cortex fire in synchronous pulses; in the visual cortex this synchrony occurs at roughly 40 cycles per second. In much the same way as a symphony is produced by a fusion of complementary melodies, consciousness may arise as harmonic standing waves form among immense numbers of neurons spread throughout the brain. These waves may constitute a kind of working memory that allows for the formation of a unified consciousness.


COHERENCE

An alternative and controversial explanation of consciousness was originally put forward by the renowned mathematician Sir Roger Penrose. Penrose teamed with anesthesiologist Stuart Hameroff, positing that consciousness is a product of quantum interactions occurring within microtubules, the slender protein structures that form part of the cytoskeleton of eukaryotic cells. Fluctuations occurring at the quantum level could produce quantum coherence capable of influencing neuron activity at the macroscopic scale.

A common objection in the scientific community is that under normal conditions these kinds of quantum effects have only been observed near absolute zero. At the temperatures found within biological systems, the level of random noise would likely rule out coherence unless some unknown factor comes into play. However, nature has had billions of years to overcome this obstacle; it is possible that microtubules have evolved to promote coherence. Simulations underway by the Tuszynski Biophysics Group at the University of Alberta are examining this and other questions.


EMERGENCE

A third mechanism put forward to explain the mystery of consciousness holds it to be an emergent property of complex systems. Ernest Nagel (1961) and Brian McLaughlin (1992) cite Mill's 'Of the Composition of Causes' chapter of A System of Logic (1843) as the locus classicus for the notion of emergence. As applied to neuroscience, consciousness originates at a fundamental level of information processing and is expressed when this processing reaches a certain level of complexity.

High-level feedback mechanisms that evolved to respond to rapidly changing sensory input may give rise to a subjective awareness of mental activity. This coherent, unified sense of self, arising from the interaction of many otherwise unrelated subsystems, would play a role in evolution by promoting the survival of the individual through long-term decision-making and goal-oriented behavior. Consciousness is critical for abstract reasoning and long-term planning: a thinking, conscious individual can better evaluate and adapt to changes in its surrounding environment. Once a system reaches a critical level of complexity, this integration of neurological functioning may allow consciousness to form.


THE POTENTIALS OF ARTIFICIAL INTELLIGENCE

Given that our sense of self arises from and depends upon the brain's physiological processes, the possibility of instantiating consciousness in alternative substrates becomes far more plausible. The human brain can be viewed as a biomechanical machine whose development is governed by the interplay of tens of thousands of genes. This interplay determines synaptic growth, neurotransmitter production, and the myriad other functions that constitute the brain.

However, the human nervous system is a remarkable instrument of bewildering complexity, arguably the most complex system known. To recreate the human mind, one must successfully emulate its processes, a daunting task. When examining the mind at work, introspective decision-making processes are necessarily more accessible than the processes underlying awareness itself. Those available to conscious examination can be formally defined.

Analysis of the decision processes involved in chess playing has proven far more tractable for artificial intelligence researchers than the processes involved in vision. Consequently we can build chess-playing computers that best the top players in the world, whereas building a robot that can navigate open terrain has proven notoriously difficult. There are many schools of thought in the field known as artificial intelligence, but they can generally be broken down into two approaches: the top-down and the bottom-up.

The top-down approach attempts to mimic human intelligence by applying precise rules to the thinking process. One example is the expert system: a program given a large body of information about a specific subject that generally performs well within its parameters but fails miserably outside its given range. Conversely, the bottom-up approach attempts to build intelligence from the ground up, modeling networks of neuron-like units that learn from experience.
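
To make the contrast concrete, here is a minimal, purely illustrative sketch in Python: a toy "expert system" with two hand-written rules alongside a single perceptron that learns a yes/no mapping from examples rather than from explicit rules. The rules, symptoms, and training routine below are invented for illustration and are not drawn from any system described in the text.

    # Top-down (illustrative): a toy rule base that answers only within its domain.
    RULES = {
        ("cough", "fever"): "possible flu",
        ("fever", "rash"): "possible measles",
    }

    def expert_system(symptoms):
        """Answer only when a rule matches; fail outside the rule base."""
        return RULES.get(tuple(sorted(symptoms)), "no rule applies")

    # Bottom-up (illustrative): a single perceptron that learns from labeled examples.
    def train_perceptron(samples, epochs=20, lr=0.1):
        """samples: list of (input_vector, target) pairs with target in {0, 1}."""
        n = len(samples[0][0])
        weights, bias = [0.0] * n, 0.0
        for _ in range(epochs):
            for x, target in samples:
                output = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
                error = target - output
                weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                bias += lr * error
        return weights, bias

    # The perceptron can learn a simple rule such as logical OR from examples alone:
    print(train_perceptron([([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]))
    print(expert_system(["fever", "cough"]))   # -> 'possible flu'
    print(expert_system(["headache"]))         # -> 'no rule applies'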


LIMITATIONS OF THE TOP-DOWN APPROACH

Of the two, the top-down approach showed impressive early results but has since failed to produce significant advances. The number of 'rules' governing behavior is simply too vast and ill-defined, and the rules frequently appear to conflict with one another. This is in large part because the brain itself is a continuously evolving, massively parallel system that merges digital and analog computation and does not follow the black-and-white decision-making of digital programming. Information is instead processed via cascading waves of electrical signals that are augmented and controlled by hundreds of neurotransmitters and hormones, forming a vastly complex system that remains nearly opaque to the probing of modern science.


RISE OF THE BOTTOM-UP APPROACH

Due to the complexity of the systems that comprise the human brain, the bottom-up approach initially proved far too formidable for AI researchers. The vast number of neurons, upwards of 100 billion, and their interconnections, averaging 10,000 per neuron, were simply too complex to fathom, and processing power was not up to the task of such a mammoth undertaking.

But given the current state of computing technology and the pace of its continued exponential growth, the bottom-up approach shows great promise in succeeding where the top-down approach has stalled. New techniques in parallel-distributed processing break large problems into manageable pieces, allowing researchers to solve previously intractable problems in a relatively short amount of time. One powerful tool made practical by these advances is artificial evolution.


ARTIFICIAL EVOLUTION

Artificial evolution uses computers to evolve complex systems from simple initial states following the same rules as Darwinian evolution. Recall that all life on earth can trace its origins to the random interaction of simple molecules early in the earth's history. These molecules combined to form complex proteins, and those that were readily reproducible naturally overran the untapped environment. From these early self-replicating molecules evolved DNA, which can itself be seen as a molecular computer that processes information in a base-4 rather than a base-2 coding language. All life, including sentient, self-aware, and eternally questioning human beings, evolved from this primal soup of molecules.
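
As a small illustration of the base-4 point, each nucleotide carries two bits of information, so a DNA sequence maps directly onto a binary string. The sketch below is illustrative only, and the particular base-to-bit assignment is an arbitrary convention chosen for the example.

    # Illustrative only: a base-4 DNA sequence expressed as a base-2 bit string.
    BASE_TO_BITS = {"A": "00", "C": "01", "G": "10", "T": "11"}

    def dna_to_bits(sequence):
        """Translate a DNA string into its two-bits-per-base binary encoding."""
        return "".join(BASE_TO_BITS[base] for base in sequence.upper())

    print(dna_to_bits("GATTACA"))   # 7 bases -> 14 bits: '10001111000100'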

The self-organization of complex, thinking creatures from initial disorder is a striking example of emergence in complex systems, in which intricate behaviors arise from simple initial conditions and simple rules. In computer simulations with artificial life, this trend towards increasing complexity not only occurs routinely, it appears to occur more readily, and more inevitably, than is usually assumed. This holds profound implications for the question of life elsewhere in the cosmos and suggests that the mathematical architecture of the laws governing the universe is not only capable of supporting life but is predisposed to evolving it.


GENETIC ALGORITHMS

The primary tool AI employs for the evolution of complexity is the genetic algorithm (GA), the software analogue of the genes found in nature: specific software instructions play the role of the chromosomes encoded in DNA. A problem is approached by creating a sample population of candidate solutions, usually generated at random. Each candidate is rated with a fitness score according to how well it solves the problem; the highest-scoring candidates are then paired and mated to create the next generation, with random mutations introduced at the time of pairing to inject new possibilities into the evolutionary process. This cycle is repeated, often in parallel, until a suitable solution is found.
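
A minimal sketch of this loop follows, using an arbitrary toy problem (evolving a bit string toward a target pattern); the target, population size, selection scheme, and mutation rate are illustrative choices, not values from the text.

    import random

    # Illustrative only: evolve bit strings toward a target pattern.
    TARGET = [1] * 20

    def fitness(genome):
        """Count how many positions match the target."""
        return sum(1 for g, t in zip(genome, TARGET) if g == t)

    def mate(parent_a, parent_b, mutation_rate=0.02):
        """Single-point crossover followed by random bit-flip mutation."""
        point = random.randrange(len(parent_a))
        child = parent_a[:point] + parent_b[point:]
        return [1 - g if random.random() < mutation_rate else g for g in child]

    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(50)]
    for generation in range(200):
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            break                                   # a perfect solution has evolved
        parents = population[:10]                   # only the fittest reproduce
        population = [mate(random.choice(parents), random.choice(parents))
                      for _ in range(50)]

    best = max(population, key=fitness)
    print(f"generation {generation}: best fitness {fitness(best)} / {len(TARGET)}")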

Most algorithms will not meet fitness standards and quickly die off, but a few will survive and pass their genes to the successive generation. Over many generations, the process rapidly converges on a remarkably efficient solution to the given problem. State-of-the-art GAs can now be evolved not only in software but in physical hardware, on chips with reprogrammable logic gates called field-programmable gate arrays (FPGAs). Subtle variances in the physics of the electromagnetic fields these chips generate allow a diverse array of contributing effects to emerge that would otherwise go unseen, and researchers have found that GA-evolved circuits sometimes exploit these effects to their advantage in solving a given problem, another example of the creative potential of modeling Darwinian evolution.


SIMULATING EVOLUTION IN SILICO

"At this moment, computers show no sign of intelligence. This is not surprising, because our present computers are less complex than the brain of an earthworm. But it seems to me that if very complicated chemical molecules can operate in humans to make them intelligent, then equally complicated electronic circuits can also make computers act in an intelligent way."

Stephen W. Hawking

Employing computers to simulate evolution provides many advantages over the process nature used to develop complexity. In nature, development depends on random interactions between molecules in the environment and unfolds over geological time scales. Mutation and progress occur at a snail's pace when viewed from the time frame of silicon-based evolution, and mutation frequently brings about changes counter-productive to the survival of the species.

In a silicon-based medium, digital genes can pair and mutate at a rate of thousands of generations per second. Variables can be manipulated to increase the rate at which the system converges on a desired solution. Computers allow us to reproduce evolution at an exponential rate that increases in parallel with our processing speed.

The exponential growth in computing power we are now experiencing is a self-propagating feedback loop that will allow the development of goal-oriented complex systems rivaling the computing power of the human brain within a generation: following Moore's Law, a parallel-distributed-processing network will rival human memory capacity by 2010.

A primary stumbling block to successfully reproducing the behavior of the human brain lies in the fact that neurons are themselves complex processors that merge analog and digital computation to reach decisions. A single neuron fires on an all-or-none, essentially digital decision, but that decision is shaped by analog influences: the graded input of surrounding neurons and the ambient concentration of neurotransmitter molecules in its vicinity.

Further, the sheer number of excitatory and inhibitory neural pathways that influence neural functioning is dazzling in its complexity. Traditional models of neural nets mimic the characteristics of biological neural nets to develop connections, but fail to incorporate the analog computations that influence the molecular-scale behavior of their carbon-based counterparts.
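
One simple way to combine the two modes of computation is a leaky integrate-and-fire model, in which the membrane potential integrates weighted inputs continuously while the output spike remains all-or-none. The sketch below is illustrative only; its parameters are arbitrary, and the 'modulation' factor stands in very loosely for the ambient neurotransmitter level described above.

    # Illustrative only: a leaky integrate-and-fire neuron mixing analog and digital behavior.
    def lif_neuron(inputs, weights, threshold=1.0, leak=0.9, modulation=1.0):
        """inputs: a list of per-timestep input vectors; returns a list of 0/1 spikes."""
        potential, spikes = 0.0, []
        for x in inputs:
            drive = modulation * sum(w * xi for w, xi in zip(weights, x))
            potential = potential * leak + drive    # graded (analog) integration with leak
            if potential >= threshold:              # fire-or-none (digital) decision
                spikes.append(1)
                potential = 0.0                     # reset after firing
            else:
                spikes.append(0)
        return spikes

    # A steady input drives periodic firing; raising 'modulation' makes the same
    # input fire the neuron more often.
    print(lif_neuron([[0.3, 0.2]] * 10, weights=[1.0, 1.0]))
    print(lif_neuron([[0.3, 0.2]] * 10, weights=[1.0, 1.0], modulation=2.0))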

However, composite analog-digital neural models have brought considerable progress in this area, and the problem should continue to recede as these models grow more adept at capturing the large number of neurotransmitters and hormones that influence cognitive function. Mapping the hundreds of specialized areas that control the functioning of the human brain has been another barrier to progress in artificial neural networks, but it is one that should fall as the capabilities of MRI scanning increase.

Soon it will be possible to achieve a level of resolution precise enough to view individual neurons firing in real time. This will be a boon for neuroscientists and AI programmers alike, as neural pathways can be precisely mapped and their functioning analyzed. It is increasingly apparent that the development of artificial brains with abilities comparable to our own is becoming a feasible goal.


BRAIN BUILDING

"Whatever one man is capable of imagining, other men will prove themselves capable of realizing."

Jules Verne

A prominent example of artificial evolution is taking place in Brussels, Belgium at a private blue-sky research laboratory, Starlab, under the direction of Prof. Hugo de Garis, a visionary researcher in the development of GA-evolved neural nets. Prof. de Garis has initiated the mammoth undertaking of constructing a 75 million neuron artificial brain that shows the potential to transform the face of artificial intelligence.

This network will consist of roughly one million modules of cellular automata (CAs) which will grow and evolve at electronic speeds inside special hardware called a CAM-Brain Machine (CBM). The CBM will update CA cells at a rate of 130 billion per second and can evolve a neural-net module in about one second. These modules are then assembled into humanly defined architectures and downloaded into a large RAM space updated in real time by the CBM. This massive artificial brain will be used to remotely control a robotic kitten body and will be capable of complex behaviors in response to external sensory stimuli.
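
For readers unfamiliar with cellular automata, the sketch below shows an elementary one-dimensional automaton (Wolfram's Rule 110), in which a simple local rule applied in parallel to every cell generates complex global structure. It is not the model used in the CAM-Brain Machine, only an illustration of the underlying principle; the grid size and number of generations are arbitrary.

    # Illustrative only: Wolfram's Rule 110, an elementary 1-D cellular automaton.
    RULE = 110

    def step(cells):
        """Compute the next generation from each cell's three-cell neighborhood."""
        n = len(cells)
        return [(RULE >> ((cells[(i - 1) % n] << 2) |
                          (cells[i] << 1) |
                          cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 60 + [1] + [0] * 19      # a single live cell as the initial state
    for _ in range(20):                     # print 20 generations
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)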

If the field progresses as many expect, the second generation of artificial brains will be completed around 2006 and possess 10 billion neurons; the third will near completion in 2011 with 1,000 billion neurons. The increasing pace of technological development demands an examination of the potential societal implications of AI research. de Garis believes future advances in technology will enable us to create "artilects," or artificial intellects, with many times our own memory capacity. According to de Garis, the question of whether or not to build these artilects will likely cause a major division in humanity and lead to a global war before the end of the 21st century.


DEEP FUTURE

As these models of the brain become increasingly indistinguishable from their carbon-based counterparts and develop more complex behavioral patterns, consciousness could appear through the interaction of many smaller-scale systems - just as it does in the biological brain - bringing about the age of artilects de Garis predicts.

Silicon-based intelligence would possess a number of advantages over carbon-based intelligence, including the ability to redesign its own architecture to maximize efficiency. It could evolve at an astonishing pace, catalyzing a societal paradigm shift as it rapidly surpasses human performance at tasks we have long dominated.

The distinction between man and machine will blur, with the possibility of humankind merging with its own creations. Mankind may be compelled to evaluate the possibility of granting recognition to a new species of sentient, silicon-based lifeforms of our own creation.

These and many other questions confront us as we stand on the brink of what may prove to be the most profound leap in our collective memory. We are faced with the possibility of a new era, the dawn of an age in which mankind is no longer the most advanced species on the planet.


"The best way to predict the future is to invent it."



REFERENCES

De Garis, Hugo. "The 21st Century Artilect: Moral Dilemmas Concerning the Ultra Intelligent Machine," Revue Internationale de Philosophie, May 1990. Online at www.starlab.org/neurons (updated July 2000).


Franklin, Stanley P. Artificial Minds, MIT Press, Cambridge, Massachusetts, 1995.

Mill, John Stuart. A System of Logic, Longmans, Green, Reader, and Dyer, London, 1843 (eighth edition, 1872).

Moravec, Hans. Mind Children, Harvard University Press, Cambridge, Massachusetts, 1988.

Paul, Gregory S. and Cox, Earl D. Beyond Humanity: Cyber Evolution and Future Minds, Charles River Media, Inc, Rockland, Massachusetts, 1996.

Penrose, Roger. Shadows of the Mind: A Search for the Missing Science of Consciousness, Oxford University Press, New York, 1994.

Sayre, Kenneth M. Consciousness: A Philosophical Study of Minds and Machines, Random House, Inc., New York, 1969.

Torrance, Steve. The Mind and the Machine: Philosophical Aspects of Artificial Intelligence, Halsted Press, New York, 1984.

Winston, Patrick Henry. Artificial Intelligence, Third Edition, Addison-Wesley Publishing Company, Reading, Massachusetts, 1992.



Contact: christaltman@artilect.org, chris@umsl.edu

Homepage: http://www.umsl.edu/~altmanc/









 
