Consciousness, Natural Selection, and Knowledge
Cryonics Magazine, February 2013
This is the first entry in a new series of short articles about neuroscience and its implications for the field of human cryopreservation and life extension. In this article I discuss the relationship of the brain to consciousness and knowledge acquisition before venturing into more specific and practical topics.
What is consciousness? Most of us understand the word in context, but when asked to define it we are suddenly at a loss for words or at best we offer a description that seems wholly inadequate. Scientists, philosophers, and religious scholars have debated the source, meaning, and nature of consciousness for all of recorded history. But with the rise of neuroscience over the past few decades, it now seems as though explaining the nature and mechanisms of conscious experience in neurobiological terms may be an attainable goal.
The recent work on consciousness by neuroscientists has left certain philosophers more frustrated than ever before, including the likes of Thomas Nagel and David Chalmers. They suspect that consciousness may be quite different and separate from the brain circuitry proposed to underlie it.
Consciousness has appeared to be a strange and undefinable phenomenon for a very long time. Daniel Dennett captured the feeling very nicely in the 1970s:
“Consciousness appears to be the last bastion of occult properties, epiphenomena, immeasurable subjective states — in short, the one area of mind best left to the philosophers. Let them make fools of themselves trying to corral the quicksilver of ‘phenomenology’ into a respectable theory.” (1)
Consciousness no longer appears this strange to many researchers, but the philosophers just mentioned continue to hold that it cannot be reduced to the brain processes active in cognition. A common philosophical complaint is that any neurobiological theory of consciousness will always leave something out. What it will always leave out is the feeling itself — the feeling of what it is like to be aware, to see green, to smell flowers, and so on (Nagel, 1974; Chalmers, 1996). These are the so-called qualia — the experiences themselves — and these are what is important about consciousness. The philosopher making this argument may go on to conclude that no science can ever really explain qualia because it cannot demonstrate what it is like to see green if you have never seen green. Ultimately, they argue, consciousness is beyond the reach of scientific understanding.
By contrast, neuroscientists take for granted that consciousness will be domesticated along with the rest of cognition. Indeed, this work tends to assume that neuroscience will not only identify correlates of consciousness, but will eventually tell us what consciousness is. By and large, these neuroscientific efforts have been directed toward cortical regions of the brain, cortical pathways, and cortical activity. This is due, in part, to the prevalence of clinical studies of human patients with region-specific cortical lesions that are correlated with deficits in specific kinds of experiences. This tendency to focus on the cortex may also reflect the common knowledge that humans possess the highest level of consciousness of all animals and have proportionally more cortex than our closest relatives (and — so the supposition goes — therein lies the difference in levels of consciousness).
Another theory of consciousness, offered by Dr. Gerald M. Edelman, aims to resolve this “divorce” between science and the humanities over theories of consciousness. The premise of Edelman’s theory is that the field of neuroscience has already provided enough information about how the brain works to support a scientifically plausible understanding of consciousness. His theory attempts to reconcile the two positions described earlier by examining how consciousness arose in the course of evolution.
In his book on the topic, Second Nature: Brain Science and Human Knowledge, Edelman says:
“An examination of the biological bases of consciousness reveals it to be based in a selectional system. This provides the grounds for understanding the complexity, the irreversibility, and the historical contingency of our phenomenal experience. These properties, which affect how we know, rule out an all-inclusive reduction to scientific description of certain products of our mental life such as art and ethics. But this does not mean that we have to invoke strange physical states, dualism, or panpsychism to explain the origin of conscious qualia. All of our mental life, reducible and irreducible, is based on the structure and dynamics of our brain.”
In essence, Edelman has attempted to construct a comprehensive theory of consciousness that is consistent with the latest available neuroanatomical, neurophysiological, and behavioral data. Calling his idea Neural Darwinism, Edelman explains that the brain is a selection system that operates within an individual’s lifetime. Neural Darwinism proposes that, during neurogenesis, an enormous “primary repertoire” of physically connected populations of neurons arises. Subsequently, a “secondary repertoire” of functionally defined neuronal groups emerges as the animal experiences the world. A neural “value system,” developed over the course of evolution and believed to be made up of small populations of neurons within deep subcortical structures, is proposed to assign salience to particular stimuli encountered by the animal in order to select patterns of activity.
For example, when the response to a given stimulus leads to a positive outcome the value system will reinforce the synaptic connections between neurons that happened to be firing at that particular moment. When a stimulus is noxious, the value system will similarly strengthen the connections between neurons that happened to be firing at the time the stimulus was encountered, thus increasing the salience of that stimulus. When a stimulus has no salience, synaptic connections between neurons that fired upon first exposure to that stimulus will become weaker with successive exposures.
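To make this selectional logic concrete, here is a toy sketch (in Python) of value-modulated Hebbian plasticity along the lines of the paragraph above. It is an illustration only, not Edelman's actual model: the `update` rule, the learning and decay rates, and the two stimulus groups are invented for the example. Connections between neurons that fire together are strengthened when a value signal marks a stimulus as salient (whether rewarding or noxious), and they weaken with repeated exposure to a stimulus of no salience.

```python
import numpy as np

# Toy sketch (not Edelman's model): value-modulated Hebbian selection.
rng = np.random.default_rng(0)
n_neurons = 8
# "Primary repertoire": an initially varied set of connection strengths.
weights = rng.uniform(0.4, 0.6, size=(n_neurons, n_neurons))
np.fill_diagonal(weights, 0.0)

def update(weights, activity, value, lr=0.1, decay=0.02):
    """One plasticity step.
    activity: 0/1 vector of neurons that fired for this stimulus.
    value: salience signal from the (hypothetical) value system;
           positive when the stimulus is salient, zero otherwise.
    """
    coactive = np.outer(activity, activity)   # pairs that fired together
    np.fill_diagonal(coactive, 0)
    if value > 0:
        weights += lr * value * coactive      # selection: strengthen co-active pairs
    else:
        weights -= decay * coactive           # no salience: connections weaken
    return np.clip(weights, 0.0, 1.0)

# A salient stimulus repeatedly activates neurons 0-2; a neutral one, neurons 5-7.
salient = np.array([1, 1, 1, 0, 0, 0, 0, 0])
neutral = np.array([0, 0, 0, 0, 0, 1, 1, 1])
for _ in range(20):
    weights = update(weights, salient, value=1.0)
    weights = update(weights, neutral, value=0.0)

print("mean weight within salient group:", weights[:3, :3][np.triu_indices(3, 1)].mean())
print("mean weight within neutral group:", weights[5:, 5:][np.triu_indices(3, 1)].mean())
```

After repeated exposures, the group driven by the salient stimulus ends up strongly interconnected while the neutral group's connections fade, which is the selectional effect, within a lifetime, that the theory describes.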
Importantly, the mapping of the world to the neural substrate is degenerate; that is, no two neuronal groups or maps are the same, either structurally or functionally. These maps are dynamic, and their borders shift with experience. And finally, since each individual has a unique history, no two individuals will express the same neural mappings of the world.
This brings us to the three tenets of Edelman’s theory:
1. Development of neural circuits leads to enormous microscopic anatomical variation that is the result of a process of continual selection;
2. An additional and overlapping set of selective events occurs when the repertoire of anatomical circuits that are formed receives signals because of an animal’s behavior or experience;
3. “Reentry” is the continual signaling from one brain region (or map) to another and back again across massively parallel fibers (axons) that are known to be omnipresent in higher brains.
Edelman thus believes that consciousness is entailed by reentrant activity among cortical areas and the thalamus and by the cortex interacting with itself and with subcortical structures. He suggests that primary consciousness appeared at a time when the thalamocortical system was greatly enlarged, accompanied by an increase in the number of specific thalamic nuclei and by enlargement of the cerebral cortex — probably after the transitions from reptiles to birds and separately to mammals about a quarter of a billion years ago. Higher-order consciousness (i.e., consciousness of consciousness), on the other hand, is due to reentrant connections between conceptual maps of the brain and those areas of the brain capable of symbolic or semantic reference — and it only fully flowered with hominids when true language appeared. Regarding language and its relationship to higher-order consciousness, Edelman explains:
“We do not inherit a language of thought. Instead, concepts are developed from the brain’s mapping of its own perceptual maps. Ultimately, therefore, concepts are initially about the world. Thought itself is based on brain events resulting from the activity of motor regions, activity that does not get conveyed to produce action. It is a premise of brain-based epistemology that subcortical structures such as the basal ganglia are critical in assuring the sequence of such brain events, yielding a kind of presyntax. So thought can occur in the absence of language….
The view of brain-based epistemology is that, after the evolution of a bipedal posture, of a supralaryngeal space, of presyntax for movement in the basal ganglia, and of an enlarged cerebral cortex, language arose as an invention. The theory rejects the notion of a brain-based, genetically inherited, language acquisition device. Instead, it contends that language acquisition is epigenetic. Its acquisition and its spread across speech communities would obviously favor its possessors over nonlinguistic hominids even though no direct inheritance of a universal grammar is at issue. Of course, hominids using language could then be further favored by natural selection acting on those systems of learning that favor language skills.”
Such a theory is attractive because it does not simply concentrate on conscious perception, but it also includes the role of behavior. We do well to keep in mind that moving, planning, deciding, executing plans, and more generally, keeping the body alive, is the fundamental business of the brain. Cognition and consciousness are what they are, and have the nature they have, because of their role in servicing behavior.
An important element of Edelman’s theory that consciousness is entailed by brain activity is that consciousness is not a “thing” or causal agent that does anything in the brain. He writes that “inasmuch as consciousness is a process entailed by neural activity in the reentrant dynamic core it cannot be itself causal.” This process causes a number of “useful” illusions such as “free will.”
Edelman’s theory of consciousness has further implications for the development of brain-based devices (BBDs), which Edelman believes will be conscious in the future as well. His central idea is that the overall structure and dynamics of a BBD, whether conscious or not, must resemble those of real brains in order to function. Unlike robots executing a defined program, the brains of such devices are built to have neuroanatomical structures and neuronal dynamics modeled on those known to have arisen during animal evolution and development.
Such devices already exist, one example being the “Darwin” device under development by The Neurosciences Institute. Darwin devices are situated in environments that allow them to make movements to sample various signal sequences and consequently develop perceptual categories and build appropriate memory systems in response to their experiences in the real world.
And though Edelman recognizes that it is currently not possible to reflect the degree of complexity of the thalamocortical system interacting with a basal ganglia system, much less to have it develop a true language with syntax as well as semantics, he nevertheless suggests that someday a conscious device could probably be built.
More ambitiously, Edelman also thinks that contemporary neuroscience can contribute to a naturalized epistemology. The term “naturalized epistemology” goes back to the analytical philosopher Willard Quine and refers to a movement away from the “justification” (or foundations) of knowledge toward an emphasis on the empirical processes of knowledge acquisition. Edelman is largely sympathetic to Quine’s project, but he provides a broader evolutionary framework for epistemology, one that also admits internal states of mind (consciousness).
1. Daniel C. Dennett, “Toward a Cognitive Theory of Consciousness,” in Brainstorms: Philosophical Essays on Mind and Psychology (Montgomery, VT: Bradford Books, 1978).