Consciousness: Can Computers Be Conscious?

Leo Segedin   |   January 11, 2017


Over the last several years, billions of dollars have been spent on developing self-directed computers, machines that would duplicate the functions of the human brain. The Pentagon alone will spend $3 billion over the next five years to develop such machines for military purposes. Neuroscientists believe that it is just a matter of time before consciousness is duplicated. They expect that they will soon understand how the brain produces consciousness and be able to simulate it on computers. Computers will then ‘imitate brain circuits in silicon’. Computer programs will be ‘artificial architectures of electronic chips that mimic the operations of consciousness in real neurons and circuits’. ‘Artificial Intelligence’ researchers claim to have already developed computers which have ‘the ability to learn abstract concepts and approach human-level performance on some tasks’. In fact, so passionate are some believers in the future of consciousness research that, like evangelists, they promulgate their beliefs and demean people who disagree. For these neuroscientists, science has become ideology, and scientific theory, dogma. For example, the famous neuroscientist Christof Koch, President and Chief Scientific Officer of the Allen Institute for Brain Science, wrote in the September 2016 Scientific American:
Some philosophers, mystics, and other confabulatores nocturni pontificate about the impossibility of ever understanding the true nature of consciousness, of subjectivity. Yet there is little rationale for buying into such defeatist talk and every reason to look forward to the day when science will come to a naturalized, quantitative and predictive understanding of consciousness and its place in the universe.
But is this optimism justified? Can computers be conscious? I will say no. I argue in this paper that, at the present time, such machines are impossible. First, scientists cannot agree on what consciousness is, and it is unlikely that they ever will. Although many different definitions have been proposed, none offers the possibility of a computer program that might produce consciousness. Second, at the present time there is no physical or theoretical evidence that anything not alive can be conscious. Although most literature on computer consciousness ignores this fact, there is no convincing reason to believe that any machine, regardless of how complex, can create conscious experience. Computers may have simulated brain processes, but they have never duplicated them, and there is no existing evidence that they ever will. The belief that they can is based on a misleading model of the computer. Thus, all present efforts to create such machines are misguided.


Everyone who is not unconscious or dead knows what consciousness is. Even though we don’t understand it, we know what it is because we are aware that we have it. We know when we are conscious and remember when we were not. Regardless of whether it is unreliable or irrational, it is, arguably, along with language, what makes us human. It is what makes us aware that we are human. The problem, then, is why consciousness is so difficult to explain if everyone knows what it is. Although it has been studied in terms of biology, neurology, theology, philosophy, physics, chemistry, information theory, even technology, the consciousness we experience remains a controversial subject. For example, one researcher claims that his consciousness consists of:
  1. A feeling of presence in an external world;
  2. The ability to remember previous experiences accurately, or even to imagine events that have not happened;
  3. The ability to decide where to direct my focus;
  4. Knowledge of the options open to me in the future; and
  5. The capacity to decide what actions to take.
According to other researchers, however, ‘emotions, memory, self-reflection, language, sensing the world, and acting in it’ are all irrelevant. They claim that consciousness exists independent of these qualities. And, on a more theoretical level, some neuroscientists define consciousness in terms of what they call ‘Integrated Information Theory’, in which consciousness is a state of matter which can’t be computerized, while others conjecture a ‘Global Workspace Theory’ in which it can. Thus, there is no agreed-upon definition. In spite of the number of ways it has been defined, consciousness remains a contradictory, rather mysterious subject.


Theories defining consciousness have a long history. For hundreds of years, philosophers have pondered the question of what it means to be conscious. They are still asking this question today. Just what is consciousness? Is it a function of the human body? Is it independent of the brain? Is it an immaterial spirit? If it is, is it a soul or some other entity which can survive the death of the body? If it is not physical, does it even exist? Is it the product of computer-like calculations? Is it an ‘emergent phenomenon or an ordinary feature of brain processes, in the same way that photosynthesis, digestion, and lactation are ordinary features of biological systems?’ And, most recently, is it a different form of matter or a property of all material? Does it consist of nuclear ‘bits’ of information, as the neuroscientist and psychiatrist, Giulio Tononi, maintains?

Some brain theoreticians have argued that, because it has no objective form, consciousness does not exist, is illusory or is unknowable. But most neuroscientists believe that, because physiological processes in the brain underlie consciousness, learning the neurology of the brain will enable them to explain consciousness. They hope ultimately to correlate every subjective experience with particular neurons in the brain. In maintaining that they are approaching the problem of consciousness objectively, they generally ignore the results of studies of introspection by psychologists. For them, introspection without physical correlation is an unreliable source of information about consciousness. They point out that unconscious, possibly irrational brain processes precede conscious awareness and action. Also, behavior can be guided by unconscious perception. Autism, schizophrenia and other mental aberrations distort awareness. Probes, drugs, electric shock and concussion can affect its processes and the general character of thought. On some of these views, all we can know about ourselves is based on what we perceive in others; there is no privileged, first-person point of view. From this perspective, introspection does not even give us access to our own minds.


If introspection is an unreliable approach to consciousness, what do these scientists propose as an objective, reliable approach? What are the describable, physiological processes which underlie consciousness? What should neuroscientists study? Without models of the brain intended to explain such processes, neuroscientists would not know what to look for, but their present models have not led to any consistent conclusions about the nature of consciousness or the brain. Some consciousness experts maintain that the properties critical to the understanding of consciousness are fundamentally biological; that is, only an actual brain, or something organic like it, can be conscious. Others maintain that the critical properties are not biological but functional. They claim that mental states are identical to certain functional states or arise from functional organization. These different theories have led to incompatible assumptions. Biological forms cannot be duplicated; functions can. If consciousness is fundamentally biological, computers can’t be conscious (there are no biological computers), but if brain functions can be duplicated, they can. Thus, from these points of view, whether computers can be conscious remains an unresolved, theoretical question.


Most neuroscientists assume that human consciousness is a biological product, but they don’t know how the brain produces it. As a result, they approach the problem by asking: if we don’t understand how the brain causes consciousness, what is the brain like that we do understand and that might explain it? The brain then takes on the characteristics of whatever it is likened to. Over the last 100 years, the brain has been likened to switchboards and telephone exchanges, to radio networks and, now, to computers. At one time, consciousness was likened to a theater, in which an observer watches his perceptions. Later, according to Freud, it was like a hydraulic system, in which nerves were likened to pipes, conveying impulses as if they were pressure waves of water. In the first part of the 20th century, consciousness became a collection of behavioral reflexes and, recently, computer software programs. The latest models are based on quantum processes and on information theory. Each of these models leads to a different line of research.


The dominant way to make consciousness comprehensible today is to reduce the brain to its functions. Because one of the functions of the brain is its ability to compute, scientists have modeled the brain after the computer. For them, the brain becomes a biological computer. And, since the computer simulates the functions of the brain, the computer becomes a mechanical brain. For many neuroscientists, ‘consciousness consists of information processors and digital computer programs.’ In this model, ‘our thoughts are digital computer programs, our mental states, computer states and mental processes are computational processes.’ Accordingly, because biological matter is unnecessary, there is no essential connection between consciousness and the brain. If computer simulations of mental phenomena can literally produce mental phenomena, the brain is unnecessary. In other words, a simulation will produce the same results as what it simulates. In this model, the brain is only one kind of possible conscious computer among many others. Any other computer with the right program would also be conscious. Such computers could be made of any material; if they are structured like the brain, they will be conscious. According to Koch, ‘And once you’ve reduced your consciousness to patterns of electrons, others will be able to copy it, edit it, sell it, or pirate it. It might be bundled with other electronic minds. And, of course, it could be deleted.’


In this model, the brain and the computer take on each other’s characteristics. Because much of the language describing them is the same, they become metaphors for each other. However, this kind of thinking creates a problem. When metaphors dominate thinking, metaphoric language is unthinkingly assumed to describe its subject literally. Objects are described as if they were their metaphors. Thus, when the brain becomes a metaphor for computers, computers take on human ‘abilities’. They ‘think’, ‘plan’, ‘learn’ and ‘perceive’. And, accordingly, brains as computers ‘transform inputs into outputs’ and ‘process’, ‘store’ and ‘retrieve’ ‘information’, and so forth. Such metaphors are valuable, and they have led to useful insights, but they can also be deceptive. Things which are similar in some ways may differ significantly in others. By attributing characteristics of the brain to the computer, they create misleading models of the computer. The idea that an electronic, digital computer can be conscious is based on the belief that, because the brain is a computer and is conscious, computers can also be conscious. The difference between a brain and a computer then becomes merely one of complexity. Accordingly, when computers become as complex as the brain, they will be conscious. And when they become more complex than the brain, they will not only be smarter but also more conscious, with all the possible beneficial or dire consequences. But although these assumptions rest on the belief that the fundamental nature of the brain is its functions, there is no theoretical or physical evidence to support this view. In fact, there is no evidence that even the most sophisticated functions make brains or computers conscious. No scientific theory yet explains how functions might generate consciousness.
No existing theory explains how neurons “code” information or how categories of thought, let alone the extreme complexities of consciousness, emerge from the patterning of neurons. And, most fundamental of all, functionalist theories cannot explain how mindless matter could be conscious experience.


The computer as a model for the brain is a metaphor. The belief that a human is a complex machine is also based on a metaphor. But the brain is not a computer, and humans are not machines. Similarities are not identities. Simulations are not duplications. And, most fundamentally, although they have some characteristics in common (both can carry electrical charges), silicon chips are not organic neurons. The brain metaphor attributes to non-living things characteristics of living things that they do not and cannot possibly have. The belief that the difference between a human brain and a computer is the complexity of their structure confuses metaphors with what they represent. Saying that scientists are simulating brain processes when they write computer programs falsely implies that computer programs duplicate brain processes. It creates the belief that, if scientists can simulate a brain, they have created an identical duplicate that will function as a brain. But biological structures are not machines. Mechanical neural networks simulate, but do not duplicate, biological neural networks.

A computer simulation of digestion cannot digest; a computer simulation of fire won’t burn; a computer simulation of a hurricane will not threaten Miami. Similarly, although it may process information, a computer simulation of the brain cannot think. Although computer simulations of brain processes are not as physical as digestion, fire and hurricanes, they are no more real as indications of possible consciousness. While metaphors may indicate similarities of brains to computers, the brain retains infinitely more complex and often mysterious differences. Unlike computer programs, consciousness is more than its structure. The fact that computers can calculate faster than the brain and have duplicated and vastly improved on many human physical functions does not mean that they will some day know what they are doing. Thus, to discuss simulations as if they were the same as what they simulate creates false realities.
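The point can be made concrete with a toy sketch (hypothetical code; the model and its constants are invented purely for illustration). A program that ‘simulates’ fire does nothing but update numbers that describe burning: no heat is produced and no fuel is consumed.

```python
def simulate_fire(fuel_kg: float, steps: int) -> list[float]:
    """Return a list of simulated flame temperatures (degrees C).

    This is only arithmetic about fire: the 'fuel' is a float that
    shrinks, the 'temperature' a float that rises and falls. The
    constants are invented for illustration, not physics.
    """
    temperature = 20.0  # ambient starting temperature
    temps = []
    for _ in range(steps):
        if fuel_kg > 0:
            fuel_kg -= 0.1                                  # "consume" fuel: mere subtraction
            temperature = min(temperature + 150.0, 900.0)   # "heat up": mere addition
        else:
            temperature = max(temperature - 100.0, 20.0)    # "cool down" once fuel is gone
        temps.append(temperature)
    return temps

temps = simulate_fire(0.5, 8)  # the numbers change; nothing burns
```

However detailed such a model is made, refining it only changes the numbers it produces; the gap between describing combustion and combusting remains.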

It can even be argued that, in spite of its name, a computer does not literally compute. Humans compute. Like an automobile, a computer is a machine that implements the goals of a human. It has no goals of its own. Computing is goal-oriented; it implies self-direction and motivation, the conscious search for results, but there are no self-determined motivations involved in electronic computer programs. Computers must be programmed by humans to achieve results. Unlike neural networks in the brain, computer programs are lifeless, electronic (chip-based) structures of algorithms, which they process mechanically. Although algorithms can be structured to describe the syntax of language, such structures have no intrinsic meanings. Only humans can create and understand meanings; unless we consider simplified mathematical, computational formulas to be meanings, semantics is not computable. Such programs involve only mathematical language, and consciousness consists of more than language.2 Also, simulations of brain structures in different materials cannot be duplications. Although silicon-based computers can perform sophisticated human tasks, there is no evidence that silicon or similar materials can duplicate the high-level, carbon-based, biological mental processes that function in humans. Technically speaking, ‘carbon molecules form stronger, more stable chemical bonds than silicon, which allows carbon to form an extraordinary number of compounds, and unlike silicon, carbon has the capacity to more easily form double bonds.’ It is highly probable that such physical characteristics of carbon, as found in human beings, are necessary for consciousness. Therefore, the possibility of duplicating the characteristics and possible functions of carbon in silicon is purely hypothetical. Theories which treat silicon as if it were carbon create false realities. They result in unjustifiable predictions.
Unless we define consciousness as already existing in some material other than the brain, there are no examples of a computer based on structures of silicon or other materials duplicating consciousness or even coming close. Such a possibility is a philosophical projection based on a misleading analogy. It exists only in words.

Structure is necessary, but insufficient. Even a dead human body, which is structurally identical to a living body, cannot be conscious. It is missing what makes it conscious. It is missing life, which, once it is gone, cannot be revived. Thoughts are generated in some unknown way by live neurons in connection with other live neurons. Dead neurons do not have this capacity. For the same reason, neither do artificial neurons. Machines with silicon neurons are not alive. Thus, like lifeless bodies, lifeless computers cannot be conscious.


Unless we accept the notion that mind creates matter, that it is a quality of all matter or is separate from it, it is most likely that consciousness is limited to brain activity. When brain activity stops, consciousness disappears. We can assume then that it is fundamentally a biological process and, therefore, a product of evolution. As a product of evolution, as in all living creatures, its existence and fundamental character must have been determined by its survival value. Consciousness then is, fundamentally, an awareness of what is necessary for survival. As such, it cannot be a disinterested awareness of the world. For the same reason, it cannot be converted into mathematical formulas, as some neuroscientists propose, without distorting its fundamental nature. Perception, the source of outside information for consciousness, selects what is necessary for survival and structures the resulting information functionally. Therefore, it is not objectively descriptive, but reflects human needs.

But also, as in any individual human being, consciousness is independent of its survival value. Not only does it vary according to its individual history and the resulting physical state of the brain; during mental illness, its content may actually be self-destructive.

Consciousness exists beyond its biology. It is not only a product of our genes, but is also our awareness of what has been recorded, ordered and created in our brains. Therefore, the content of human consciousness must consist of the language, memories, perceptions, feelings and motivations determined by our individual, life experiences.

We don’t think about being conscious; we think about what we are conscious of. Unless there is an aberration in our mental processes, we are oblivious to it. Of course, there is a physical world outside our bodies. But, although this world constantly impinges on our bodies, our consciousness is our only access to it. The physical world exists for us only as what we are conscious of. We assume the reality of this world, but do not normally consider that our awareness of it is a product of consciousness. Although we can distinguish between objects which exist independent of our bodies and those which are products of our minds, our awareness of people and galaxies is as much a product of consciousness as is our awareness of our thoughts and feelings. In fact, in a different context, some physicists have suggested that the act of looking itself affects reality. Thus, for all practical purposes, what affects our consciousness affects our reality.

We can represent objects in the outside, physical world because they have forms which we can perceive. We can study the content of consciousness. We can express what we are conscious of, but we take for granted the ‘we’ that is doing the expressing. Studying the ‘we’ leads to infinite regress. Although it may show signs of its underlying brain activity in PET and CT scans, MRIs and EEGs (as a rash is a sign of measles), consciousness itself has no recognizable form. Unlike a book with content, it is not an entity which can be seen, described or studied from the outside. It has no appearance. There is nothing to perceive or represent. Like the air we breathe, it is invisible. Because it is entirely internal and personal, its content is not accessible to scanning machines. Such machines may determine physiological correlations with mental processes, but they cannot read our minds. We can only experience the content of our own consciousness; we have no direct access to anyone else’s. Thus, we know consciousness directly only by studying our own experiences and, vicariously, by reading what others have written about theirs. In other words, our only direct access to consciousness is through introspection. Like all other humans, scientists cannot have direct access to any consciousness other than their own. As a result, they study its observable, physical correlates and the introspective reports of others. And, since there is nothing objective to perceive, they theorize.


In conclusion, since I have no crystal ball, I cannot say that conscious computers are impossible. However, I can say that, unless we define consciousness to fit the limitations of recent theories, there is no existing evidence that they can exist. On the basis of what we know now, computers cannot know what they are doing and act on that understanding. Electronic, digital computers are not alive and cannot be conscious, because consciousness is not and cannot be the product of mechanical, silicon-based computations. Intelligence is not consciousness. A machine with an IQ of 500, or even 5,000, will not be conscious. The most sophisticated, autonomous, problem-solving program does not require the self-awareness of consciousness. It does not have to know what it is doing. It only requires the means to calculate. Although conscious computers are eagerly anticipated, there is no physical or theoretical evidence that they will ever be made. The belief that they will is an act of faith. We don’t even know how inert matter becomes biological matter, let alone how biological matter creates consciousness. Also, since there is no theory which explains how non-computational processes might create consciousness, the belief that they will be able to do so is also an act of faith. This includes the belief that any physical technology, including quantum processes or non-biological information ‘bits’, will duplicate biological, conscious processes. Nor will a computer ever be able to include the unconscious; feel pain, lust and hunger; or create or respond emotionally to works of art. It may be possible for a computer to duplicate the patterns underlying the structure of such art works, but it will never be able to distinguish a great work from a mediocre one. It may be possible for such computer programs to pass a Turing test, but not to fool any sensitive person.
Although computer consciousness is a scientific objective, at the present time, it is just as much a faith based fantasy as is the Holy Grail.


1. Chimpanzees are self-aware, but, without language, their self-consciousness is limited to recognizing themselves in reflections in mirrors and anticipating the impact of their actions on their environment. Dolphins and elephants also recognize themselves in reflections. Dolphins not only recognize themselves; Russian researchers maintain that they communicate with each other in an elaborate, spoken language. What they talk about is unknown. Other animals and birds might also be self-conscious, but, because their awareness is so primitive, I would maintain that, although their similarities are provocative, their differences are categorically more significant.

2. In Turing’s time, a computer was a human. There were no mechanical computers in 1950.
