Computers Are Probably Not Conscious, But Nobody Knows Why
This post is the text of a lecture that I will give to an audience of non-specialists at the University of Dallas.1
The question of whether a machine might be conscious in the same way that we humans are conscious, even though this controversy has been spurred to a fever pitch by recent technological advances, is really nothing new. Already in 1637, Descartes was entertaining the possibility that machines could be, as he puts it, “true men,” and he takes rational consciousness as the mark of authentic humanity (Discourse on Method, Hackett, p. 32). Though he spent some time considering the question, Descartes is quick to dismiss the very possibility of a machine possessing bona fide human consciousness for two reasons. First, Descartes claims that no machine could ever “use words or other signs, or put them together as we do in order to declare our thoughts to others,” and in particular “it could not arrange its words differently so as to respond to the sense of all that will be said in its presence, as even the dullest man can do” (p. 32). That is, Descartes takes linguistic competence as the mark of distinctive human consciousness, and he denies that any machine could ever match our ability to use words in the twisting fluidity of ordinary conversation. Humans, Descartes rightly observes, have an uncanny ability to say the right thing in a potential infinity of contexts that come and go with immeasurable speed. Thus, even if a machine could be made to “utter words,” Descartes is certain that no device will be able to do so with the contextual sensitivity of a human being. Interestingly, already in the 17th century, Descartes had in mind something akin to the modern Turing Test for machine intelligence, which takes linguistic competence as its sole criterion, and he argues that no machine will pass such a test, because artificial things will never be able to match our linguistic dexterity.
Second, Descartes grants that some machines may eventually outperform humans in various tasks, but “such machines would inevitably fail in other tasks” because “it is for all practical purposes impossible for there to be enough different organs in a machine to make it act in all the contingencies of life in the same way as our reason makes us act” (p. 32). In other words, Descartes claims that it is practically impossible to construct an inorganic system so sophisticated that it could respond to all the novelties with which humans deal effectively in our everyday lives. Once again, Descartes takes rational consciousness as a “universal instrument” that is not limited to any finite set of circumstances for its application, and there is no way to design a machine with enough working parts to address all these possibilities. Humans have a distinctive ability to figure out what to say and do “on the fly,” and this power, according to Descartes, is in principle unprogrammable. We might say that Descartes insists that human intelligence is general intelligence, and the generalities are so fluid as to rule out the possibility of any artificial general intelligence. These sorts of arguments, which begin with the open-ended contextual sensitivity of human rationality and infer the impossibility (or steep improbability) of machine consciousness, have been taken up by many subsequent AI deniers, many of whom would not otherwise be at all sympathetic to Descartes’ broader program in the philosophy of mind.
While Descartes claims to rule out machine consciousness in principle, as a sort of metaphysical impossibility, his claims seem to be based on what he takes to be technological impossibilities. That is, Descartes does not do much to convince us that machines cannot, as a matter of absolute necessity, be conscious, but only that it is impossible for us to assemble enough working parts to constitute such a machine. No doubt, machine intelligence poses vexing engineering problems, which might very well outstrip our finite abilities to put such a system together. That, however, does not mean that machine consciousness is impossible as such, but only that we cannot pull it off (or it is highly unlikely that we can do so). One is left wanting to ask Descartes whether God or maybe an evil demon could design and implement a machine that satisfies Turing tests for open-ended competence. If so, then maybe we are not, by Descartes’ own standards, all that different from these remotely possible thinking machines. Moreover, even if we are not moved by extravagant thought experiments asking us to speculate about what omniscient and omnipotent beings might be able to design, the subsequent centuries of technological progress have done a great deal to temper any confidence we might muster for Descartes’ argument from technological impossibility. However remote we might think the current state of the art remains from the fabled goal of AGI, it is now much harder to dismiss that ambition as a mere fable. No doubt, even the most sophisticated LLMs and skill-learning robots do not exhibit the trans-contextual linguistic dexterity or generalized practical wherewithal that Descartes rightly demands. Nevertheless, anyone who has been tasked with grading college essays in recent years must admit that it is getting very difficult to differentiate the product of human consciousness from the deliverances of a chatbot. Maybe if Descartes had seen the possibilities being realized by modern algorithmic systems and robotics built on principles of non-cognitivist cognitive science, his confidence in his absolute denial of machine consciousness would be shaken no less than ours.
I have not come here to defend the notion of machine consciousness. Indeed, I agree with Descartes: machines are not conscious. Actually, allow me to hedge a bit: my view is that it is quite unlikely that machines are conscious. I am going to make the case that we have very good reasons to conclude that machines are not conscious, so much so that the probability that any machine is conscious is exceedingly low. I do, however, think that these reasons are insufficient to give us absolute certainty that no machine is or can be conscious, or that machine consciousness is metaphysically impossible. You might say that I am a machine consciousness skeptic, but not an outright denier. I think that, given how our overall knowledge of the world stands now, it is very unlikely that any machine, including highly sophisticated computers, is conscious. Thus, if you forced me to wager my fee for giving this lecture on the question of machine consciousness, I would bet confidently on the “nay” side, but I would be betting, and I would not say that it is impossible that I am going home empty-handed. Though, if anyone wants to make that bet, I’m game! The problem is that I am not quite sure how exactly we would ever settle the bet, because I believe that a final settling of this question is very likely beyond our ken. My reasons against machine consciousness fall short of a complete demonstration because they are closely linked to a kind of skepticism about consciousness in general. That is not to say, however, that I am skeptical about the fact that there is consciousness. In fact, here too Descartes is correct: I’m not sure what it would even mean to consciously deny that there is consciousness. Though other people have saddled themselves with that seeming incoherence, it is not among my problems. Rather, I am skeptical about consciousness in the sense that I believe that we really do not understand it at all. We know we are conscious, but exactly why and how we are conscious remains completely opaque. We are in no position to say definitively which things are or are not conscious, because we really do not understand how consciousness arises in those cases where it clearly does. We might then be able to conclude that machines are very likely not conscious, though we cannot say that machines cannot be conscious, because we simply do not know what would make anything conscious in the first place, whether it is an organism or a machine. Thus, if you were to ask whether any machine is conscious, I would confidently say “No!”, but I would not claim to know exactly why not.
Before I make that case, there are some common confusions that I want to clear up, which will help us understand what is actually at stake in the question of machine intelligence. Tarrying over the distinctions between and the connections among these concepts will help us understand the sort of skepticism about consciousness that I am recommending and its consequences for the question of machine consciousness. Thus, we begin by considering the notions of intelligence, consciousness, and rationality.
Intelligence. By intelligence I have in mind little more than powers for problem solving, independent learning, and acquiring skills. It is noncontroversial to claim that we humans have intelligence in common with other types of organisms. There is very good empirical evidence that higher mammals (dolphins, chimps, dogs, etc.) are able to solve novel practical problems and incorporate this learning into their habits, and it is equally obvious that all sorts of animals develop abiding skills that allow them to deal with contingent details in their environments. In other words, higher animals (and maybe much of the animal kingdom) are not merely instinctual beings running blindly on pre-scripted instructions. Rather, many animals figure things out. The range of our problem solving, learning, and skill acquisition seems to far outstrip that of any other animal (as far as we know), but in principle the presence of these powers in the human toolkit does nothing to differentiate us from many other species in any radical way. Recent advances in artificial intelligence and robotics have done much to support a similar conclusion regarding machines, i.e., when it comes to problem solving, independent learning, and acquired skills, we humans might not be substantially different from some machines. There are now famous examples of various bots solving novel problems and teaching themselves things. Moreover, ever since Rodney Brooks declared that “we can use the world as its own best model,” many roboticists have moved away from trying to program their creations with pregiven maps of their environments and have instead emphasized variable skill acquisition, and we have all seen some of the uncanny successes to which this has led. In short, I do not believe there really is much of a question anymore regarding machine intelligence. There clearly are intelligent machines. That is not, however, an earth-shaking philosophical revelation, since it has always been obvious to humans that intelligence is something we have in common with all sorts of non-humans. Thus, I am not particularly interested in the question of machine intelligence.
Consciousness. Here we arrive at a far more interesting consideration. To say that something is “conscious” is to claim that it has experiences, where an experience is a qualitative state. Suppose Smitty presses a stick into a snowbank. There is no way that this event feels to the stick. Nothing over and above the quantitatively describable processes of temperature changes is transacted as the stick passes into the snowbank. Sticks do not have feelings of cold (or anything else for that matter). Suppose now that Smitty shoves his ungloved hand into a snowbank. Certainly, much the same sorts of physical processes involving temperature changes, along with other straightforwardly physical processes transacted throughout Smitty’s body including his nervous system, will occur. There will also, however, likely be a feeling of stinging cold that will be experienced by Smitty as unpleasant. Moreover, Smitty’s sensations may also be accompanied by various higher-order emotions (humiliation at having been so silly, pride in having the courage to take the pain, etc.), none of which would color the event for the stick. Whereas there is no way it is like to be a stick stuck into the snow, there is a way (or several ways) it is like to be someone whose bare hand has been shoved into the snow. There are felt qualities attached to such an event, and the awareness of this what-it-is-like-to-be is what philosophers mean by consciousness in its most basic form. We are apt to recognize qualitative consciousness in non-human animals, but whether we should do so in sophisticated machines is among the central questions we are considering in this discussion. As I have said above, I am quite doubtful that any machine, however sophisticated, is conscious for reasons that we will see below.
Rationality. Our experiences are occurrences that happen to us, as opposed to what we do for ourselves. In classical logic, experiences are classified as passions, as they are passive qualities. If Smitty experiences the taste of chocolate, we do not look for an explanation within Smitty in any sense that would hold him responsible. We would not ask Smitty “Why are you experiencing the taste of chocolate?” and expect him to provide a reason that justifies his having such an experience. The cause of Smitty’s tasting chocolate is neither good nor bad, justified nor unjustified. This qualitative experience is simply what normally happens to someone whose tastebuds are put in the right proximity to chocolate. Smitty is a passive party to the experience. If, however, we asked Smitty, while rushing to the emergency room, “Why did you eat that chocolate?” we no longer see him as an entirely passive party in the occurrence, as we are calling him to account for this instance of eating. If Smitty replied to our question with an entirely physical-causal story, we would worry that he either missed the point of the question or is somehow evading the issue. Our question does not demand an etiology of the incident, but a justifying reason for his eating the chocolate, e.g., “I was a bit hungry, and I never had any previous indication that I have a chocolate allergy. Thus, I ate the chocolate that Jones offered me.” We would likely accept such an explanation from Smitty as reasonable. We are asking Smitty to make sense of what he has done, and his reasons satisfy that request. Similarly, if we ask Smitty “Why do you say that there is a multiverse?” we are asking him to make sense of what he has said or what he believes in terms of reasons based on a set of evidential data. Notice that we do not seriously pose these sorts of questions to any beings other than humans (leaving aside possibly controversial supernatural candidates). I do not ask my dog why he bit my now disgruntled houseguest, nor do I ask the stick for its reasons for any of its attributes. Rationality, as far as we know, is the exclusive domain of human beings, and I will argue below that the development of intelligent machines does nothing to move us toward extending the boundaries of this space of reasons.
Querying Smitty for reasons presumes that Smitty understands what he has done or believes and why he has so acted or believed. Moreover, someone’s understanding what he has done and why he has done so presumes that he can provide the appropriate reasons-explanations for himself. We would expect Smitty to be able to tell us why he ate the chocolate that initiated his trip to the emergency room. An understanding that Smitty can draw on in a self-justifying way entails that Smitty’s actions and beliefs, along with his reasons for acting and believing, are objects of his consciousness. That is, Smitty’s rationality presupposes that there is a way-it-is-like-to-be Smitty, and Smitty is aware of this and capable of describing and evaluating some of the contents of this state. Rationality then has consciousness as a necessary condition, i.e., a subject is rational only if it is conscious. We might say that rationality is a species of consciousness, just as being a hippopotamus is a species of being a mammal. If the animal in the pen is probably not a mammal, then it is probably not a hippopotamus. Likewise, if something is probably not conscious, then it is probably not rational. Since I am about to make the case that computers, and machines in general, are probably not conscious, it will likewise follow that they are probably not rational, which is the issue that should most concern us.
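For the scrupulous, the probabilistic step in that inference is safe. If rationality entails consciousness, then (treating these as events, which is my gloss rather than anything standard) the probability that a given thing is rational can never exceed the probability that it is conscious:

$$
\forall x\,\big[\mathrm{Rational}(x) \rightarrow \mathrm{Conscious}(x)\big]
\quad\Longrightarrow\quad
\Pr\big(\mathrm{Rational}(x)\big) \;\le\; \Pr\big(\mathrm{Conscious}(x)\big)
$$

So if the probability that a machine is conscious turns out to be very low, the probability that it is rational is at least as low.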
The case that I am going to make in favor of skepticism about machine consciousness is admittedly quite complicated. We can simplify this task considerably by structuring the account according to the following simple deductively valid argument:
Premise 1: If S is a system designed entirely for functionality and P is a non-functional property that is not intrinsically related to the parts of S, then P is prima facie probably not a property of S.
Premise 2: Computers are systems designed entirely for functionality, and consciousness is a non-functional property that is not intrinsically related to the parts of any computer.
Therefore: Consciousness is prima facie probably not a property of any computer.
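For those who like to see an argument’s skeleton laid bare, here is one way to regiment it in first-order dress; the predicate letters are just my shorthand ($\mathrm{ProbNot}(p,s)$ abbreviates “P is prima facie probably not a property of S”), not a formalism the argument depends on:

$$
\begin{aligned}
\text{P1:}\;& \forall s\,\forall p\,\Big[\big(\mathrm{FuncSys}(s)\wedge\neg\mathrm{FuncProp}(p,s)\wedge\neg\mathrm{Intr}(p,s)\big)\rightarrow \mathrm{ProbNot}(p,s)\Big]\\
\text{P2:}\;& \forall c\,\Big[\mathrm{Computer}(c)\rightarrow\big(\mathrm{FuncSys}(c)\wedge\neg\mathrm{FuncProp}(\mathit{consc},c)\wedge\neg\mathrm{Intr}(\mathit{consc},c)\big)\Big]\\
\therefore\;& \forall c\,\Big[\mathrm{Computer}(c)\rightarrow \mathrm{ProbNot}(\mathit{consc},c)\Big]
\end{aligned}
$$

The conclusion follows by universal instantiation and modus ponens, so all of the philosophical work lies in the two premises.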
There are a few concepts in that argument that I am sure we will need to clarify before we can consider whether its premises are true. Let’s start with what I mean by a functionally designed system, which is any composition of parts structured to bring about some previously specified range of effects. Ideally, in a well-designed functional system, each of the parts is explained by its role in causing certain outcomes for which the system has been assembled. Thus, in such a system the parts have been selected because they have certain properties that contribute to bringing about the effect for which the system is designed. If E is the defining effect of a system designed exclusively for functionality, then every part of that system is explained in virtue of its properties contributing to the production of E. For example, take the stapler that is sitting on my desk as I type this lecture. The stapler has various tin and plastic components, springs and glides, blocking surfaces, etc., and the presence of each of these parts is explained by the fact that they have properties contributing to stapling effects. Given what we know about the materials available for stapler-construction, we would expect a certain mixture of malleability and stability in the tin and plastic components, hardness in the blocking plate, and an overall fit for the human hand and common stacks of paper. Thus, if someone asked us to speculate about what a stapler might be like, even supposing we had never seen such a contraption, we would probably get in the ballpark of the properties I have just mentioned, based on our understanding of the function of a stapler and the available construction materials.
That is not to say that the stapler has only functional properties. Functionally designed systems might have all sorts of unwanted side-effects or otherwise accidental attributes that have nothing directly to do with their design. For example, the shine of the metal composing the sliding mechanism of the stapler makes no contribution to its stapling effects, nor does the fact that the stapler could pinch my finger when I don’t close the chamber with due care. Notice, however, that in these cases the presence of non-functional attributes is not mysterious or even altogether unexpected. We know that the sliding mechanism must be made from a malleable and yet stable material, and aluminum is an obvious candidate. Aluminum is naturally disposed toward shininess, so it’s no surprise to find that accidental attribute in a stapler. Likewise, given how the closing mechanism is likely to work, we are not surprised, and would even expect, there to be a risk of minor injury to the inattentive user of a stapler. This kind of explanatory relation between accidental attributes and components of a system is what I mean by an intrinsic relation.
Certainly, staplers can have properties that are both non-functional and not intrinsically related to their parts. For example, I might have painted my stapler bright yellow. The yellow hue of my stapler has nothing at all to do, either functionally or by way of intrinsically related accident, with the functional components of the stapler. That stapler just so happens to be yellow, and its being yellow is neither a consequence of its functional attributes nor a necessary concomitant of its other attributes. Note well, then, that it is, prima facie, unlikely that a stapler would be yellow. That is, if all you knew about a certain state of affairs is that there is a stapler on my desk, you would not bet that the stapler is also yellow. There is nothing in the functional properties of staplers, nor in their necessary accidents, that would lead us to expect a stapler to be yellow, and there are a lot of colors an office implement might be. Thus, all things being equal, it is unlikely that the stapler is yellow. No doubt, the stapler could turn out to be yellow, but we would never come to such a conclusion without having something like a direct verification of its color. In lieu of such a verification, when asked whether the stapler is yellow, we should say “Probably not” or “I wouldn’t bet on it.” In other words, since yellow is neither a functional property of staplers nor intrinsically related to any of the properties of stapler parts, it is prima facie unlikely that staplers are yellow. Notice that we have arrived at our first premise: If S is a system designed entirely for functionality and P is a non-functional property that is not intrinsically related to the parts of S, then P is probably not a property of S.
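The shape of that judgment is just the familiar principle of indifference. If nothing about a stapler’s function or materials favors one color over another, and there are n colors an office implement might be, then each particular color gets only a sliver of probability (the uniform prior is, of course, an idealization of mine):

$$
\Pr(\text{the stapler is yellow}) \approx \frac{1}{n} \ll 1, \qquad n = \text{the number of candidate colors.}
$$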
Machines are functionally designed systems. In fact, I would define a machine as an automatically operating functional system. My cherished Nespresso machine is an aggregate of parts that have been selected for their properties that contribute to the production of coffee beverages. The Nespresso machine does not only have coffee-producing properties, but also non-coffee-producing properties that are intrinsically related to its designed parts, e.g., weight, tendencies to overheat or become clogged, etc. Both the former and the latter are attributes we would expect to find in a Nespresso machine simply based on its design and the intrinsic properties of the materials available for its manufacture. Of course, the Nespresso machine has some purely accidental attributes, e.g., its black and grey colors, its position on my kitchen counter, the price it sells for, etc., which are neither functional nor intrinsically related to the properties of its designed parts. Even though the Nespresso machine must have some set of accidental properties or other, it is prima facie unlikely that it has the particular accidental properties it does. Computers are also automatically operating functional systems, and they are no exception to the claims I have made regarding designed systems in general. Thus, it is prima facie unlikely that a computer has any particular set of properties that are neither functional nor intrinsically related to the properties of its functional parts. Of course, a computer (just like any other machine) has such accidental attributes, but given only its design, function, and the intrinsic attributes of its functional components, it is unlikely that it has any one of these possible attributes. For example, if all you knew about the computer I am typing this lecture on were the principles of design for its software, hardware, and user interface, and the properties of its composing parts, you would not predict that the keys on the keyboard were black, nor would you expect it to be encased in grey plastic.
Our second premise claims that computers are systems designed entirely for functionality and consciousness is a non-functional property that is not intrinsically related to the parts of any computer. The former clause should be non-controversial, i.e., computers are functionally designed systems par excellence. We are then left to ask whether consciousness would be a non-functional property that is not intrinsically related to the functional parts of a computer. So, what are the functional properties of computers? What are computers designed for? A computer is designed to provide a discrete output (which might take the form of a display of information, a movement, a change of appearance, etc.) for any well-formed input (which might take the form of an entry of information, the detection of a movement, etc.). For example, I can input into the calculator app on my phone “9x7” and it will output “63,” or I can tell ChatGPT to make an image of Hegel confronting a guy in a furry costume and it will output such an image. (I really did this.) Computers, as we well know, are becoming strikingly sophisticated in their capacity to process information in generative or even what some people claim to be “creative” ways, but in the end what these machines are designed to do is to provide outputs for inputs. Qualitative consciousness (and thus by implication rationality) is unnecessary for linking inputs to outputs. Nobody would have thought to posit consciousness as necessary to the function of less sophisticated systems that work by algorithms, e.g., a 1980s Atari video game platform. Even though more recent algorithmic systems are much more sophisticated, there doesn’t seem to be any need to design them for consciousness in order to structure them for their function. When engineers design a computer, they are not trying to build something with consciousness, but a machine that algorithmically produces outputs in response to inputs. All that is necessary for the input-output relations characteristic of computers is that their composing parts have sufficient properties to instantiate multilayer algorithms that produce appropriate outputs in response to well-formed inputs, and consciousness is unnecessary for such structures. The run-of-the-mill physical and quantifiable properties of the metal, plastic, and silicon bits composing the computer are sufficient to account for these functions. There need be no what-it-is-like-to-be an algorithmic relation in order for inputs and outputs to be linked in appropriate ways. Whatever else we might say about computer consciousness, no computer is conscious in virtue of its designed function, because consciousness is no part of that function.
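If a toy example helps, the whole design brief of a computer, at any scale of sophistication, can be caricatured in a few lines of code. The names and the lookup table here are mine and deliberately trivial; scaling the implementation up to an LLM changes everything about how the mapping is computed and nothing about what kind of thing is specified:

```python
# A caricature of what a computer is designed to do: map well-formed
# inputs to outputs. The specification is exhausted by this mapping;
# consciousness appears nowhere in it, at any level of sophistication.

def respond(well_formed_input: str) -> str:
    """A calculator, a chatbot, and an LLM differ enormously in how a
    function like this is implemented, but not in what it is:
    outputs for inputs."""
    canned_outputs = {
        "9x7": "63",
        "make an image of Hegel": "<an image>",
    }
    return canned_outputs.get(well_formed_input, "<some further output>")

print(respond("9x7"))  # -> 63
```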
Thus, if a computer is conscious, then consciousness is a non-functional property that is either intrinsically related to the properties of its functional parts or entirely accidental. It is exceedingly unlikely that consciousness is intrinsically related to the functional parts of a computer. Consider again the non-functional but intrinsically related properties of the stapler, its shininess and disposition to pinch fingers in its chamber. Given what we know about the properties of aluminum, both commonsensically and scientifically, we would expect an object composed of this metal to be given to shininess. The properties of the metals involved in making a stapler explain this disposition, and in fact we would probably say that anything with aluminum components must be, to some degree, shiny. Moreover, we would say something similar regarding the stapler’s disposition to pinch fingers, i.e., given the properties of metals and plastics, the structural relations imposed on these materials by stapler construction, and the physiological facts about human hands, it is highly likely and maybe even physically necessary that there would be a disposition to pinch fingers in cases of careless use. No such explanation in terms of the properties of the composing parts would be available in the case of a conscious computer. The silicon, metal, and plastic composing a computer seemingly have nothing to do with properties like what-it-is-like to feel cold or to taste chocolate. The molecular and structural attributes of silicon and the various metals in a computer make sense of and even necessitate certain flows of electricity, which instantiate various algorithmic programs and informational storage and retrieval capacities. We can readily see how those basic properties bring about such effects when these components are properly arranged, and it appears that the emergence of these computing capacities must occur under such conditions. Consciousness, however, seems to have nothing to do with those attributes. The conductivity of copper and the semiconducting powers of silicon get on quite well without consciousness, and there is no reason to think their coupling into a functional arrangement would cause consciousness to arise. Certainly, it would not surprise us to find out that a machine composed of silicon, metals, and plastics was not conscious, whatever its capacities might be, because the properties of those materials do nothing to suggest consciousness. There is just nothing about ordinary physical materials that would lead us to expect something composed of them to be conscious.
Of course, systems or aggregates of finer-grained things can have properties their parts lack. Many philosophers who consider these issues are rather fond of the example of the liquidity of water. Neither oxygen nor hydrogen has the novel liquid properties of H2O. Nevertheless, when oxygen and hydrogen are combined in the right proportion, a universal solvent that freezes at 0°C is rendered. Maybe consciousness bears the same relation to silicon and copper that water’s liquidity bears to oxygen and hydrogen, such that even though the components of a computer utterly lack qualitative awareness, a properly organized system of such entities has this attribute. There is, however, some reason to deny this analogy. In the case of water, we can readily see why the liquidity properties arise from the combination of hydrogen and oxygen. On the one hand, though neither hydrogen nor oxygen is a universal solvent, it makes perfectly good sense why their atomic properties would cause such a property to emerge at the molecular level. The intrinsic properties of oxygen and hydrogen, along with the structural arrangement of the water molecule, offer a satisfactory explanation as to why water is a universal solvent. On the other hand, we don’t have anything like such an explanation in the supposed case of the emergence of consciousness from functionally arranged copper and silicon. We do not, and maybe cannot, see why these materials would cause what-it-is-like properties to emerge. Novel things happen at higher levels of systems, but even in such cases of emergence we expect that something like an explanation can be given that takes us from the attributes of the parts to the attributes of the whole, and we would not have such a story to tell in the supposed case of a conscious computer.
Thus, we have our second premise, that computers are systems designed entirely for functionality and consciousness is a non-functional property that is not intrinsically related to the parts of any computer, which further allows us to draw our conclusion: consciousness is probably not a property of any computer. Not so fast. I should qualify that conclusion in an important way. Our argument thus far has only shown that it is prima facie unlikely that consciousness is a property of any computer. Given what we know about the function and intrinsic properties of staplers, it is on the face of it unlikely that any one of them ends up being pink or belonging to Jim Madden. Nevertheless, that is how the world just so happens to be. Whatever we might have expected based on the functions and intrinsic properties of staplers, upon inspection we might find a pink stapler or one that belongs to Jim Madden. There are truly accidental properties that something might have, which we would never expect based on its functions and intrinsic attributes alone. So, maybe upon inspection it does turn out that computers are conscious.
This objection concedes that nobody would expect computers to have consciousness while suggesting that further investigation shows that they are conscious despite those expectations. At this point, the Turing Test likely comes into play. We know that we and some other sorts of animals are conscious, and we know of certain outward behaviors that are associated with our consciousness, e.g., saying “ouch” when poked, smiling after eating chocolate, giving justifying reasons for our actions when asked to do so, etc. Thus, if computers can instantiate those same outward behaviors under similar conditions in ways that are indistinguishable from our own, one might argue that fair play demands that we grant that they too are conscious, whatever our prior expectations might have been. Certainly, a Turing Test is not infallible, because consciousness can be simulated, e.g., someone can pretend to be in pain or one can be tricked into thinking a video image is a conscious person. Nevertheless, at some point we should admit that the sophistication of speech and action exhibited by our computers may reach a level that makes mere simulation of consciousness unlikely. In other words, we have a way of inspecting for consciousness, and the current state of technological development gives us good reason to conclude, given the sophistication of their outward linguistic behaviors, that some computers are probably conscious.
I am not impressed by the Turing Test, because it assumes a kind of connection between the functional properties of a system and consciousness that we have very good reason to doubt, even in our own case. Somewhat paradoxically, my skepticism about the Turing Test has a lot to do with the success of neuroscientific explanations for the overt behaviors that are associated with our consciousness. For example, take the case of Smitty’s grinning and uttering “mmmm!” after eating a piece of chocolate. I am confident that neuroscience has or will soon be able to identify a complete set of physiological causes for Smitty’s facial expression and utterance. This causal account will involve, at the crucial point, certain causal relations between neurophysiological structures and processes in the sensory-motor system which they link. Ultimately these occurrences in the nervous system will be a matter of sodium and potassium exchanging positions along the lipid membranes of neurons in Smitty’s brain. Moreover, it seems quite plausible, indeed I would say it is highly likely, that once we have all the data for this physical explanation, we would have a complete explanation as to why Smitty grinned and said “mmmm!” There is nothing missing from the explanation of these physical events; the neurophysiological states are sufficient to explain Smitty’s outward behavior. Notice that whether or not Smitty is conscious, i.e., whether there is a way-that-it-is-like for him to taste chocolate, plays absolutely no role in whether we can give a sufficient causal account of his facial expression and utterance. All we need is the physiological account, and that tale can be spun to its completion without any mention of consciousness. Likewise, if we are trying to explain why Smitty replied to our question “Why do you believe P?” with a certain set of justifying reasons, I am willing to bet a complete physiological story could be told that is utterly neutral as to whether Smitty is actually conscious, i.e., whether there is a way-it-is-like-to-be someone who holds such a belief for such reasons. Consciousness seems to be unnecessary to explain any of the behaviors and physiological processes with which it is associated. All of that is to say that our consciousness is a non-functional attribute of our physiological systems. Whatever consciousness is doing, it is not there because it is necessary for anything that we do. Name any supposed practical function of consciousness and then ask yourself whether the underlying physiological process would be sufficient to bring about the desired effects. Since these effects are always expressed through our bodily comportment in one way or another, and physiological causes seem to be sufficient for all such changes, consciousness does not seem to do any of the work at all in that sense. Thus, the reason we have consciousness has nothing to do with the fact that we exhibit certain behaviors and physiological processes.
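One compact way to put the point, borrowing the statistician’s idiom (the gloss is mine, not a standard formulation in the literature): physiology screens off consciousness from behavior. Once the physiological facts are fixed, adding or subtracting the fact of consciousness changes nothing in what we should expect Smitty to do:

$$
\Pr\big(\text{behavior} \mid \text{physiology},\ \text{consciousness}\big) \;=\; \Pr\big(\text{behavior} \mid \text{physiology}\big)
$$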
The Turing Test is then not a test for consciousness. At best a Turing Test could show that computers exhibit certain behaviors that are analogous to (even indiscernible from) behaviors associated with our consciousness. That is all well and good, but we have very good reason to conclude that the association of our consciousness with those behaviors on our part is utterly accidental. Consciousness is clearly not a necessary condition of our exhibition of those behaviors, and therefore there is no reason to think that consciousness is a necessary condition for the same activities on the part of a computer. Thus, something’s exhibiting the behaviors associated with consciousness in our case is no reason to think such a thing is conscious, because there is no intrinsic connection between those behaviors and consciousness in our case. It may well be just an odd quirk about us that we are conscious of the taste of chocolate or of what it is like to perform an action for certain reasons. One might suggest that consciousness is just what you get when sodium and potassium get moving across lipid membranes as they do in neurophysiological structures. That suggestion, however, seems no more likely than the suggestion that consciousness is just what you get when copper and silicon are arranged in a computer. That is, the properties of sodium, potassium, and lipids simply do not shed any light on why consciousness would arise. That is all to say that it is obdurately mysterious why we are conscious, or at least there is no explanation of consciousness available based on either the functional or intrinsic properties of physiological systems. Thus, the fact that a computer can pass a test showing that it exhibits functional powers on par with humans does not give us any reason to believe that a computer is conscious.
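To make vivid just how thin the evidence gathered by such a test is, here is a toy rendition of the imitation game; the canned replies and the judge’s procedure are my own illustrative stand-ins, not a faithful reconstruction of Turing’s protocol:

```python
# A toy imitation game. The structural point: everything the judge
# ever receives is a string, so the test can at most certify behavior;
# whether anyone felt anything never enters the transcript.
import random

def machine_reply(question: str) -> str:
    return "Ouch! Even thinking about that stings."   # behavior, not feeling

def human_reply(question: str) -> str:
    return "Ouch! Even thinking about that stings."   # indistinguishable

def imitation_game(questions):
    contestants = [machine_reply, human_reply]
    random.shuffle(contestants)                        # hide the labels
    labeled = dict(zip(["A", "B"], contestants))
    # The judge's total evidence: question-answer pairs of plain strings.
    return [(q, {label: f(q) for label, f in labeled.items()})
            for q in questions]

transcript = imitation_game(["What is it like to touch snow barehanded?"])
print(transcript)  # nothing here settles who, if anyone, felt anything
```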
It is then quite improbable that computers are conscious; on the face of it we would not expect a computer to be conscious, and we have no means of inspecting a computer that would give us some good reason to believe that it just so happens to be conscious. If you ask me why computers are not conscious, I do not believe I can say why. We have no real clue as to why we are conscious, in the sense that we do not have any real sense of what within us provides the necessary conditions for consciousness – it isn’t because consciousness is necessary for our behaviors, nor is it because consciousness is intrinsically related to our physical properties. It’s a mystery. Since I don’t think we know what the necessary conditions for consciousness actually are, I cannot tell you what exactly is missing in a computer such that it is not conscious. I can’t say, “Well, if only the computer had X, then it would be conscious.” We simply don’t know the X-Factor for consciousness. All things being equal, it is unlikely that computers are conscious, and we have no reason to doubt that prima facie improbability. Indeed, it is prima facie improbable that we are conscious. Yet here we are, conscious beings in the act of giving and taking reasons for beliefs. We are stuck with a mystery about ourselves, and the fact that we have these attributes that seem to serve no real purpose for our survival is quite suggestive. This opens a field for metaphysical speculation as to why this known unknown X-Factor has shown up in the first place. Be that as it may, the fact that we are saddled with this mystery about ourselves, a certain skepticism about consciousness in general, is why I am very confident that computers are not conscious.
If you would like to help the work I am doing in this newsletter, please consider supporting The Great Dangerous Books Podcast.
You can find my books here:
Mind, Matter, and Nature: A Thomistic Proposal for the Philosophy of Mind
Thinking about Thinking: Mind and Meaning in the Era of Techno-Nihilism
Unidentified Flying Hyperobject: UFOs, Philosophy, and the End of the World
Coming soon: Subjectivity and Its Discontent(s)