Biting the Bullet of Consciousness:
Easy Problems Made Hard
by John Gregg
Copyright © John Gregg, all rights reserved

When you have eliminated the impossible, whatever remains, however improbable, must be the truth.
—Sherlock Holmes

For Audrey and Gina

Over the years I have read books written by, attended talks presented by, and argued with many smart and insightful people—too many to name here, although they know who they are. I would, however, like to thank some freelancers I hired, each of whom went above and beyond to bring this book into existence. My first-round editors, Tyler Loveless and especially Joseph Gurrola, gave the book a close reading and often pointed comments. In addition to their specific notes, their voices in my head made the book a lot better. My next editor, Karen Francis, caught all kinds of little goofs at a stage where I thought I had things pretty well tied up (silly me). Finally, my cover designer, Jason Anscomb, is living proof of the wisdom of the advice to authors that you really, really should not try to design your own cover.

Structural Outline
Quick Hits: the short version in bite-sized morsels


  1. The Hard Problem of Consciousness
  2. Goals, Non-Goals, and Ground Rules
  3. Physicalism: Are We Really Living in a Material World?
  4. Epiphenomenalism: Even If Consciousness Is Real, What Could It Possibly Do?
  5. Ned Block’s Turing Test Beater
  6. Can’t We Just Say That Consciousness Depends on the Higher-Level Organization of the System?
  7. Reductionism and Emergence: What Kinds of Things Are There, Really?
  8. The All-At-Onceness of Conscious Experience
  9. Time Consciousness and the Specious Present
  10. Free Will
  11. How Panpsychism Might Work
  12. Pandemonium!
  13. Cognitive Qualia
  14. Doesn’t It All Just Come Down To Information?
  15. Reference: Picking Out
  16. Reference: Turning Out
  17. Reference Internalized
  18. Conclusion
About the Author
email: john <at>
Follow @JohnRGregg3


The Hard Problem of Consciousness

The very first bullet we have to bite is the Hard Problem of consciousness. In his book The Conscious Mind (1996), the Australian philosopher David Chalmers argued that there is a big, important distinction between the “easy problems” of cognition (the ability to reason, remember, evaluate, report on internal states, and so on) and the “Hard Problem” of subjective consciousness. Chalmers was careful to point out that “easy” and “hard” are relative terms, and that even the easy problems might take us a few centuries to work out. The easy problems are, however, tractable using the methods of modern science. Once our neuronal probes are subtle enough, and we have a higher-level framework stated in terms of information processing or some other abstractions to help us make sense of what the neuronal probes tell us, we should be able to answer any question we might have about how the brain works. Any question but one.

The Hard Problem is the fact that you will never be able to tell me a story about information processing, computation, biochemistry, neurons, sodium channels, or anything of the kind, that will come close to explaining what red actually looks like, or why red looks red to me, or why middle C on a piano sounds just the way it does. These basic ineffable sensations are called qualia (singular quale) in the literature of philosophy of mind. Subjective consciousness itself is sometimes characterized as, at its most basic, what it is like to be you or to have some sensation or another. The Hard Problem of subjective consciousness (or as philosophers like to call it, phenomenal consciousness) is hard because it just does not seem amenable to the sort of analysis that modern science knows how to do.

Descartes thought that there were two kinds of stuff in the universe: physical stuff and mental stuff. For this reason, he has forever been called a dualist. In modern times, people who wonder seriously about qualia are also called dualists, even though many of them explicitly reject the idea of there being two fundamental kinds of stuff. This misleading labeling is unfortunate. Philosophy is confusing enough without calling things by incorrect names. Moreover, in recent centuries pureblooded dualists have been spotted in the wild very rarely, and the term is sometimes used somewhat pejoratively: more people accuse others of harboring dualist sympathies than embrace the term for themselves. For these reasons, I will use the term qualophile to describe Chalmers and his ilk.

The Objectivity of the Subjective

The entire universe and everything in it is made up of atoms and molecules and photons and things like that, all interacting according to the laws of physics. The claim of the Hard Problem is that (a) the redness of red as it appears to me is an absolute, objective fact of the universe, and (b) no account of atoms and molecules interacting, no matter the complexity of their interactions, will predict or explain the redness of red as it appears to me.

What if someone modeled our minds artificially, and created a robot that to all appearances was intelligent, and even conscious? It might claim to see red, and it might do so in very convincing terms. It might represent red in some sophisticated way to an internal self-model in a way that mimicked some neural or informational events in our own brains as we see red. It might compose a poem about the beauty of a sunset that would move you to tears. Nevertheless, we have no principled reason to believe that it really is experiencing red the way we do.

If this is true of a robot or artificial intelligence (AI) that models our own mental processes at some appropriate level of abstraction, it is also true of our own brains themselves. Our brains, as understood by neuroscience, are, after all, biochemical computers of some kind. At the low levels, they can be modeled physically, and perform the same kind of information processing that a computer might. The distinction between squishy brains on the one hand and silicon chips on the other is an implementation detail as far as this line of thinking goes.

While we might have a perfect and precise description of the causal chain between photons of a particular frequency striking our retinas in a certain pattern on one end, and our uttering “What a beautiful sunset!” on the other, this description (however finely detailed) will not explain what red actually looks like to you or me. Just as silicon, flipping bits, will never see red, we have no principled reason to derive the fact of our seeing red from the bit flipping in our own neurons. If an AI can’t do it, we have no good reason to think a brain could either. But of course our brains do see the red sunset. This leads the qualophile to bite the bullet of claiming that there is something big missing from the way we talk about and analyze brains, and pretty much everything else in the physical world as well.

People have come up with clever thought experiments to help skeptics arrive at the conclusion that the Hard Problem exists and that we should take it seriously. One of the most famous was introduced by Frank Jackson in his essay “Epiphenomenal Qualia” (1982) and defended in “What Mary Didn’t Know” (1986).

Mary in Her Black and White Room

Imagine Mary, a supergenius particle physicist/neuroscientist, in a future world in which our understanding of physics and neurobiology is complete and perfect. She understands and has mapped out every single neural pathway, electro-chemical reaction, and quantum wiggle in her own brain. Mary, however, has been raised in an entirely black and white environment. She has never seen anything red, for instance. She knows exactly what the physics of red light is, and she can predict exactly how she would react behaviorally if she did see something red, but she has never actually experienced it directly. If you have ever debugged a C program with a debugger, single-stepping through your code line by line, you may have a sense of the way in which Mary understands her own predicted reaction to seeing a red apple. She can “walk through the code” perfectly, but she has never experienced red.

Now imagine that Mary gets let out of her black and white room, and sees a red apple. For all her functional, scientific knowledge, perfect and complete as it was (in terms of making 100% accurate predictions of her own behavior, both macro and micro), something entirely new happens in her head when she sees that apple. As the title of the article (“What Mary Didn’t Know”) suggests, this new thing that happens in Mary’s head is usually framed in terms of knowledge, and some people counter that she does not actually gain any new knowledge upon seeing red for the first time, but (merely?) acquires a new ability. However you characterize it, something new happens to Mary, something that her schematic descriptions of her own brain never allowed her to anticipate.

The point here is that if you think of the brain as a big information processor, even being as generous as your wildest dreams will let you in terms of its sheer processing capacity, future physics, etc., you still leave something out. It counts pixel values on its retinal grid, it accesses memory locations, it does data smoothing and runs comparisons, it takes different execution paths based on its evaluations and invokes modules. Perhaps when thought of in a certain way, from the point of view of a certain level of abstraction (projected onto the system by the observer), the information processor may be seen as seeing red, but there is no reason to believe that it really is seeing red, objectively, the way I (and presumably you) do.

Nagel’s Bat

Another illustrative example comes from Thomas Nagel’s (1974) essay, “What Is It Like to Be a Bat?” Bats employ a sonar-like echolocation trick to find bugs in the air. The claim is that there is nothing you could possibly ever know about how a bat’s brain, ears, and vocal system work that would convey what it is like to sense a moth 20 feet away: kind of like hearing, but not really; kind of like touching with a long arm, but not really.

Similarly, I have read that bees see colors that we cannot see. What do those colors look like? We could know everything about bee brains and bee eyes, how the bees react to those colors and why, how the ability to see those extra colors evolved, etc., and we would still never know personally what those colors looked like. If all mental activity is information processing, how is it that we could have all the explicit, articulable information about bee perception but still not know something about it? Couldn’t we, with our far superior brains, crunch through the bee color perception algorithm? Couldn’t we “walk through the code”? Most people would agree that such an exercise would not deliver a sense of what bee colors actually look like to the bee.


This point is illustrated by another thought experiment: the notion of a zombie. A zombie, in this context, is basically a person who has no phenomenal consciousness—that is, who experiences no qualia, but whose brain and cognitive machinery otherwise work just fine. A zombie has the same neural connections that you do, acts and talks like a normal person, and for the same physical reasons, but is “blank inside.” A zombie brain is a human brain, but considered only as an information processor. To all outward appearances a zombie is a regular person. A zombie would claim to see red, and seem to fall in love, and if so inclined, would write that poem about the sunset, and would in fact do all the things with its brain that we do with ours, producing all the same reactions. Nevertheless, there is no “what it’s like” to be a zombie.

The zombie thought experiment is controversial. By hypothesis, my zombie twin and I are functionally identical, which means physically identical, down to the last neuron. On the face of it, it’s a pretty bold claim that one of us is fully conscious and the other “blank inside”. Naturally, there are some people who think that the whole notion of zombies is incoherent. If something talks, thinks (if by “thinking” we mean only the sort of processing that could be modeled on a computer, the pure information processing manifested in us by our neural firings), and acts like a conscious person, then that entity is conscious, period.

If you define “belief” in strictly information-processing terms, then the zombie believes that it is conscious. To speculate about the conceivability of something that talks, thinks (in the limited way mentioned above) and acts like a person, but is not conscious, is like speculating on the conceivability of married bachelors. There is nothing extra about consciousness besides the functional mechanisms of information processing, and any claims to the contrary are just spooky mumbo-jumbo, the products of sloppy thinking. To critics, it is as if someone hypothesized an atom-for-atom copy of a water fountain, one that behaved exactly like the original water fountain, but just wasn’t, you know, a water fountain.

Zombies make sense to me, though. Given our current understanding of brains, there is nothing inconsistent about the idea of a brain that works exactly as mine does now, producing the same output responses to the same input stimuli, and employing the same neural mechanisms, but which skips the phenomenal consciousness part. As nutty as it sounds that something physically identical to me could nevertheless be so different in its mental life, nothing we know disallows it. No discovery by neuroscientists, and no new cognitive model, will connect the dots for us. We do not have any principled, theoretical way (other than brute correlation at a higher level than we generally like our brute correlations) to get from a complete description of how the parts of the brain function to the fact of phenomenal consciousness. A failure of entailment of this sort should concern, if not embarrass, us. We have work to do. With regard to the Hard Problem, this failure of entailment from the facts about brain processing to the facts about consciousness has been called the explanatory gap. As Ned Block (2002) put it, “Why couldn’t there be brains functionally or physiologically just like ours…whose owners’ experience was different from ours or who had no experience at all? (Note that I don’t say that there could be such brains. I just want to know why not.) No one has a clue how to answer these questions” [emphasis original].

While it is often hard to draw a distinct line between qualia and cognitive, functional information processing (a fact which is, I believe, underexplored—but more on this later), there is something going on when I see red that is unexplainable by any theory of mentation that allows for minds being implemented by computers. It stands as an extra fact about the universe that demands explanation. To define consciousness as functional information processing is to define away the central mystery of consciousness.

Without going into too much detail now, I want to say that it is easy to assume that just because my zombie twin has the same physiological makeup I do, and the same internal causal dynamics, and processes information the way I do, it therefore thinks, knows, believes, etc., the way I do as well. We should be careful here, since this assumption begs an important question. It assumes that what is important to us about things like thinking, believing, knowing, etc., can be fully explained in zombie terms: that is, in terms of information processing and causal dynamics, without regard to qualia at all. I would not take that bet.

When the power comes back on in my house after a power failure, my microwave believes that it is noon. It tells me so. But it is a simple causal mechanism. It doesn’t really “believe” things, it just does, like a rock you drop, hitting the ground with a thud. Adding a lot of fussy data structures and communication paths to the microwave clock might complicate matters to the point where it seems like we may enter the realm of “belief,” but however daunting the details may be, it’s still a clackety-clack causal mechanism.

Not Everybody Likes the Hard Problem

I think it is fair to say that qualophilia is still a minority position in philosophy, and certainly in the hard sciences that touch upon these questions at all. The mainstream orthodoxy, such as it is in these circles, is…the other folks. There are a lot of people who think that all this qualia talk is nonsense, or at least misguided: even if whatever it is we call “qualia” is real, it can be explained with “normal” physics, information processing, etc. and has no broader implications for our picture of what the world is made of or how it is put together.

What should we call these people? Since I’ve called their opponents qualophiles, perhaps they should be qualophobes? I’m going to bow to convention in this case, though, and just call them physicalists (although I will use the term materialist somewhat interchangeably). Even this is a little misleading, or at least vague. It does justice to the idea that “it’s all just physics”, but it leaves open what we mean by that. Lots of qualophiles might agree that the universe is just made of physics—it’s just that there might be more to physics than you think.

I do not usually put a lot of stock in sociology of science, nor do I like to emphasize the cultural aspects of scientific endeavor, but what science is, its proper aims and methods, is a lot less monolithic than most people believe. We must be open-minded as we consider the kinds of methods we might have to use to explore whatever facts about the world nature sees fit to present us with. Each scientific revolution (or, as the cool kids say, “paradigm shift”) leaves us perfectly equipped to ask those questions that have just been answered.

The 20th century was especially humbling in this regard. First special relativity, then general relativity, and, most dramatically, quantum mechanics forced all but the most blinkered of investigators to ask the big questions of science itself as a practice: what counts as an explanation? Is there a difference between getting a right answer and The Truth? How do we know when we are done?

The fact that we don’t know how to properly frame certain questions now is not an argument that the questions themselves are wrong—quite the contrary. It is the questions we aren’t sure even how to ask (in a precise, falsifiable, quantitative way) that should interest us the most. We should beware of the hubris of thinking that even if our particular scientific theories are incomplete, our ways of framing them, and our criteria for what things are worthy of scientific consideration, and the form we like our answers to take, are complete and perfect. We should not fall into the trap of thinking that if someone can’t quite pose their question in terms that our framework is designed to accommodate, this means their question is automatically silly.

As we look around our universe and try to make sense of it (in the loosest possible sense), we should be open minded about the kinds of things we find, and the kinds of explanations of those things that we accept. We should be a bit humble about what we have figured out so far, including our ways of figuring out themselves. This is not to say we should be indiscriminate, and entertain every cockeyed notion that we come across, but when presented with something that just does not seem to fit into our established frameworks, we should not be squeamish about poking at those frameworks and seeing if we can’t extend them a bit. We should be bold, but soberly so.

Is Consciousness Like Elan Vital?

Sometimes physicalists compare the belief that the Hard Problem is hard to the vitalism of centuries past. This was the belief that there was some mysterious elan vital, a life force that animated living things beyond the mere mechanisms of locomotion, eating, reproduction, etc. The more we found out about how life worked at a molecular level, however, the less anyone believed in an elan vital. Belief in vitalism was ultimately exposed as a failure to appreciate how beautifully complex and exquisitely specific the mechanisms of life were. Once one understood the mechanisms, however, there was nothing left to explain. Similarly, argue the physicalists, once we understand enough of the cognitive mechanisms of the brain, the Hard Problem will melt away into the details.

Anil Seth (2022) put it fairly well:

Briefly, the vitalist notion that life could not be explained in terms of biophysical mechanisms was neither directly solved (by finding the elusive ‘spark of life’) nor eradicated (by discovering that life does not exist). It was dissolved when biologists stopped treating life as one big scary mystery, and instead started accounting for (i.e. explaining, predicting, and controlling) the properties of living systems (reproduction, homeostasis, and so on) in terms of physical and chemical processes. We still don’t understand everything about life, but what seemed at one time beyond the reach of materialism no longer does. By analogy, the fact that consciousness seems hard-problem mysterious now, with the tools and concepts we have now, does not mean it will always seem hard-problem mysterious—and the best way forward is to build the sturdiest explanatory bridges that we can, and see how far we get.

Elsewhere, Seth (2021) says:

…today’s consciousness researchers may be in a situation similar to that facing biologists, studying the nature of life, just a few generations ago. What counts as mysterious now may not always count as mysterious. As we get on with explaining the various properties of consciousness in terms of their underlying mechanisms, perhaps the fundamental mystery of ‘how consciousness happens’ will fade away, just as the mystery of ‘what is life’ also faded away.

Seth’s admonishment is sobering, but as mysterious as life was at one time, there never was anything about it that wasn’t basically functional. It always was, in principle, explainable in terms of causal mechanisms, but it strained the imagination (at the time!) that those mechanisms could possibly be that small, or that complex. I have faith that if I could talk for 45 minutes to an otherwise scientifically minded vitalist from centuries past (“Hi, don’t be afraid, I’m from the future…”) I could disabuse them of their vitalism. There isn’t a conceptual leap, just a lot of orders of magnitude. It’s all mechanism, all the way down, just really, really, really tiny. This just isn’t the case with phenomenal consciousness. There isn’t any toehold for naked causal bonkings to produce anything like it.

Gottfried Leibniz, in fact, made this point in his famous quote about the mind-as-mill:

Supposing there were a machine, so constructed as to think, feel, and have perception, it might be conceived as increased in size, while keeping the same proportions, so that one might go into it as into a mill. That being so, we should, on examining its interior, find only parts which work one upon another, and never anything by which to explain a perception. Thus it is in a simple substance, and not in a compound or in a machine, that perception must be sought for.

Moreover, phenomenal consciousness (or qualia) is not something we drag into the picture to explain something or other that we observe, as elan vital was invoked to explain what we observe about life, or to use another example physicalists like, as the luminiferous ether was invoked to explain light waves in space in the 19th century. Consciousness is the raw data, the observed thing that needs explaining. It is the light, not the luminiferous ether.

Is Consciousness an Illusion?

Some people argue that what I call subjective consciousness is some kind of illusion. But what is an illusion? It is something that seems one way but is really another. My claims rest on the observation that red really seems red to me. The counterclaim that this is an illusion boils down to, “Red doesn’t really seem red, it only seems that it seems red.” But seeming, like multiplying by 1, is idempotent—inserting more “seeming” clauses into my claim does not change it one bit. Whether red seems red, or seems that it seems that it seems that it seems…red, the Hard Problem stands before us.

The Hard Problem consists of the fact that anything seems like anything at all. If phenomenal consciousness is an illusion, then who or what exactly is the victim of that illusion, and how can it be such a victim without the Hard Problem being a problem for it? The claim that qualitative, phenomenal consciousness is an illusion begs the question (in the pedantic sense of implicitly assuming that which it purports to prove). There is a fundamental bootstrapping problem (you can’t pull yourself up by your own bootstraps). The problem of seemings is not resolved by showing that something seems like X but is really Y. The mystery is that anything seems like anything. How do you build seemings of any kind out of a universe made entirely of blind, stupid, amnesiac particles, however they are arranged?

Keith Frankish (2017, 2022) is a proponent of what he calls illusionism, which basically says exactly this: consciousness, as I characterize it, at least insofar as it is mysterious and most interesting, is an illusion. His account of how the mind works this illusion on itself resembles a lot of higher-order thought (HOT) theories. He claims that while a lot of fancy brain processing goes on under the hood (the lower order thoughts), the mind represents all this to itself as some kind of deeply mysterious, fundamental, ineffable, qualitative experience. It is this representation to itself that I think is analogous to the higher-order thought, and which Frankish says is actually a misrepresentation, albeit a very convincing one, for perfectly good evolutionary reasons.

As with a lot of physicalist arguments, I think “representation” (and its variants and synonyms) is doing a lot of work here. If you define your processing and processors and modules functionally, causally, there is no representation or misrepresentation. Things just clatter along, doing what they do. To say that some part of the mind is the victim of a misrepresentation is fanciful and poetic language. Qualia do not seem ineffable to such a system, because nothing seems like anything. If your account of consciousness rests on A (mis)representing B, you’d better have an ironclad account of representation in the first place. Personally, I don’t have such an account, or at least one that leaves “representation” with any explanatory power. But more on this later.

Is Qualophilia a Failure of Imagination?

It is sometimes said that taking the Hard Problem seriously is a simple failure of imagination: the fact that I could not imagine traditional science (neurobiology, information theory, physics) explaining what it is like to see red says a lot more about my powers of imagination than it does about the actual limitations of traditional science. In the same way, it is argued, a vitalist’s inability to imagine life being nothing more than molecular processes simply proved to be a failure on the vitalist’s part to appreciate just how tiny and complex those molecular processes are. The vitalist’s skepticism, however, ultimately came down to a matter of scale and complexity, and scale and complexity are exactly what more science can deliver. Claiming that more scale and complexity will turn ones and zeros (or their neural equivalents) into red is a non-starter.

To a physicalist, the fundamental components of the world are completely blind to one another, and completely stupid, and have no memory whatsoever. The basic particles just careen in one direction, then another. Even when they attract, repel, or collide with each other, they don’t really “see” or “know” about each other—they just careen (with an occasional bonk). They don’t know why, or what it is that is influencing them to careen in this particular direction at this particular speed. It sounds funny even to say it this way, but I think some people do not really sense in their guts just how blind, just how stupid, just how little memory the fundamental particles must have to a committed physicalist. To get anything not blind and not stupid out of them, you must attribute a lot of power to the notion of “levels of organization”. You can get a great deal of causal complexity out of such levels, and systems can be designed (or evolve) to do many tricky things. It is a real stretch, however, to claim that the redness of red is among them.

My accusation to physicalists is that they do not follow through on their own commitments in a rigorous and thorough way. They claim to be strict vegans about woo—qualitative subjective intuitions—but they help themselves to generous portions when it suits them. They frame theories of consciousness in terms that draw on a whole lot of pretheoretical notions that just aren’t necessarily there in the neurons, bits, bytes, quarks, and photons. We hear, for example, about systems representing stuff (perhaps including a self-model) to themselves, about integrated systems, about the system as a whole, and the like. This shift of the explanatory burden is often accompanied by some kind of examination of the notions in question, but sometimes this is little more than a hand-wave.

The physicalist’s position is an extravagant one, a point which is often overlooked simply because physicalism has been the reigning orthodoxy for several centuries now. The physicalists claim that if you get enough unconscious stuff together in a big pile, and arrange the pile in a certain special way (a complex enough way, perhaps, or a pile that conforms to a certain functional schematic), then subjective consciousness will appear.

The reasoning seems to be that because centuries of scientific advances have shown us that the reductive physicalist approach is the right framework for understanding the universe, it simply must be the case that it is adequate to explain consciousness too. This particular alchemy is a leap of faith on the physicalists’ part, and the onus is on them to show us the money. It is foul play to try to shift the burden of proof back on the qualophiles, claiming that skepticism of the reductive physicalist position betrays some kind of failure of imagination.

Moreover, it is not a failure of imagination that leads me to take the Hard Problem seriously. On the contrary, it is because I can imagine a day not too far off (fifty years? One hundred?) on which we solve Chalmers’s easy problems. On that day, cognitive science and neurobiology will complete their intended programs and actually map every single event in the human brain, every information flow at any level of organization you please, every secretion and uptake of every neurotransmitter. On this day, it will be possible for us (like Mary in her black and white room) to detail everything that happens between photons striking my retina and my uttering, “What a beautiful sunset!” The cognitive scientists and neurobiologists will smile for the cameras, collect their Nobel prizes and go home satisfied, and all the headlines will blare “Mysteries of Consciousness—Solved At Last!” and nothing in their description of the brain will give the slightest hint of what it is like to see red, or why anything seems like anything at all.

Yes, it is true that I cannot imagine that day in detail, in the sense that I do not have that final theory at my fingertips down to the last synapse (otherwise I would be the one collecting the Nobel prize right now), and there’s the rub, the physicalists would say. If I could see that theory in detail, they argue, it would be clear why red seems like red.

For nearly a century, mentioning consciousness was a career killer in the field of academic philosophy. In the last generation or so, however, the question of consciousness has been coming up with greater and greater urgency, and it is attracting pretty level-headed, math/science type people—not mystics, not new-agers, not religious wishful thinkers. I think this is so precisely for the reasons that I mentioned above: as science progresses, and closes in on its stated goals regarding our brains, its limitations stand out in ever sharper relief.

The physical sciences, as their boundaries of inquiry are currently construed, deal only in functional behavior, externally measurable effects. There are perfectly valid questions about nature (what is it like to see red?) that are outside the bounds of natural science as currently practiced. That is, it is conceivable that we could have a complete and perfect understanding of physics and all the other hard sciences, and never quantitatively articulate, let alone answer, those questions. What I imagine here may be mistaken in some way, but it certainly does not represent a failure of imagination on my part.

My seeing of red is not a philosophy; it is not a way of thinking about or interpreting some theory or idea; it is not a bit of linguistic sophistry; it is not an abstraction; it is not an inference I have drawn or some metaphysical gloss I have put over reality. It is a brute fact about the universe, a fact of nature. It is really, really there. It is explanandum, not explanation. As such, it is incumbent upon our natural science to explain it. If my seeing of red is not amenable to the currently accepted methods of natural science, then so much the worse for those currently accepted methods. People who deny the existence of qualitative consciousness and its implications remind me of the church officials who refused to look through Galileo’s telescope because they did not want their neat and tidy theological world upset by what they might see.

That said, I don’t want to be coy about the fact that ultimately what I am saying here rests on a big fat intuition. In some circles, “intuition” is a dirty word, but in this case it’s an intuition I’ll stand by. I’d like to get you to stand by it too, but for now I’ll settle for getting you to admit that you stand by some “intuitions.” What if I ask you how you know you are conscious at all, in any sense whatsoever? Never mind if the redness of red is some kind of mysterious qualitative gloop or “mere” information processing, or a pattern of synapses firing, how do you know you have any thoughts at all? For any answer you give, I can respond like a tiresomely precocious child: “But how do you know?” At some point, you just have to say (perhaps with some annoyance) “I just do, OK??” After a few iterations like this, the premise of “I think, therefore I am” is solid as a rock, and no one seriously doubts it.

What Could There Be Besides “Normal” Physics?

Physics and physicalism are not so much wrong (except in their claims of exclusivity) as they are incomplete. This is just the way science works. Newton invented a formal basis for physics, and for a long time it seemed dead accurate. But along comes Einstein, and it turns out that while Newton’s physics was perfectly consistent and accurate within its domain, it was incomplete—it is merely a special case of a more general set of laws. Then, a decade later, Einstein comes out with general relativity, and shows that his own earlier work, while perfectly applicable within its proper domain, is really just a special case of still more general laws (hence “general” vs. “special” relativity).

Science works by adding more layers to the outside of the onion. Old theories are not so often disproved by new ones as they are generalized and subsumed by them. When we finally take the Hard Problem seriously, it will usher in a true scientific revolution. This will not simply be a matter of surprising new results, like room-temperature superconductors, but a rethinking of what questions we ask and how we ask them, in much the way quantum mechanics forced a rethink.

Assuming we are willing to bite the bullet of admitting that there is something we can’t explain going on when we see red (even if we can’t even articulate the question in a scientifically respectable way), where do we go from here? What kind of layer can we add to the outside of the onion? There is no strictly conservative way out of this mess. We want the least weird description of what the universe would have to be like for beings like us to be in it. If there must be weirdness at all, we have to confront it head-on, bracket it, constrain it, and characterize it in some way that allows us to keep all the wonderful stuff we’ve already figured out.

The way the hard sciences break the world down into the bonkings of blind, stupid, amnesiac particles just can’t explain what we need to explain. We have to rethink how we talk about the lowest levels of reality in a way that doesn’t throw the baby out with the bathwater, and keeps all the physics we already know intact, but adds a way for consciousness to exist. Loopy as it sounds, consciousness, or something that scales up to consciousness in certain kinds of systems, must be built in at the ground floor, as part of the fundamental furniture of the universe.

Someday, after we have pinned it down a bit, it will stand right up there with mass, charge, and spin. This view is traditionally called panpsychism, but some people prefer pan-protopsychism to emphasize that it is not consciousness as we know it that stands as a fundamental building block of the universe, but some tiny crumb or spark that, when scaled up, aggregates into full-blown human consciousness (albeit perhaps only under certain conditions or in certain types of systems).

Also, “panpsychism,” to some people, has medieval, vitalist connotations; most contemporary panpsychists want to dissociate themselves from the belief that “rocks think.” No one knows (yet) the principles according to which proto-consciousness, if such a thing exists, might aggregate into full-blown human consciousness, or what is so special about brains that they support this aggregation. In the range of potential answers to these questions there is room for different versions of panpsychism, some more outlandish than others.

At this point, I’d like to quote Philip Goff’s Galileo’s Error (2019), since he sums up the gist of panpsychism quite nicely:

Panpsychism is the view that consciousness is a fundamental and ubiquitous feature of physical reality. This view is much misunderstood. Drawing on the literal meaning of the term—“pan” = everything, “psyche” = mind—it is sometimes supposed that panpsychists believe that all kinds of inanimate objects have rich conscious lives: that your socks, for example, may currently be going through a troubling period of existential angst.

This way of understanding panpsychism is wrong on two counts. Firstly, panpsychists tend not to think that literally everything is conscious. They believe that the fundamental constituents of the physical world are conscious, but they need not believe that every random arrangement of conscious particles results in something that is conscious in its own right. Most panpsychists will deny that your socks are conscious, while asserting that they are ultimately composed of things that are conscious.

Secondly, and perhaps more importantly, panpsychists do not believe that consciousness like ours is everywhere. The complex thoughts and emotions enjoyed by human beings are the result of millions of years of evolution by natural selection, and it is clear that nothing of this kind is had by individual particles. If electrons have experience, then it is of some unimaginably simple form.

So Goff and I want to reconfigure the metaphysics of physical reality in response to an intuition we have about the taste of ice cream? Well, yes—and in the chapters to come, I hope to make some version of this seemingly extravagant hypothesis a bit more palatable, and even inevitable.


Goals, Non-Goals, and Ground Rules

Philosophers sometimes argue past each other, to the point where it’s hard to tell if or where they even disagree with each other. In the interests of clarity, I think it is a good idea to describe the sorts of theories and explanations I’m interested in, and the kinds I am skeptical of.

Folk Usage vs. Real Definitions

You Can’t Even Define Your Terms!

For starters, sometimes physicalists hold it against qualophiles that they don’t even define consciousness or qualia. I plead guilty to that. We have a mysterious phenomenon. We can point to it, and try to approach it, and start to say some things about it, or we can deny that it exists. What we can’t do is define it (yet). That’s just how science (or inquiry more broadly) works. Defining what you are talking about is the capstone of the pyramid, the very last thing you do. Isaac Newton said some very intelligent, perceptive, and true things about light, but he was centuries away from defining it.

While we should allow for the fact that we can’t precisely define the thing we are trying to explain or understand (at least not at first), we should be as clear as possible in the terms we use in the explanation of that thing. A lot of philosophers are glib in their use of terms like information, computation, symbol, represent, and even physics. If you argue that consciousness can be explained by any of those things, you had better be ready to tell me what you mean by them. You should also be able to tell me what explanatory power you get from characterizing reality in those terms.

Moreover, as we learn and theorize more about something we are interested in, like light or consciousness, we may be able to characterize it more precisely than we could before, when all we could do was point to it. There’s a catch, though. What if we find some underlying constitution or structure of the thing we are interested in that really seems to explain a lot about it, even comes close to defining it, but does not completely line up with what we were pointing at originally? That is, our new way of characterizing the thing, with a stronger theoretical basis, includes some stuff that we didn’t use to think of as examples of that thing, or maybe excludes other things that we used to think of as examples of that thing.

The classic example of this is the folk conception of fish. Eventually we redefined “fish” based on internal anatomy, which in turn is based on evolutionary history. We decided that “fish” includes some very unfishlike things that scuttle along the ocean floor, but excludes whales and dolphins. The creatures that count as fish to us constitute a different set than those that would count as fish to a medieval person. As we get our theoretical feet under us, we should expect this kind of thing. We will want to redefine terms in ways that vary somewhat from our pretheoretical “folk” understandings of what those terms used to mean. This entails judgment calls as we discover and theorize: when do you nudge the definition of a term over a bit, and when have your new categories caused you to diverge so much from prior usage that you should just coin a whole new term to talk about what you mean, and leave the old term to the folk to use in everyday life?

As you make these judgment calls, there are two things you definitely should not do, however. First, you should not decide that all the ignorant pretheoretical people were wrong in their use of the term. They were happy calling whales fish. You redefined it for your own purposes, and that’s fine, but they were not making an incorrect claim about the world in their “misuse” of the term fish. They just had a different definition.

Second, you should not err in the other direction, letting folk usage dictate the kinds of theories you entertain. If folk intuitions about how the world works were counted as definitive evidence against an otherwise compelling theory, we would never have figured out that the Earth goes around the Sun instead of the other way around. By the same token, if a philosopher decides, for example, that knowledge is justified true belief, and they go on to develop a theory based on that definition, the fact that some clever person comes up with a “counterexample” that shows that the theory violates folk intuition should not count against it (take a look at the Gettier problem for an example of this kind of reasoning, if you are interested).

Unless, of course, your aim is a precise and elaborate articulation of folk intuition, which, following the Earth/Sun analogy, makes you Ptolemy, not Copernicus. You are providing an elaborate reflection of people’s pretheoretical intuitions and handing them back to them rather than figuring out what is really going on.

I have no interest in the project of writing a perfect descriptivist dictionary. When philosophizing about X, I don’t want to come up with a perfectly worded, concise listing of the 17 ways in which people in the street talk about X, or even a single perfect formulation that exactly captures common usage of “X” with no remainder. For the most part, common usage is interesting insofar as it points to some actual thing, process, or fact in nature that we should be exploring.

Is a Hot Dog a Sandwich?

Philosophers love to define terms. It is said that a philosopher would rather use another philosopher’s toothbrush than use their terminology. Many debates are not so much about what is true or not true, but about how we should define and use terms. For instance, philosophers worry a lot about meaning. One of the divisions in all that worry is between those who are internalists about meaning and those who are externalists about meaning. (Don’t worry about what this means (har!). More later.) It bothers me when people phrase this as the question of whether externalism or internalism is “true.” They aren’t the sorts of things that are true or false. The argument comes down to how you choose to define “meaning,” and not to what meaning really is.

We have theories of gravitation and electron orbitals, theories that are explicitly confirmed or falsified by experiments. Internalism and externalism are not theories of that kind, and appropriating the terminology of the hard sciences in this way is scientism. Instead of saying “[my favorite ism] is a theory about meaning and it is true”, we should say something like: “I choose to characterize meaning in [my favorite way]. This may clash with some of our common intuitions, but will be consistent enough with others that I don’t think I am twisting usage too far out of shape. Moreover, thinking of meaning in these terms, I hope you will agree, allows us to reach some other insights and sheds surprising light on other issues…” If you are into this kind of literature, you might know that there is an overlapping debate as to whether mental content is narrow or broad. Jeez, I don’t know—it depends on how you define content, assuming we even try to do so.

My point here is not (for the moment) to argue for or against any particular ism, but to make the somewhat higher-level statement that these are judgment calls. My own inclination is to coin our terms in such a way as to respect the really-there things in nature, to use “element” to talk about things like hydrogen, helium, and lithium, and not air or water. Just as I would rather be Copernicus than Ptolemy when it comes to respecting pretheoretical intuitions, I’d rather be Mendeleev than Aristotle when it comes to what we call elements. We want to carve nature at the joints conceptually, and then we want to speak as clearly as possible to convey those conceptual carvings.

Wittgenstein famously said that what we cannot speak about we must pass over in silence. My gloss on that is what we don’t quite understand, we must be vague about. I play a bit fast and loose with my own terminology, and I like to think that this is not mere sloppiness on my part. Premature hair-splitting is actually harmful. It encourages us to think, wrongly, that we are at a more refined stage of our inquiry than we are. We must remind ourselves to think broadly, boldly, and openly. When it comes to consciousness, we are still painting with broad strokes, maybe with a palette knife, or even a roller. We should not be pretending to use the fine watercolor brush.

Caricatures of Some Physicalist Arguments

The following caricatures are, admittedly, straw men, at least to some extent. They are not far off, however, from actual arguments that I see in one form or another pretty often. By satirizing them here, I hope to illustrate what I find wrong with the more subtle versions of them.


Sometimes people point out that consciousness has survival benefits: it is a way of integrating information about the world and formulating intentions and instigating actions that help us. These accounts tend to focus on Chalmers’s “easy problems” and thus miss the thing about consciousness that makes it so tricky.

Imagine that Charles Darwin, on his voyage on the Beagle, came across an island in the Pacific Ocean that had a peculiar ecosystem. The island was inhabited by slow-moving, fluffy, fat rodent creatures that nibbled on the grass that grew in abundance there. The island also had sharp-toothed predators, who would lie in wait for the rodents to come by. Every time one of the toothy predators sprang, though, the rodent it was stalking would levitate and hover in the air, while the predator paced below. Eventually the predator would skulk away, and the rodent would gently float back down to earth.

What would Darwin say about this? He might say that the ability to levitate saved the rodent, and that the rodent species evolved this ability because it enables them to escape being eaten by the predatory species, thus making them fitter to survive in their environment. He would have to be fantastically incurious, however, if that were all he said. However advantageous the ability to levitate might be, and however neatly this advantage fits into his theory of natural selection, Darwin, one hopes, would immediately be moved to ask questions in an entirely different realm of inquiry. How is levitation possible in the first place?

Big Dump Truck. Really Big Dump Truck.

What if someone told you they had figured out the mystery of consciousness, at least in part? Among other things, perhaps, a big dump truck, according to their hypothesis, would be conscious. The catch is that it would have to be a really big dump truck—planet-sized. At least as big as the Moon, maybe bigger than the Earth (they haven’t worked out all the variables yet). Obviously, we are in no position to build such a thing, but if we could, it would definitely be conscious.

If someone excitedly explained this theory to you, you might be a bit skeptical. You might express doubt, or ask questions about the details, or ask why it should be that a huge dump truck is conscious. In response, imagine that your friend confidently told you that your imagination was too limited, that you were holding on to some ascientific prejudice or vanity, and that if you could really, really conceive of the size of this dump truck, even you would see that it just was conscious. It would have to be. You just aren’t trying hard enough. You just aren’t understanding how big a dump truck we’re talking about here. Maybe you aren’t applying the right concepts to the dump truck, or considering it under the correct mode of presentation.

Size isn’t the problem, and dump trucks have nothing to do with consciousness at all. I don’t have to calculate the gravitational field generated by each enormous lug nut on each continent-sized wheel to tell you that you are not going to make a dump truck conscious by just making it big. There is no connection between the two properties. While I acknowledge that the picture is a little more muddled when it comes to the distinction between phenomenal consciousness and, say, information processing, this is kind of how I feel whenever someone tells me that consciousness “just is” information processing, but really, really complex information processing, or data structures arranged in a certain way, or self-modifying self-models, or something like that. You just can’t get there from here, and fleshing out the details won’t help you. It’s just not the kind of thing that could ever build up to consciousness, no matter how much of it you pile on.

Correlation vs. Entailment

What if, every time I turned on my kitchen light switch, the neighbor’s dog barked? Let us say that I tried this a hundred times, and each time, the dog barked, even when I got up in the middle of the night, snuck downstairs, and silently turned on the light. Imagine further that I hired an electrician to follow the wiring, and they found nothing out of place. Let’s say I went over to the neighbor’s house and examined the spot where the dog was tied up, and even put the dog’s leash and collar on myself and lay down in the dog’s spot and had my sister-in-law turn on the light and felt no effect—except that the dog still barked, uncollared, standing next to me.

At this point, I would be pretty frustrated. I could respond in any of several ways that are perfectly legitimate, and one way that is not. It would be reasonable, if a tad incurious, for me to say with a shrug, “Forget it. I have a life to lead, bills to pay, more important demands on my attention,” and put a piece of masking tape over the light switch so no one ever used it again. Or I could double down. The electrician must have missed something. I will hire a different one and pay even more money. I will hire an acoustic engineer and see if they detect high frequency sounds that the dog responds to. This stubborn determination is also, in its way, reasonable. I could even get a little flaky, and decide that, having exhausted the powers of ordinary science, I should hire a medium, hold a seance, burn some incense, align some crystals with ley lines, and that kind of thing. Even if you don’t put much faith in that stuff, I would argue that at least it is, in some sense, scientific. I would be trying to find a link between a cause and an effect.

Which brings us to the illegitimate response. It would definitely be invalid to claim that the dog barking just is the kitchen light switch being turned on (perhaps, however, viewed under a different mode of presentation or some such). In response to this “explanation,” I could propose a zombie version of the scenario: imagine a world in which I turned on the kitchen light switch, and the dog didn’t bark, and only the kitchen light turned on. Someone could claim that such a scenario is inconceivable, and that if I knew all the details, and thought deeply enough about them, I would see that it is not just wrong but incoherent that anyone could turn on the kitchen light without the dog barking. I would argue against this claim, and if we felt like it we could argue all night about whether the zombie scenario is logically conceivable or metaphysically possible or vice versa, but unless you accept the claim that the dog barking just is the kitchen light switch being turned on at face value, you can’t rule it out. The best you can do is to throw up your hands and say there is a brute correlation between the kitchen light being turned on and the dog barking, and we can go no further. What you cannot say, however, is that it should have been obvious beforehand that the kitchen light switch being turned on entailed the dog barking, and we should have expected it if we thought about it harder. My accusation to physicalists is that their arguments boil down to a claim that phenomenal consciousness just is the working out of the easy problems—information processing, computation, and such—and as with the barking dog, there are a few more pieces of the puzzle left for us to find.


Until a couple of centuries ago, scientists (“natural philosophers”) thought that heat was some kind of invisible fluid, which they called caloric. When you placed something cold near something hot, the fluid equalized by flowing from the hot thing to the cold thing, until they both were the same lukewarm temperature. This fluid hypothesis checks out from an intuitive, folk-physics point of view. It seems to explain a lot of what we observe in the real world. Later, of course, we figured out that heat is molecular motion—that temperature is the mean kinetic energy of molecules. For molecules of a given mass, when they are slow, that’s less kinetic energy, and less heat. When they are fast, that’s more kinetic energy, and more heat. When you put a hot thing near a cold thing, the speedy hot molecules collide with the sluggish cold molecules, and transfer some of that energy, and eventually everything becomes lukewarm.

The important thing here is that the molecular motion does not give rise to heat, or produce heat, or serve as a necessary condition for heat. The molecular motion just is heat, and heat just is molecular motion. Every single empirical result from any experiment you could ever perform about heat is explained by this hypothesis. It is awkward and impractical to speak in terms of molecular kinetic energy (“You want a sweater, Grandma? The average kinetic energy of the gas molecules in this room has dropped below your usual comfort threshold.”). Because there are interesting and surprising (to us) things that happen when molecular kinetic energy is transferred at large scales, we study convection, conduction, and radiation of heat as if they were forces in their own right, but no one doubts that it’s all just molecules in motion. Once God nailed down the truths about molecular motion, there was no more work to do (nor any work He could do) to come up with the “higher-level” laws of thermodynamics.
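For readers who like the identity spelled out quantitatively, the standard kinetic theory of an ideal gas puts it in one line: the absolute temperature T just is, up to a constant, the average translational kinetic energy of the molecules:

```latex
\left\langle \tfrac{1}{2} m v^{2} \right\rangle = \tfrac{3}{2}\, k_{B} T
```

Here m is a molecule’s mass, v its speed, the angle brackets denote an average over all the molecules, and k_B is Boltzmann’s constant. Note that there is no further step in which this energy “causes” or “generates” heat; the equation is the whole story.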

Am I Asking Too Much?

This is what we are shooting for. I want a theory that says consciousness just is X, with no remainder. Is that fair? Is this kind of austere reductionism demanding too much of my opponents, the physicalists? I think it is fair. This is how science works. It is an inherently reductive enterprise, with nothing but efficient causation, no final causation. All the causing is done from behind, all the constituting is done from below. No new properties are allowed to slip in between the layers. This is what Rutherford was getting at when he said that all science is either physics or stamp collecting.

There is nothing wrong with higher level sciences, and as a matter of practice, they have a wide-open future ahead of them, but they are, in a certain principled sense, derivative. There will always be meteorologists, and knowing a lot about physics won’t give you much of a leg up when you start studying the science of weather. No one thinks that we could or should plot the trajectory of a hurricane by calculating each molecule of water and air that makes it up. Meteorology has its own ontology, its own laws, constants, and the rest of it. It is a free-standing science in its own right.

Nevertheless, no one doubts that, in principle, a hurricane just is all those molecules, and there is nothing going on in the dynamics of the hurricane that isn’t 100% entailed by the physics of those molecules. Once you lock down all the micro-facts about each individual molecule, there are no more degrees of freedom left available to the hurricane.

As a methodological or practical matter, we invent intermediate “levels of organization,” and coin new words to talk about them and their dynamics. We could never, ourselves, deal with hurricanes in terms of their component molecules. While that would be astronomically complicated in practice, in terms of how our universe is put together, it is really quite simple. As the universe decides how to behave from one moment to the next, it only needs to know about the lowest level. So it is with all the sciences. The explanatory gap, the failure of entailment, with regard to consciousness should embarrass us.

I demand of the physicalists that they not engage in mushy thinking, that they be thorough in their own reductive project, that they apply their precision and rigor all the way down. We can have intermediate levels, and objects, laws and all the rest of it, but only if we remember at all times that this is a convenience to us, due to our limitations, and really just shorthand for a much more complicated story going on at the lowest levels. There is a reason physicists call their eventual theory that will unite relativity and quantum mechanics the Theory Of Everything (TOE), and not the Theory Of All The Low Level Stuff (TOATLLS). They are not shy about what they think is entailed by getting the microphysics right.

If physicalists carry out their own project with integrity, consciousness will stick out as an inexplicable problem for their picture of reality (in which case they won’t be physicalists anymore), or they will be forced to fall back on eliminativism. This is ultimately the position that most of them take, however they sugar-coat it. It means they eliminate consciousness by basically denying that it exists in the Hard Problem sense. It comes down to something like “I report on red, I respond to red, but I simply don’t know what you are talking about when you speak of the inherent redness of red.” This, to me, is the equivalent of a little kid sticking her fingers in her ears and chanting, “La la la! I’m not listening!” It is one thing to eliminate an explanatory hypothesis once it has been disproven as an explanation or superseded by a more plausible one, as in the case of the élan vital or the luminiferous ether. It is another matter to deny the existence of the explanandum, the thing we are trying to explain, when it sits before you staring you in the face.

My instincts and temperament in all this are not woo woo or mystical. Quite the contrary. Ultimately, as a qualophile, I am the hardest of hardcore reductionists. I like reductionistic explanations. I want to see the reduction. Show me the money, no hand-waves allowed. Show me heat, and make it obvious that it just is average kinetic energy. Show me a hurricane, and explain that it just is water molecules, even if it is inconvenient to deal with them at that level. Show me consciousness, and make it clear, at least in principle, that it just is billiard balls banging around. If we are going to be good reductionists, when we can’t do that for any given phenomenon, maybe we’ve hit bottom. Maybe we’ve got something that is already as low as we can go in our analysis, even if it seems surprisingly big, or complicated, for the kinds of things we like to think of as occupying the lowest levels in our reductive pictures of the universe.

Why I Am Optimistic

Lately, there has been a surge of interest in consciousness, and a growing acknowledgment that there is a deep, deep problem here. It is exciting to bear witness to a critical mass of smart people coalescing around a problem like this. It is a great thing to be here now. In general, I am impressed with the integrity of the inquiry so far. Almost without exception, the contemporary books and articles about consciousness I have read are written by honest people just trying to get to the heart of the problem. They use plain language, and are willing to admit what they don’t know. This bodes well, I think.

People have been trying to figure out consciousness for millennia. Why should we crack this nut now? Basically, we have better tools now. Maybe not good enough tools, but certainly better. Obviously, neuroscience and physics have progressed since Descartes’ day, but we also have some versatile conceptual tools. Along with its explosion of information technology, the last century or so has also seen a great deal of rigorous thinking about computation and symbol manipulation. The closely related field of information theory has also helped us invent a language which allows us to begin to talk about ways in which the brain might work. A century or more ago, the operative conceptual model of mechanistic functioning was the steam engine. Now the operative conceptual model is the computer, which, while insidiously misleading in some ways (I think), is a step closer to the truth. At least it is more illuminating to think about why minds are not like computers than it is to think about why they are not like steam engines.

Besides the deep thought we’ve done about computation and information, we have also discovered quantum mechanics in the 20th century. Beyond the implications of quantum physics itself (more on this later), quantum mechanics has forced us to think hard about what we are doing when we do physics, the limits of physical explanations of anything, and where physics ends and philosophy begins. So maybe we will make it over the hump this time, or maybe we will fall back, fall apart, and the problem will lie dormant for another 50 years. I can’t tell. I just hope we break through in my lifetime.

Some physicalists say that the qualophiles are crazy to worry so much about consciousness, and that the ancients had the excuse that so much of the natural world was mysterious, and the mind was just another mystery. Now that we know so much about neuroscience and computation and stuff like that, we have no such excuse. It is a weird contrarian anachronism that now, of all times, some perverse collection of philosophers has decided that consciousness does not fit into the natural world. They’re almost like flat-earthers.

I think it goes the other way. It is precisely because we know so much that this problem is rearing its head now. We know more all the time about how neurons work, and we can reduce them confidently to the atomic level. While we don’t have all the details yet, we can see the trajectory of science, information theory, etc., and can get some sense of the outer perimeter of what they could ever tell us. We can think more clearly than ever before about the kinds of questions we can ask them and the kinds of questions they are equipped to answer. Our blind faith, scientism, is giving way to a more mature and realistic sense of the quantitative sciences as tools, incredibly well suited to some tasks, but not so much for others. Laboratory results are great, indispensable even, but the current impasse will only be resolved by a conceptual breakthrough, a shift in our way of thinking. We may have to expand what we think of as science, its proper aims and methods, in a way that does not throw the baby out with the bathwater. We stand now at one of those rare moments in history in which philosophers may actually contribute something useful.

If we were to conduct a little office pool, I’d give it several decades. The state of the field of consciousness studies is somewhat analogous to the state of physics in the year 1900. Most physicists at the turn of the 20th century thought that they pretty much had the basic conceptual apparatus, and just needed to flesh out the details (Max Planck’s physics teacher famously advised him to take up the piano, as there was nothing left to do in physics but fill out a few more decimal places). But by 1900, there were some experimental results which could not be explained within the theories of the day (the so-called black body radiation experiments). Some people were beginning to suspect that they were missing a big piece of the picture. This is essentially where we stand with consciousness. The year in which we finally had a complete, unified quantum theory is usually given as 1927, so I figure we’re facing a few more decades of flailing, plus a margin of about 50% because we don’t even have the same sort of firm Newtonian-style framework for consciousness that physicists did in 1900.


Physicalism: Are We Really Living in a Material World?

As scientifically literate modern people, we understand that the world around us is made of physics. However much we may like to coin “higher-level” terms and use “higher-level” concepts in our day-to-day lives, and even in our scientific explorations, it all comes down to physics sooner or later. I have already said that the first bullet we have to bite in this book is acceptance of the impossibility of explaining consciousness in these terms. If we take the Hard Problem seriously, and we must, we have to confront the need for something fundamental to explain it. Physics is about as fundamental as it gets, in terms of our descriptions of the world, but all the Hard Problem arguments suggest that we can’t implement consciousness on a purely physical substrate. What is a “purely physical substrate,” though? Besides the apparent mystery of consciousness, there is also a mystery of physics. What is the substance of the physicalist’s claim that “it all comes down to physics”? Is there any wiggle room there, any ambiguity?

The term “physicalism” may be interpreted in at least two different ways. First, it may be taken to mean the claim that the stuff that the laws of physics describe is all there is in the universe. There is no mysterious other stuff, no magic spray applied to reality above and beyond the photons and electrons, etc., all of which behave strictly in accordance with physical laws. This sounds like a simple enough claim, at least to the extent that one ought to be able to say whether or not one agrees with it, but (bear with me) even this is a little ambiguous.

There’s No Such Thing as a Purely Physical World

The second interpretation of the term “physicalism” is the somewhat stronger claim that not only is the stuff that physics describes all there is, but the laws of physics are a complete description of that stuff (or will be, as soon as we complete our laws of physics). I would argue that this second, stronger type of physicalism is definitely false, whether or not you buy any of the Hard-Problem-of-consciousness arguments.

A good physicist (which is to say a philosophically humble physicist) will tell you that physics provides a way of predicting the outcomes of certain experiments, and that is all. Strictly speaking, the famous Copenhagen interpretation of quantum mechanics applies across the board—“shut up and calculate.” If you set up a ramp and roll a ball down it, and you measure all the angles, weights, and stuff like that, you can use physics to tell you things like how fast the ball will be moving at the bottom of the ramp, how long it will take, and how much momentum it will have. If you can ask your questions quantitatively, in a lot of cases physics can (at least in principle) give you quantitative answers and predictions.

This claim may seem like a bit of a straw man, in that I am accusing physics (true physics) of being committed to what might be called brute instrumentalism. Surely there are variations and subtleties here. Not every true physicist really believes only in instrument readings. They use models of external reality in their theories, for instance, that go beyond an actual particular readout on a screen. That may be so, but for the point I’m making here, it more or less amounts to the same thing. The intellectual framework of physics, the vocabulary, the methods, the formalisms and theories, are all predicated on empiricism, on testable results, which ultimately means causal dynamics. Physics is not metaphysics—it does not pretend to describe the ultimate nature of reality. As a matter of fact, it cannot, even in principle, describe reality “all the way down.”


Each hard science rests, in a sense, on the science below it (biology rests on chemistry, chemistry rests on physics). This is to say that, for example, once all the facts about the physics of the universe are fixed (all the physical laws and all the positions and momenta of all physical particles), it is automatically true that the chemistry of the universe must be the way it is, and it could not be any other way. The physical laws and facts necessarily entail all the chemical laws and facts. Another way of saying this is that the facts about the chemistry of the universe are a logical consequence of the facts about the physics of the universe. There is simply no way you could have two universes that were physically identical, but chemically different. In the same way, the chemical facts, in turn, logically entail the biological facts, and so on up through the layers of science. As far as the hard sciences are concerned, once God invented physics in all its detail, He was done—He had no more work to do to invent chemistry or biology. The fancy philosophical word for this is supervenience. We say that chemistry supervenes on physics, because chemistry constitutively depends on physics. Chemistry just is physics, looked at (by us) a certain way, chunked up (by us) a certain way.

Each layer in this pile of science consists of (a) extrinsic functional properties (which, taken together, support or implement the layer above), and (b) intrinsic properties (which are supported, or implemented, by the extrinsic functional properties of the layer below). The field of biology studies biological entities which behave the way they do ultimately because of their chemistry. Chemistry studies compounds which behave the way they do ultimately because of physics, which these days means quantum mechanics. Quantum mechanics behaves the way it does because…?

At the lowest layer of physics we can, in principle, only know the extrinsic functional properties, those which give rise to the macroscopic physical world we see around us. All we have to describe the world at that level are the famous Schrödinger equations. We do not know, and we cannot know, the intrinsic nature of matter and energy described (with nearly 100% accuracy) functionally by these equations. We can say quite accurately how matter and energy behave at the lowest levels, in terms of how they impinge on other matter and energy, but we can’t say anything beyond that about what it is that is doing the behaving. Something’s functional characteristics are perfectly described by the equations of physics, but we will never be able to know what that something is. Some people (including most practicing physicists) say that there is no “something else” besides a perfect functional description, and that once you have specified how something behaves at the lowest level of physical reality, there is nothing left to talk about. At the very least, it makes no sense to speculate about such things.

Unimplemented API

To use an analogy from computer science, it is as if each layer of natural science could be thought of as a program module. Each module is implemented a certain way, and each presents an API (application programming interface) to the level above. Each module makes use of, or calls down into, the API presented by the level below. Each module does not, and should not, know or care how the level below is implemented, as long as the lower-level module faithfully presents the correct API. But suppose that out of curiosity, although we operate at a certain level, we wonder how the API we use at our level is implemented at the level below. So we read the source code of the module below and find that it, in turn, relies on an API presented to it by a module still further down. It seems a bit absurd to me to suppose that at some low level we get to the magic API that just is—that is, the API that exists only as an API, but which is not implemented at all!
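The analogy can be made concrete with a toy sketch. All of the names and signatures below are hypothetical, invented purely for illustration: each “science” is a function implemented entirely by calls into the level below, and the regress has to stop somewhere.

```python
# A toy stack of "sciences", each implemented by calling down into
# the level below.  Every name here is invented for illustration.

def biology(cell):
    # Biology's API, implemented entirely in terms of chemistry's API.
    return chemistry(cell["molecules"])

def chemistry(molecules):
    # Chemistry's API, implemented entirely in terms of physics' API.
    return sum(physics(m) for m in molecules)

def physics(particle):
    # The bottom of the stack.  A "pure" physics would be an API with
    # no implementation at all underneath it -- which is the absurdity
    # the analogy is pointing at.
    raise NotImplementedError("the magic API that just is")
```

Every call from the top eventually lands on the unimplemented bottom layer; there is no level at which an API is simply “there” without something realizing it.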

Rosenberg’s Game of Life Physics

Gregg Rosenberg (1998) once used an analogy with the game of life. The game of life is a scenario invented by the British mathematician John Conway. It consists of a (possibly infinite) two-dimensional grid of bits, or pixels, like a big sheet of graph paper. At any moment, each square of the grid is either on or off, 1 or 0. There is also a clock of sorts, in that we speak of the state of the grid at time t, where t is an integer. We begin the game with some configuration of on and off squares on the grid, at time 0. For each subsequent tick of the clock, the state of each square on the grid depends on the state of its eight surrounding neighbors at the previous tick according to the following rules: an on square with fewer than two on neighbors turns off; an on square with two or three on neighbors stays on; an on square with more than three on neighbors turns off; and an off square turns on if and only if it has exactly three on neighbors.

You can think of it as a population, in which individuals die of loneliness if they have too few neighbors, but also die if they are too crowded, and can be conjured from death to life if they have exactly three neighbors.

Computer science students are often made to write programs to implement the game of life, and display the grid on a screen, with the clock ticking at one tick per second or so. If you start with a random splatter of on and off pixels, and just let it rip according to the above rules, as the clock ticks along some clusters of on pixels die out, some grow, some move, and some even eat others. Much has been written about the fascinating complexity that arises from these simple rules. Rosenberg asks us to imagine the game of life as a toy physics, and to consider a two-dimensional universe in which the rules listed above were the only laws of physics. He then asks whether consciousness could exist in such a universe (it has been shown that one can implement a Universal Turing Machine—a computer—in the game of life).
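For the curious, here is a minimal sketch of one such student exercise in Python. The function name, the sparse set-of-coordinates representation, and the “blinker” example are my own choices for illustration; the rules themselves are the standard ones.

```python
from collections import Counter

def life_step(grid):
    """Compute one tick of Conway's game of life.

    The grid is represented sparsely, as a set of (x, y) coordinates
    of on squares; every other square is off.  Returns the set of
    squares that are on at the next tick: an on square with two or
    three on neighbors stays on, an off square with exactly three on
    neighbors turns on, and every other square is off.
    """
    # For every square adjacent to an on square, count its on neighbors.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in grid
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        square
        for square, n in counts.items()
        if n == 3 or (n == 2 and square in grid)
    }

# A "blinker": three on squares in a row flip between horizontal and
# vertical orientations, with period two.
blinker = {(0, 1), (1, 1), (2, 1)}
after_one = life_step(blinker)    # vertical: {(1, 0), (1, 1), (1, 2)}
after_two = life_step(after_one)  # back to the original row
print(after_one, after_two == blinker)
```

Run on a random initial splatter instead of the blinker, a loop around `life_step` produces exactly the dying, growing, moving clusters described above.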

Ha—trick question! The rules, as laid out in the game of life, can’t serve as a complete specification of a universe, even a toy one. What does it mean to have a pure game of life universe? What does it mean for a square to be on or off? These are properties whose only specification within the game is that they be distinguishable from each other: what is on? It’s not off. What is off? It’s not on. The properties of on and off are circularly defined, and the rules, then, are defined in terms of these circularly defined properties. Whenever we implement the game of life, we represent on and off with checkers on a board or, more often, electronics in a computer. For us, these properties must be instantiated by some substrate.

You couldn’t have a “pure” game of life universe, because the rules and properties as specified underdetermine the universe. There is no such thing as a “bare” property, characterized entirely in terms of its contrast to other properties, but this is exactly what a pure game of life universe asks us to imagine. Rosenberg used the game of life as a toy physics to make the point, but as he says, our own real physics is in no better shape. It is more complicated, so the circle is a bit larger, but “pure” physics in our world makes no more sense than it does in the game of life world.

What’s at the Bottom? Information? Nothing?

An electron is as an electron does, but what is it in there doing the doing? Physics is a castle in the sky, an elaborate structure built on a foundation of nothing. Or, rather, built on circularly defined terms, much like a mathematical system. Each of the lowest-level things that physics deals with (the fundamental particles and forces) is defined mathematically in terms of the other particles, forces, or some constants. Even the properties ultimately come down to behavioral dispositions, predictable causal dynamics. Mass, for instance, is resistance to acceleration. Everything in physics, then, is defined relationally, in terms of the other things in physics. Physics gives us a schema, a description of causal dynamics, but it is inherently silent about the stuff doing the causing. Physics is a playwright who writes the dialogue but leaves the casting to someone else.

Any system whose parts obeyed the same relations among themselves, or interacted according to the same patterns of interaction that our physics describes, would automatically have physics identical to our universe’s, no matter what its parts “really” were. We could, in principle, transpose our physics to another universe made of entirely different stuff, as long as the causal dynamics of that stuff matched perfectly the causal dynamics of the stuff that instantiates our physics.

Another way of saying this is that the physical universe is multiply realizable. Given a complete and perfect set of physical laws and physical facts, even though all the other hard sciences would be locked in place, God would have still more work to do before He had a complete recipe for a universe. He could create any number of different universes, made of different stuff, but which were physically identical (and thus chemically identical, and biologically identical, etc.), as long as the structures of the causal dynamics among whatever He chose to make each universe out of were identical. It would be impossible for an inhabitant of any of those universes, from within the science of physics, to get underneath the physics and see the intrinsic nature of the matter out of which his or her particular universe was made. This is true completely without regard to any questions of consciousness.

There is only one kind of stuff in the universe, but physics is inherently incapable of completely describing that stuff. This unknowability of the intrinsic properties of the lowest level of reality is going to be a problem for (or at least an aspect of) any physics (as the science of physics is currently construed), and is not particular to quantum mechanics. There is always going to be, in principle, a gap at the lowest level of our descriptions of the natural sciences. Bertrand Russell made the point quite nicely in a couple of quotes: “The only legitimate attitude about the physical world seems to be one of complete agnosticism as regards all but its mathematical properties” and “Physics is mathematical not because we know so much about the physical world, but because we know so little: it is only its mathematical properties that we can discover. For the rest, our knowledge is negative.” Indeed, the explicit decision to carve off extrinsic causal dynamics as a legitimate subject of scientific study, while slapping a “There be dragons here” sign on any questions of intrinsic essences, is the titular error in Philip Goff’s Galileo’s Error (2019).

There is a world of difference between saying, “Because we can’t know what is at the bottom rung of the ladder, we must remain humbly silent and agnostic,” and saying, “Because we can’t know or talk about it, it must not exist.” It is this second, positive claim that I do not agree with. We are asked to believe in a world composed of pure causal disposition with literally nothing doing the disposing. Some people, when confronted by the fact that physics all comes down to circularly defined equations and/or algorithms, draw exactly the wrong conclusion: that our universe is mathematical or algorithmic at its core. Since no matter how advanced our particle accelerators, no matter how true our theories, all of physics must rest on abstract equations, abstract equations must lie at the bottom of the physical world. Electrons, by this way of thinking, are made of information, quarks are algorithms. This idea was championed by the theoretical physicist John Wheeler, who called it “It from Bit.”

The map is not the territory. Just because all of our ways of talking about physics must, in principle, bottom out in a cluster of equations, it does not follow that the stuff we are talking about is made of equations. There is still something down there doing the equating. We just can’t know what it is, or anything about it other than its outwardly efficacious participation in the causal mesh, which is described so well by the equations. As our technology becomes more and more refined, we can represent more and more information with less and less physical stuff (vacuum tubes to transistors, to integrated circuits, with ever more transistors crammed on a chip). To imagine, however, that the universe itself has perfected its “technology” to the point where it can leave coarse physical matter behind entirely, and instantiate “pure” information, information in itself, is nutty. It is an old, familiar kind of nuttiness, however. It is the same late medieval Platonism that led thinkers to hypothesize concentric crystalline spheres of increasing rarity and fineness around the Earth, with angelic ether filling the void between them and producing music we are too base to hear.

On the one hand, we have a hole at the lowest level of our best descriptions of reality, and on the other hand we have an inconvenient extra ingredient, consciousness, that doesn’t seem to fit anywhere in our descriptions, but probably lives at a pretty low level. The idea that the extra ingredient might fit in the hole has been explored by Alfred North Whitehead, Russell (1954), and Rosenberg (2004). It is essentially the idea behind panpsychism.

Philip Goff (2019) makes the case succinctly. He notes this hole at the lowest levels of our best descriptions of reality—the fact that we describe only behavior and are silent about the actual stuff doing the behaving—and he calls it the Problem of Intrinsic Natures (capitalization his). He goes on to say:

While in the mindset that physics is on its way to giving us a complete picture of the nature of space, time, and matter, panpsychism is absurd, as physics does not attribute experience to fundamental particles. But once one absorbs the problem of intrinsic natures, the universe looks very different. All we get from physics is this big black-and-white abstract structure, which we must somehow fill in with intrinsic nature. We know how to color in one bit of it: the brains of living organisms are colored in with consciousness. How to color in the rest? The most elegant, simple, sensible option is to color in the rest of reality with the same pen.

Panpsychism, at least this form of it, is resolutely monist: there is only one fundamental kind of stuff in the world. This is why I don’t like calling panpsychists “dualists” and why I prefer the clearer term “qualophiles” for the whole Hard-Problem-citing, zombie-conceiving, metaphysically speculating lot of us.

Panpsychism? But Doesn’t That Have Huge Problems?

“All right,” one might reasonably argue, “maybe we can’t know what a quark really is, we can only know exactly how it behaves. So what? My world and my understanding of it, including the laws of physics, remain exactly the same, no matter what the intrinsic nature of a quark really is.” To base a theory of consciousness on this unknowability within science of the lowest levels of reality, we have to say not only that this hole at the bottom of physics is filled by some form of proto-consciousness, but that there is some way this stuff, as such, scales up to the level of human minds. Even if some spark of consciousness instantiates the extrinsic behavior of quarks and electrons, those sparks stay atomized at the quark level, and everything else plays out according to the normal laws of physics. In terms of “explaining” human consciousness we are thrown back upon conventional physicalism. The causal dynamics scale up to our level with the underlying qualitative implementation of quarks not having any role in my seeing the redness of red. So now you’ve made the situation even worse, since (a) you haven’t solved the problem of high-level consciousness and (b) you have needlessly cluttered up our picture of how the universe is put together.

This is known as panpsychism’s combination problem. For large, complicated things like ourselves to be conscious in some special way that outruns anything we might expect to emerge from the causal dynamics, we have to explain how this stuff scales up from the level of a quark to the level of a mind.

We also have to say how human-scale consciousness could be meaningfully efficacious. What could it possibly buy us in terms of its effect on the world beyond simply instantiating the lawful low-level regularities that science has already mapped out so accurately? Given the apparent causal closure of the physical world (causes and effects match up perfectly within normal physics, with no need for, or even room for, any magic whatsoever) how could it do so in a way that would add anything to what we know from our physical laws and facts, but that would not also violate those laws and facts? It seems that, at best, such consciousness would be, as the philosophers say, epiphenomenal: it can’t do anything.


Epiphenomenalism: Even If Consciousness Is Real, What Could It Possibly Do?

Epiphenomenalism is the claim that even if consciousness is real in the Hard Problem sense, there is no room for it to be causally efficacious. That is, we may really see red and feel pain in ways that are irreducible to the mindless unconscious interactions of our brains’ neuroanatomical parts, but our consciousness is a helpless observer. The mindless unconscious parts still do their mindless unconscious work, including controlling our muscle movements and speech, while the consciousness stays trapped in the press box, experiencing it all, including the delusion that it itself is controlling anything.

The main argument for epiphenomenalism is that, since we know physics pretty well, and we are getting better all the time at neuroscience, sometime in the not too distant future we should be able to solve all of Chalmers’s “easy problems.” That is, we will be able to characterize all of our behavior (even the “behavior” of our mental processing, stripped of any considerations of subjective qualitative consciousness) strictly in terms of nuts and bolts neuronal processing without recourse to notions of consciousness. The physical world is causally closed. That is, every physical thing that ever happens has an understood physical cause. Therefore, there is no way that some hitherto undiscovered mysterious force of consciousness could have any physical effect, including the effect of making my neurons fire, my muscles move, etc.

If a perfectly accurate physical account can be given of every neuronal event that happens as I type this, or as I comment on the beauty of a sunset, and this account is given strictly in terms of ordinary physics, this puts qualophiles in an awkward position. Subjective consciousness, if it exists in the Hard Problem sense, would appear to be redundant, an extra, a loose thread hanging off the natural world, or it would violate the laws of physics.

So does consciousness just watch the processing, without influencing it at all? I know I am conscious in some way that cannot be reduced to a functional description of the causal interactions of my micro-parts, and my consciousness certainly thinks that it is in control of my fingers as I type this. It thinks (or experiences) that when I write about how subjective consciousness feels, each word I write is dictated (or at least strongly influenced) by my actual, immediate perception of how subjective consciousness feels.

The epiphenomenalist would have us believe that this is not true, that there is no real contact between the physical body and brain on the one hand and consciousness on the other, or at least only one-way contact (which is problematic in its own right). So, while my consciousness has the phenomenal perception of writing a sentence about consciousness, and of commanding fingers to press certain keys on my keyboard, the completely unconscious mechanistic brain is really ordering the very same fingers to type out the very same sentence.

Essentially, as far as our actions are concerned, including the ones we most closely associate with qualia, we are zombies. All of our exclamations about how much we love the taste of ice cream are generated by “easy problem” neuronal mechanisms. We just happen to have a parasitic phenomenal consciousness along for the ride, one which is deluded into thinking that it is calling the shots. For this to be the case, of course, the mechanistic processes would have to maintain absolutely perfect synchrony with my actual consciousness throughout my entire lifetime, or my consciousness would notice the discrepancy. It is as if, given a puppet dancing on a stage, we were told that the puppet is really doing the dancing by itself, but so well, in such perfect sync with the puppeteer pulling the strings, that the puppeteer never catches on.

There are some ideas, the old saying goes, that are so preposterous only a philosopher would take them seriously. No, there is no knock-down purely logical argument against epiphenomenalism, but as would-be scientists, we should feel comfortable discarding the more wildly implausible ideas, and epiphenomenalism is such an idea. Evolutionarily, why would nature have played such an elaborate trick on us? Why not just evolve us as zombies and have done with it? It’s almost as though epiphenomenalism was cooked up as an idea guaranteed to make everyone unhappy. The physicalists hate it because it takes qualia seriously, and no qualophile wants to admit that qualia exist but don’t do anything.

In the epiphenomenalists’ defense, there is nothing mysterious about the synchrony between puppeteer and puppet if some third party is actually controlling both of them. The fingers type a sentence about consciousness mechanistically, and the subjective consciousness says (and believes), “I meant to do that.” Some volitional center could be controlling both our actions and our experience. In this case, we paint our thoughts with a much thinner coat of qualitative consciousness than we might otherwise think.

In our more generous moods, we might believe that the mechanistic zombie part of us is very complex, and it is worth its while to do some cognitive garbage collection and house cleaning, to investigate and thereby improve its internal mechanisms for absorbing, digesting, and applying information about the world and itself. Self-knowledge, even understood purely functionally, has definite behavioral advantages for a complex enough system. Perhaps our purely cognitive machinery has evolved to constantly self-evaluate, to second-guess all of its conclusions and perceptions. Might not such a system “notice” that at some low level of internal representation it could probe no further, that it could not get inside its seeing of red, for example? Might this impasse attract the system’s attention? Could such a system’s self-probing possibly end up being externally articulated, like Chalmers’s book or this one? Would the system ever come up with an idea like epiphenomenalism? After all, it was the mindless mechanistic neural processing which typed this very paragraph, completely unaided by my consciousness, according to the epiphenomenalist. It is not immediately obvious that the answer to these questions is no. In effect, one can imagine that there are zombie, cognitive, “easy problem” analogs to all of our qualia.

It could be, then, that while there is a consciousness in the Hard Problem sense, it monitors unconscious cognitive processing, as if it had a lot of diagnostic probes alligator-clipped onto exposed wires, so to speak, at various stages of this processing. This almost makes epiphenomenalism respectable, but it is still pretty implausible. If the coupling of qualia to functional states and mechanisms is so very tight that every qualitative state is dictated by a functional state, to the extent that even my wondering about consciousness corresponds perfectly to some functional self-diagnostic probing, epiphenomenalism becomes a moot point.

There are not, then, two distinct parts, a mindless functional part and a helpless (but deluded) conscious part; instead, there is just one mechanism which has a qualitative aspect. We are aware of every decision we make, every action we perform as our own, because at a fine-grained level our immediate conscious experience is of the very mechanism that is actually doing the driving. If my mind’s functioning has two aspects, cognitive and experiential, can we even say that one aspect is doing all the willful work and not the other? If you couple the two aspects (functional and experiential) closely enough to make epiphenomenalism plausible, then you couple them too closely to say that one is efficacious and the other is not.

The challenge of epiphenomenalism is a huge bullet the qualophile has to bite. It’s bad enough that we think there is something called qualitative consciousness that can’t be explained using normal causal dynamics and physics, but now we have expanded the mandate. We also have to say how this purported phenomenal consciousness encroaches on the world of causal dynamics and physics in the terms of that world. To be a qualophile who rejects epiphenomenalism is to explicitly reject the causal closure of the physical world. I have said that the word “dualist” is a misleading label for people like me, but epiphenomenalism raises the old challenge to traditional Cartesian dualists: that of interaction between the mental world and the world of physical stuff.

Epiphenomenalism is false. But if this is the case, and qualitative phenomenal consciousness is really guiding my fingers now (as it seems to be) then this spooky mysterious thing called consciousness has macroscopic, observable effects in the real physical world. Where, then, is the interface? Why haven’t brain scientists noticed by now that certain neurons fire at certain times for no reason that they can explain with current physics? If you accept the Hard Problem, and you believe that epiphenomenalism is false, then you are committed to the belief that current physics is wrong, or at least substantially incomplete in some sense that allows for an as-yet undiscovered force to have a physical effect. Somehow, large-scale, high-level consciousness (which is to say, phenomenal consciousness as such, in all its qualitative, redness-of-red glory) is able to exert an influence on, for example, motor neurons, and make them do things that they simply would not do if they were only subject to ordinary physical laws without the influence of consciousness.

If this sounds implausible, it is, but if we are honest qualophiles, we are stuck with it. This may, in fact, be the hardest bullet to bite in this whole book. Even my fellow panpsychists tend to tiptoe around the issue, putting something conscious in at the universe’s lowest levels, but letting good old structural and/or causal scaling take it from there up to and including neurons and brains. It is worth noting, however, that whatever nudge this purported consciousness gives to the neurons, in our chaotic world, it need not be a big nudge to make a decisive difference.

When discovered, I suspect it won’t be so much a case of some single event happening that we can’t explain as a case of many events, each of which should be random according to accepted physical laws, but which happen in sync with each other, or in some pattern that, once recognized, will be undeniable. Any one of these events, when studied alone, will be seen to obey normal physical laws, but considered together, they will have a pattern and an organization that we cannot account for with normal physical laws. I imagine that the influence exerted by consciousness on physical systems will ultimately be compatible with the established laws of physics. This, of course, is pure speculation, but it points us in a certain direction. If we are to take the Hard Problem seriously, and if we reject epiphenomenalism, we are placing our bets on some high-level, large-scale process, structure, or field that has qualitative content and influences physical things through a loophole in physics.


Ned Block’s Turing Test Beater

The Turing Test

The Turing Test is a test for machine intelligence devised by the British genius Alan Turing in the middle of the 20th century. The idea is this: a person (the judge) conducts a typed conversation with a system. If, after some period of chatting in this manner, say half an hour, the judge cannot determine that the system they are talking to is not human, then the system is intelligent.

In my opinion, a system that passes the Turing Test is precisely a system that passes the Turing Test (and is therefore remarkable), but it is not necessarily intelligent (in a sense that does justice to our intuitions of what this term means, at any rate), and certainly not necessarily conscious. Turing himself did not mention consciousness explicitly when he formulated the test. Nevertheless, it is tempting to regard any system that exhibits intelligent behavior as automatically conscious as well as intelligent, although I do not necessarily regard such a system as either.

Block’s Answer

Ned Block (1995) had a fascinating response to the proposed test. He suggested a way to beat it with an algorithm. His solution is technically infeasible, but presents a challenge to our thinking about algorithms in general. You know the old saying that infinite monkeys typing would eventually produce the complete works of Shakespeare? What if, instead of letting our monkeys pound away randomly, we got systematic with that approach and really exhausted the combinatorial possibilities?

Let us say that the test lasts half an hour. Let us also say that the communication line between the judge and the system under test (let us just call this the system’s side of the conversation) is somewhat slow, but fast enough not to be frustrating to an average human typist—say, 50 characters per second. Let us also say that both parties are capable of typing upper- and lower-case letters, the numerals, the common punctuation marks—say, 100 different characters in all. Given that both ends of the conversation can type for the entire duration of the test (perhaps simultaneously), each of them may type any of 100 characters (or no character at all) each 50th of a second during the entire half-hour test. That means there are exactly 100 to the power of (2 (parties) × 50 (characters per second) × 60 (seconds per minute) × 30 (minutes in the test)), or 100^180,000 different entire conversations that could possibly take place during the half-hour test, from both parties holding down the “a” key for the whole half hour, to both of them holding down the “z” key for the whole half hour.
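The arithmetic here is easy to check mechanically. A short Python sketch, using exactly the figures stipulated above (and noting that, since 100 is 10^2, the total comes to 10^360,000):

```python
# Figures stipulated in the text: 2 parties, 50 characters per
# second, a 100-character alphabet, and a 30-minute test.
parties = 2
chars_per_second = 50
test_seconds = 30 * 60

# Total character slots in one complete two-sided transcript.
slots = parties * chars_per_second * test_seconds  # 180,000

# Each slot may hold any of the 100 characters, so the number of
# possible entire conversations is 100^180,000, i.e. 10^360,000.
conversations = 100 ** slots
assert conversations == 10 ** (2 * slots)
```

Python’s arbitrary-precision integers handle the full number directly, although printing it out is another matter.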

Now, imagine that we write a simple computer program to generate each of these possible conversations, and that we submit the resulting (staggering) pile of transcripts to a vast committee and give them a huge amount of time to sort them into two piles: pile A, that contains all the conversations in which the system side of the conversation seemed non-human, and pile B, the (much smaller) pile in which the system side of the conversation seemed to conduct a conversation that would pass for rational human conversation to an average person.

Note that pile B contains the rational-seeming responses on the system side of the conversation, even if the judge’s side is gibberish—pile B is selected only on the basis of the reasonableness of the system side of the conversation. In fact, it contains rational-seeming responses to all possible conversations from the judge’s side (there are 100 to the power of 50 (characters per second) × 60 (seconds per minute) × 30 (minutes in the test), or 100^90,000 of them). Moreover, it contains, for each of the 100^90,000 possible judge’s sides of the conversation, all possible rational-seeming system sides of the conversation. After all, given any particular judge’s side of the conversation, how many ways are there of filling in the gaps so that the system seems to respond as another human would? A lot.

The committee would then throw pile A out. They would take pile B, the one with all the coherent, human-seeming conversations on the system side, and load this pile into a computer, along with a very, very simple program. Once the test started, the program would simply choose at random, each 50th of a second, from among the conversations in its memory that are consistent with everything that has already been typed by both sides of the conversation. Once it has chosen a conversation that meets this criterion, it simply types out the character that the conversation says the system should type out at that particular 50th of a second (or no character at all, if that’s what the chosen conversation specifies).
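Block’s execution engine really is tiny. A minimal Python sketch (the transcript format, one (judge character, system character) pair per 50th-of-a-second tick, is my own stand-in for illustration):

```python
import random

def next_char(pile_b, transcript_so_far, tick):
    """Block's execution engine: at each tick, pick any stored
    conversation consistent with everything typed so far by both
    sides, and emit whatever character it says the system types
    at this tick (None means no key pressed).

    pile_b: list of complete conversations, each a list of
        (judge_char, system_char) pairs, one pair per tick.
    transcript_so_far: the pairs actually typed during ticks
        0..tick-1 of the live test.
    """
    consistent = [conv for conv in pile_b
                  if conv[:tick] == transcript_so_far]
    chosen = random.choice(consistent)  # any match will do
    return chosen[tick][1]              # the system's character
```

All of the intelligence lives in pile_b; the engine itself is nothing but a filter and a table lookup.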

This program could be written in about half an hour by any decent programmer, and it would be guaranteed to pass the Turing Test, using this huge pile of canned responses, assuming the vast committee exercised proper judgment in deciding which conversations appeared to be human and which did not. The intelligence in such a system is in the data, programmed in by the human committee, and clearly not in the tiny, stupid execution engine that reads and acts on the data. Given that the Turing Test supposedly tests for machine intelligence, not the intelligence of the human programmers of the machine, I think that most people would agree that to characterize such a system as conscious or even intelligent misses the point of consciousness and intelligence.

Assuming that you accept that Block’s machine is not conscious (even if, by some characterizations of the term, it is intelligent), if you have a favorite computer architecture that you think is conscious, you really should specify where the difference is between your machine and Block’s. Some people insist that a truly conscious computer must be a parallel processing machine, with many processors (inter)acting together. But it has been shown that any parallel processing computation can be emulated perfectly well on a single processor. (For each timeslice, you make your single processor simulate each of the parallel processors in turn for that timeslice. Then you move on to the next timeslice. So the whole computation just takes n times as long as it would on an n-processor parallel machine.)
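The emulation argument in the parenthesis can be written out in a few lines. A toy sketch in Python (here each “processor” is just a function that advances its own state by one timeslice; inter-processor communication is omitted for brevity):

```python
def emulate_on_single_cpu(step_fns, states, timeslices):
    """Emulate an n-processor parallel machine on one processor.
    For each timeslice, the single processor simulates each of
    the n parallel processors in turn, so the whole computation
    takes n times as many steps as the parallel machine would."""
    for _ in range(timeslices):
        # One pass through this loop == one parallel timeslice.
        states = [step(s) for step, s in zip(step_fns, states)]
    return states
```

For example, two “processors,” one incrementing and one doubling its state, run for three timeslices on this single loop and end up exactly where the parallel machine would have left them.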

Is Block Cheating?

Block’s “algorithm” would clearly pass the Turing Test, but in a really dumb way. It combines a dead simple execution engine with a massive flat table of raw data that the engine indexes into. This seems to violate the spirit of the Turing Test, and the spirit of what we call algorithms. This sense of discomfort with his solution is the point, and it is why this thought experiment is relevant to this book.

Block’s machine is monstrously complex—as complex as any you could propose—and the complexity is in the table. In essence, the table is the algorithm. Whatever your favorite conscious architecture, it should be clear that its outward behavior would be exactly matched by that of Block’s machine. There is some mapping between your machine, with its models-of-self, or its Darwinian memosphere, or whatever, and Block’s machine. Both machines are doing the same thing. The only difference between Block’s table-driven Turing Test beater and any more “intelligent” algorithm is purely one of optimization, implementation, and engineering efficiencies.

The difference between the two algorithms is one of encoding, much like the difference between a program written in assembly language and one written in Python, or the difference between an uncompressed file and one that has been shrunk with a data compression utility. Any “true AI” is nothing above and beyond Block’s Turing Test beater, just more efficient, with a lot of redundancies squeezed out. Just because it is easier for you to understand a machine by seeing its bits flipping at a “higher level”, or as “representing” this or that, does not make it so.

We have a comfortable intuition that the “true AI” is doing something special, but it is doing the exact same thing that Block’s table-driven machine does, and it is doing it in exactly the same way, albeit more optimally from an implementation point of view. But this intuition that the true AI is somehow fundamentally different than the huge table plus tiny execution engine is anthropomorphism on our part. If you think a computer could ever be conscious, you must say why your algorithm is substantially different than Block’s, in a way that does not seem like an arbitrary line drawn to reinforce your intuitions. In the spectrum of algorithms, ranging from a “true AI” to Block’s algorithm, where does the fairy of consciousness wave her magic wand?


Can’t We Just Say That Consciousness Depends on the Higher-Level Organization of the System?

Functionalism, Broadly Construed

Functionalism, roughly, is the idea that consciousness is to be identified not with a particular physical implementation (like squishy gray brains or the particular neurons that the brains are made of), but rather with the functional organization of a system. The human brain, then, is seen by a functionalist as a particular physical implementation of a certain functional layout, but not necessarily the only possible implementation. The same functional organization could, presumably, be implemented by a computer (for example), which would then be conscious. It is not the actual physical substrate that matters to a functionalist, but the abstract schematic, or “block diagram,” that it implements. The doctrine of functionalism may fairly be said to be the underlying assumption of the entire field of cognitive science.

Functionalism seems like a reasonable way to approach the question of consciousness, especially when contrasted with so-called identity theories. Those are theories which say that the conscious mind just is the neurology that implements it, the gray squishy stuff. Identity theories exclude the possibility that non-brain-based things could be minds, like computers or aliens. Functionalism is predicated on the notion of multiple realizability. This is the idea that there might be a variety of different realizations, or implementations, of a particular property, like consciousness. Another way of saying this is that there might be many micro states of affairs that all constitute the same macro state of affairs, and it is this macro state of affairs that defines the thing we are interested in. Put still another way, functionalism says that what makes a system whatever it is, is determined by the high-level organization of the system, and not the implementation details.

Black Boxes

In order to even have a block diagram of a given system, you have to draw blocks. It is tempting to be somewhat cavalier about how those blocks are drawn when reverse engineering an already-existing system, imposing an abstract organization on an incumbent implementation. Functionalism tends to assume that nature drew the lines: that there is an objective line between the system itself and the environment with which it interacts (or the data it processes) and that there is a proper level of granularity to use when characterizing the system. Depending on how fine the granularity you use to characterize a system, and the principles by which you carry out your abstraction of it, its functional characterization changes. It is easy to gloss over the arbitrariness of the way these lines are drawn.

The functionalist examines a system, chooses an appropriate level of granularity, and starts drawing boxes. Once the boxes have been drawn, and their interactions specified, the functionalist does not peek inside them, as long as the boxes themselves operate functionally in the way that they are supposed to. It is central to the idea of functionalism that how the functionality exhibited by the boxes is implemented simply does not matter at all to the functional characterization of the system overall. For this reason, the boxes are sometimes called “black boxes”—they are opaque.

It is worth noting that, as Bertrand Russell pointed out, physicalism itself can be seen as a kind of functionalism. At the lowest level, every single thing that physics talks about (electrons, quarks, etc.) is defined in terms of its behavior with regard to other things in physics. If it swims like an electron and quacks like an electron, it’s an electron. It simply makes no sense in physics to say that something might behave exactly like an electron, but not actually be one. Because physics as a field of inquiry has no place for the idea of qualitative essences, the smallest elements of physics are characterized purely in functional terms, as black boxes in a block diagram. What a photon is, is defined exclusively in terms of what it does, and what it does is (circularly) defined exclusively in terms of the other things in physics (electrons, quarks, etc., various forces, a few constants). Physics is a closed, circularly defined system, whose most basic units are defined functionally. Physics as a science does not care—and in fact cannot care—about the intrinsic nature of matter, whatever it is that actually implements the functional characteristics exhibited by the lowest-level elements.

It could be argued that consciousness is an ad hoc concept, one of those may-be-seen-as kind of things. However I choose to draw my lines, whatever grain I use, however I gerrymander my abstract characterization of a system, if I can manage to characterize it as adhering to a certain functional layout in a way that does not actually contradict its physical implementation, then it is conscious by definition. Consciousness in a given system just is my ability to characterize it in that certain way. To take this approach, however, is to define away the problem of consciousness.

This may well be the crucial point of the debate. I believe that consciousness is not, cannot possibly be, an ad hoc concept in the way it would have to be for functionalism to be true. I am conscious, and no reformulation of the terms in which someone analyzes the system that is me will make me not conscious. That I am conscious is an absolutely true fact of nature. Similarly (assuming that rocks are in fact not conscious), it is an absolute fact of nature that rocks are not conscious, no matter how one may analyze them. Simply deciding that “conscious” is synonymous with “being able to be characterized as having a functional organization that conforms to the following specifications…” does not address why we might regard conscious systems as particularly special or worthy of consideration.

Is the Design Inherent in the Implementation?

A functionalist in good standing believes that, in principle, a mind could be implemented on a properly programmed computer. Put another way, functionalists believe that the human brain is such a computer. But when we speak of the abstract functional organization of a computer system (as computer systems are currently understood), we are applying an arbitrary and explanatorily unnecessary metaphysical gloss to what is really a phonograph needle-like point of execution amid a lot of inert data.

When a computer runs, during each timeslice its CPU (central processing unit) is executing an individual machine code instruction. No matter what algorithm it is executing, no matter what data structures it has in its memory, at any given instant the computer is executing one very simple instruction, simpler even than a single line from a program in a high-level language like C or Python. In assembly language, the closest human-friendly relative of machine code, the instructions look like this: LDA, STA, JMP, etc. These crude mnemonics represent individual basic instructions that the circuitry of the CPU executes. The complexity of a given such instruction varies depending on the particular CPU architecture, but they tend to be pretty simple operations, like: move a number, or a very small number of numbers, from one place to another inside the computer. Or, switch the locus of control (i.e. the next instruction to execute) from one memory location to a different one (jump to a new spot in the algorithm).
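To make this ant’s-eye view concrete, here is a toy fetch-execute loop in Python (the tiny instruction set is invented for illustration; real instruction sets are larger, but no less myopic):

```python
def run(program, memory):
    """A toy CPU. At every tick it sees exactly one crude
    instruction and (at most) one memory address -- nothing else.
    Invented instruction set:
      ('LDA', addr)  load memory[addr] into the accumulator
      ('STA', addr)  store the accumulator into memory[addr]
      ('JMP', addr)  jump: make addr the next instruction
      ('HLT',)       halt
    """
    acc, pc = 0, 0
    while True:
        op = program[pc]
        if op[0] == 'LDA':
            acc = memory[op[1]]
        elif op[0] == 'STA':
            memory[op[1]] = acc
        elif op[0] == 'JMP':
            pc = op[1]
            continue
        elif op[0] == 'HLT':
            return memory
        pc += 1
```

Whatever grand algorithm the program embodies, this loop only ever touches the current instruction and the current memory location.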

Of the algorithm and data structures, no matter how fantastically complex or sublimely well constructed, the computer “knows” nothing, from the time it begins executing the program to the end. As far as the execution engine itself is concerned, everything but the current machine instruction and the current memory location or register being accessed might as well not exist—they may be considered to be external to the system at that instant.

But could we not say that the execution engine, the CPU, is not the system we are concerned about, but the larger system taken as a whole? Couldn’t we draw a big circle around the whole computer, CPU, memory, algorithm, data structures and all? We could, I suppose, choose to look at a computer that way. Or we could choose to look at it my way, as a relatively simple, mindless execution engine amid a sea of dead data, like an ant crawling over a huge gravel driveway. If I understand the functioning of the ant perfectly, and I have memorized the gravel or have easy access to the gravel, then I have 100% predictive power over the ant-and-driveway system. Any hard-nosed reductive materialist would have to concede that my understanding of that system, then, is complete and perfect. I am free to reject any “higher-level” interpretation of the system as an arbitrary metaphysical overlay on my complete and perfect understanding, even if it is compatible with my physical understanding. It is therefore highly suspect when broad laws and definitions about facts of nature are constructed that depend solely on such high-level descriptions and metaphysical overlays.

The higher-level view of a system cannot give you anything real that was not already there at the low level. The system exists at the low level. The high-level view of a system is just a way of thinking about it, and possibly a very useful way of thinking about it for certain purposes, but the system will do whatever it is that the system does, whether you think about it that way or not. The high-level view of the system is, strictly speaking, explanatorily useless (although it may well be much, much easier for us, given our limited capacities, to talk about the system in high-level terms rather than in terms of its trillions of constituent atoms, for example).

Imagine that you are presented with a computer that appears to be intelligent—a true artificial intelligence (AI). Let us also say that, like Superman, you can use X-ray vision to see right into this computer and track every last diode as it runs. You see each machine language operation as it gets loaded into the CPU, you see the contents of every register and every memory location, you understand how the machine acts upon executing each instruction, and you are smart enough to keep track of all of this in your mind. You can walk the machine through its inputs in your mind, based solely on this transistor-level pile of knowledge of its interacting parts, and thus derive its output given any input, no matter how long the computation.

You do not, however, know the high-level design of the software itself. After quite some time, watching the machine operate, you could possibly reverse-engineer the architecture of the software. It is the block diagram of the software architecture that you would thereby derive that a functionalist would say determines the consciousness of the computer, but it is something you created, a story about the endless series of machine code operations you told yourself in order to organize those operations in your mind. This story may be “correct” in the sense that it is perfectly compatible with the actual physical system, and it may in fact be the same block diagram that the computer’s designers had in their minds when they built it.

This only means, however, that the designers got you to draw a picture in your mind that matched the one in theirs. If I have a picture in my mind, and I create an artifact (for example, if I write a letter), and upon examining the artifact, you draw the same (or a similar) picture in your mind, we usually say that I have communicated with you using the artifact (i.e. the letter) as a medium. So if the designers of the AI had a particular block diagram in their minds when they built the AI, and upon exhaustive examination of the AI you eventually derived the same block diagram, then all that has happened is that the machine’s designers have successfully (if inefficiently) communicated with you over the medium of the physical system they created.

The main point is that before you reverse-engineered the high-level design of the system, you already had what we must concede is a complete and perfect understanding of the system, in that you understood in complete detail all of its micro-functionings, and you could predict, given the current state of the system, its future state at any time. In short, there was nothing actually there in terms of the system’s objective, measurable behavior that you did not know about the system. But you just saw a huge collection of parts interacting according to their causal relations. There was no block diagram.

[Figure: classic Rube Goldberg mechanism cartoon]

A computer is a Rube Goldberg device, a complicated system of physical causes and effects. Parrot eats cracker, then as cup spills seeds into pail, lever swings, igniting cigarette lighter, which burns string holding back sickle, etc. In a Rube Goldberg device, where is the information? Is the cup of seeds a symbol, or is the sickle? Where is the “internal representation” or “model of self” upon which the machine operates? These are things we, as conscious observers (or designers) project into the machine: we design it with intuitions about information, symbols, and internal representation in our minds, and we build it in such a way as to emulate these things functionally.

The computer itself never “gets” the internal model, the information, the symbols. It is confined to an unimaginably limited ant’s-eye view of what it is doing (LDA, STA, etc.). It never sees the big picture, the little picture, or anything we would regard as a picture at all. By making the system more complex, we just put more links in the chain, make a larger Rube Goldberg machine. Any time we humans say that the computer understands anything at a higher level than the most micro of all possible levels, we are speaking metaphorically, anthropomorphizing the computer.

A Hypothesis about Hypotheticals: Do Counterfactuals Count?

The functional block diagram itself does not, properly speaking, exist at any particular moment in a system to which it is attributed. Another way of putting this is to point out that the functional block diagram description of any system (or subsystem) is determined by an ethereal cloud of hypotheticals. You cannot talk about any system’s abstract functional organization without talking about what the system’s components are poised to do, about their dispositions, tendencies, abilities, or proclivities in certain hypothetical situations, about their purported latent potentials. What makes a given block in a functionalist’s block diagram the block that it is, is not anything unique that it does at any single given moment with the inputs provided to it at that moment, but what it might do, over a range of inputs. The blocks must be defined and characterized in terms of hypotheticals.

It is all well and good to say, for example, that the Peripheral Awareness Manager takes input from the Central Executive and scans it according to certain matching criteria, and if appropriate, triggers an interrupt condition back to the Central Executive, but what does this mean? Isn’t it basically saying that if the Peripheral Awareness Manager gets input X1, then it will trigger an interrupt, but if it gets input X2, then it won’t? These are hypothetical situations. What makes the Peripheral Awareness Manager the Peripheral Awareness Manager is the fact that over time it will behave the way it should in all such hypothetical situations, not the way it actually behaves at any one particular moment.
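The point can be seen in code. Here is the Peripheral Awareness Manager rendered as what it functionally is, a table of “if…then” clauses (the module name is the hypothetical one from the text; the states and input values are my own placeholders):

```python
# A functionalist black box, exhaustively specified by its
# hypotheticals: (state, input) -> (output, next_state).
PERIPHERAL_AWARENESS_MANAGER = {
    ('idle',     'X1'): ('interrupt', 'alerting'),
    ('idle',     'X2'): (None,        'idle'),
    ('alerting', 'X1'): ('interrupt', 'alerting'),
    ('alerting', 'X2'): (None,        'idle'),
}

def step(module, state, inp):
    """At any one moment the module consults exactly one entry
    of the table; every other entry sits there unused."""
    return module[(state, inp)]
```

What makes this module the Peripheral Awareness Manager is the whole table, yet at any given instant only a single row is ever exercised.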

Any integration of the components in a system that is characterized functionally is imaginary and speculative. If component A pulls this string, and it tugs on component B, then B will react by doing something else… Is there any way of talking about the relation between A and B that does not use the word “if”?

What If We Prune the Untaken Paths?

Imagine that we have a real AI running in front of us. It is implemented on a computer, running an algorithm, operating on data, either in memory or from some kind of input/output devices. This AI is supposedly conscious because it is a faithful implementation of a certain functional layout, specified in terms of black boxes, interacting in certain ways. By the hypothesis of functionalism, it does not matter that the whole thing might really be just a single CPU running a program. That is, it may not physically be separate literal hardware boxes, but rather logically distinct programmatic modules, or subprocesses running concurrently on a single CPU. Let’s let this AI run for five minutes, during which it interacts with its environment in some way (maybe it eats an ice cream cone). It claims to like the ice cream, but maybe the walnuts were a little soggy.

Now let us reset the AI and, having perfectly recorded the signals in its nervous system from before, rerun the scenario, effectively feeding it canned data about the whole “eating ice cream” experience. It, being an algorithm, reacts in exactly the same way it did before (the walnuts are still soggy). We could do this 100 times, and on each run the AI would be just as conscious as it was the first time.

But now, as engineers, we watch the low-level workings of the AI as it goes through this scenario, and we trace the execution paths of the various black boxes, subroutines, and software libraries. We notice that, during the entire five minutes, some libraries were never invoked at all, so we remove them. Same with certain utility subroutines. For other functional parts of the system, we see that they were originally designed to do a whole lot of stuff given a broad range of possible inputs, with different potential behaviors based on those inputs depending on a similarly broad range of possible internal states. But for the five-minute ice cream test, they are only ever in a few states, and/or are only called upon to do a few things, and given only a few of their possible input values. In these cases we carefully remove the capability of even handling those inputs that are never presented, or the internal state transitions that are never performed. We sever the connections between modules that never talk to each other in our five-minute test.

We may even be clever enough to intervene in the workings of our system during the test itself, erasing parts of the whole algorithm once they have been executed for the last time. We might also disconnect whole chunks of memory during the test, reconnecting them at just the moment the system needs them.

So now we have stripped down our original AI so that it would quickly fail any other “test” of its intelligence or consciousness. It is hardwired to handle only this ice cream situation. We have effectively lobotomized it, dumbing it down to the point where it could only function for this particular data set. We have removed the generality of our AI, cutting out so much of its capability, rendering it so special-purpose, that no one, upon examining it, could infer the functional organization of the original full-blown AI in all its glorious complexity. It can no longer pretend to be a faithful implementation of the functionalist’s AI, defined in terms of the black boxes that broadly do what they are supposed to. Our original black boxes just aren’t there anymore. Not only is the data canned, but the system that operates on that data is also canned. At this point, we are on a slippery slope toward something like Ned Block’s table-driven Turing Test beater.
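The pruning procedure amounts to specializing a program against one recorded trace. A toy sketch (both “modules” are invented; the point is that on the canned input they are step-for-step identical):

```python
def full_module(inp):
    """The general-purpose black box, designed for a broad range
    of inputs and behaviors."""
    if inp == 'ice_cream':
        return 'walnuts are soggy'
    elif inp == 'chess_move':
        return 'Nf3'
    else:
        return 'unrecognized input'

def pruned_module(inp):
    """The same module with every path the recorded five-minute
    run never took cut away. On the canned trace it behaves
    identically; on anything else it simply has no answer."""
    assert inp == 'ice_cream'
    return 'walnuts are soggy'
```

For the ice cream run, the two produce the same outputs via the same steps; only the unexercised hypotheticals differ.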

But if we’ve done it right, the new, dumb system is doing exactly the same thing the original AI did, and it is doing it in exactly the same way. That is, not only is our dumb system behaving the same way to all outward appearances (walnuts still too soggy!), but all the internal flows of information and control are functioning as before, and even at the lowest levels, each individual machine instruction executed by the CPU is exactly the same, at every tick of the system clock, for the full five minutes.

We have two systems, then, side by side: our original robust AI, implementing the full high-level schematic with all the black boxes, and next to it, the dumb system that can only do one thing. But in this one scenario, both perform in exactly the same way, at the micro level as well as the macro. All the causal interactions among the relevant parts are behaving identically. The defining aspect of the black-box schematic is constituted by a whole cloud of potential behaviors, potential inputs, and potential internal state transitions, most of which we clipped off for our special-purpose system. What is it about unrealized potentials that imparts the causal or constitutive power they must have for functionalism to be plausible?

The defining characteristics of the functionalist’s black boxes disappear without a lot of behavioral dispositions over a range of possible input values, smeared out over time. But there is nothing in the system itself that knows about these hypotheticals, calculates them ahead of time, or stands back and sees the complexity of the potential state transitions or input/output pairings. At any given instant the system is in a particular state X, and if it gets input Y it does whatever it must do when it gets input Y in state X. But it cannot “know” about all the other states it could have been in when it got input Y, nor can it “know” about all the other inputs it could have gotten in state X, any more than it could know that if it were rewritten, it would be a chess program instead of an AI.

We, as designers of the system, can envision the combinatorially explosive range of inputs the system would have to deal with, the spreading tree of possibilities. But the world of algorithms is a deterministic one, and there are no potentials, no possibilities. There is only what actually happens, and what does not happen doesn’t exist and has no effect on the system. We anthropomorphize, and project our sense of decision-making, or will, onto our machines. In real life, there are no potential paths or states available to the machine. None that matter, anyway.

The black boxes that are definitional of a system to a functionalist are integrated, but only through individual specific interactions at particular moments. Whatever integration a system exhibits is purely functional, spread out over time, and takes the form of a whole bunch of “if…then” clauses. “If I am in state X and I get input Y, then do Z and transition to state F; else if…” I’m not saying that this type of “integration” is imaginary, just that it does not quite do justice to our intuitions about what “integrated” means. If you ask a module a particular question when it is in a particular state, it will give you the correct answer according to its functional specification. You can do a lot of complex work with such a scheme, but adherence to a whole mess of “if…then” clauses never amounts to anything beyond adherence to any one of them at any moment.
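The "if…then" scheme can be written down directly. Below is a minimal sketch in Python (the states and rules are invented for illustration, not drawn from any real system): at any instant the machine consults exactly one entry of its transition table, and nothing in the system surveys the table as a whole.

```python
# A minimal finite-state machine. Its entire "integration" is a table of
# (state, input) -> (output, next_state) rules; at any given instant
# exactly one rule fires, and the unused rules play no role in that step.

TRANSITIONS = {
    ("X", "Y"): ("Z", "F"),  # in state X, on input Y: do Z, move to state F
    ("F", "Y"): ("W", "X"),  # further rules, invented for illustration
    ("X", "Q"): ("V", "X"),
}

def step(state, inp):
    # consult only the one entry for the current (state, input) pair
    return TRANSITIONS[(state, inp)]

out, state = step("X", "Y")  # a single lookup; the rest of the table is inert
```

Everything the functionalist calls integration lives in the table as a whole, but each actual moment of execution touches only one line of it.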

If a highly “integrated” system is running, and some of its submodules are not being accessed in a given moment, the system as a whole, its level of “integration,” and our opinion about the system’s consciousness could not legitimately change if those submodules were missing entirely or disabled. We ought to be very careful about attributing explanatory power to something based on what it is poised to do according to our analysis. Poisedness is just a way of sneaking teleology in the back door, of imbuing a physical system with a ghostly, latent purpose. Poisedness is in the eye of the beholder. A dispositional state is an empty abstraction. A rock perched high up on a hill has a dispositional state: if nudged a certain way, it will roll down. A block of stone has a dispositional state: if chipped a certain way with a chisel, it will become Michelangelo’s David. That, as the saying goes, plus fifty cents, will buy you a cup of coffee.

We have an intuition of holism. Any attempt to articulate that in terms of causal integration, smeared out over time, defined in terms of unrealized hypotheticals, fails. At any given instant, like the CPU, the system is just doing one tiny, stupid crumb of what we, as intelligent observers, see it might do when thought of as one continuous process over time. To say that a system is conscious or not because of an airy-fairy cloud of unrealized hypothetical potentials sounds pretty spooky to me. In contrast, I am conscious right now, and my immediate and certain phenomenal experience is not contingent on any hypothetical speculations. My consciousness is not hypothetical—it is immediate. The term “if” does not figure into my evaluation of whether I am conscious or not.

Integrated Information Theory

IIT has gotten a lot of buzz recently. Proponents of IIT insist that it is not a functionalist theory, but I see it as the paradigmatic example of one. IIT claims to be able to quantify the degree of integration of a system in a variable called phi (Φ). IIT emphasizes reentrancy and feedback loops. All of this integration and reentrancy is functionally defined, however. The integration in integrated information theory is causal integration, smeared out over time, and attributes causal or constitutive properties to unrealized potential events and states.

Besides assuming that there is something special or magic about feedback as opposed to feed-forward signals in themselves, IIT relies upon potential actions and connections, by blunt assertion: if a module is missing or disabled, the phi of the overall system is decreased, but if the module is merely not doing anything at the moment, it still contributes to phi in some ghostly, unspecified way.

Worse, IIT bluntly asserts an identity between full-blown qualitative consciousness and phi (i.e. causal integration). It is a brute identity theory, albeit a functionalist one. IIT is the worst of both worlds. It fails to explain consciousness in a convincing way while cleaving to a materialistic worldview, but also takes consciousness seriously in the way the materialists say we shouldn’t. It’s like panpsychism, but less plausible.

Life Is Real. Isn’t It Defined “Merely” Functionally?

Couldn’t this argument be used to declare the concept of life out of bounds as well? After all, life is a quality that is characterized exclusively by an elaborate functional description, one that involves reproduction, incorporating external stuff into oneself, localized thwarting of entropy, etc. Life is not characterized by any particular physical implementation: if we were visited by aliens tomorrow who were silicon-based instead of carbon-based, we would nevertheless not hesitate to call them alive (assuming they were capable of functions analogous to reproduction, metabolism, consumption, etc.).

But according to the above argument, I am alive right now, even though our definitions of what it means to be alive all involve functional descriptions of the processes that sustain life, and these functional descriptions, in turn, are built on an ethereal cloud of hypotheticals. There is nothing in a living system that knows about these hypotheticals, or calculates them, so how can we say that, right here and now, one system is alive and another dead, when they are both doing the same thing right here and now, but one conforms to the functional definition of a living thing, and one does not? Therefore, there must be some magical quality of life that cannot be captured by any functional description. Yet we know this is not true of life, so why should we think it is true of consciousness?

Like so many other arguments, it comes down to intuitions about the kind of thing consciousness is. Life is, at heart, an ad hoc concept. The distinction between living and non-living things, while extremely important to us, and seemingly unambiguous, is not really a natural distinction. The universe doesn’t know life from non-life. As far as the universe is concerned, it’s all just atoms and molecules doing what they do.

People observe regularities and make distinctions based on what is important to them at the levels at which they commonly operate. We see a lot of things happening around us, and take a purple crayon and draw a line around a certain set of systems we observe and say, “Within this circle is life. Outside of it is non-life.” Life just is conformance to a class of functional descriptions. It is a quick way of saying, “yeah, all the systems that seem more or less to conform to this functional description.” It is a rough and ready concept, not an absolute one. Nature has not seen fit to present us with many ambiguous borderline cases, but one can, with a little imagination, come up with conceivable ones. It is useful for us to classify the things in the world into groups along these lines, so we invent this abstraction, “life,” whose definition gets more elaborate and more explicitly functional as the centuries progress. We observe behaviors over time, and make distinctions based on our observations and expectations of this behavior. So life, while perfectly real as far as our need to classify things is concerned, has no absolute reality in nature, the way mass and charge do.

This is not to denigrate the concept of life or to say that the concept is meaningless, or that any life science is on inherently shaky foundations. The study of life and living systems, besides being fascinating, is a perfectly fine, upstanding hard science, with precise ways of dealing with its subject. I am just saying that “life” is a convenient abstraction that we create, based on distinctions that, while obvious to any five-year-old, are not built into the fabric of the universe. Crucially, as we examine life in our world, every single thing we have ever observed about life is comfortably accommodated by this functional understanding of the concept, even if, strictly speaking, it is a little ad hoc.

To be a functionalist is to believe that consciousness is also such a concept, that it is just a handy distinction with no absolute basis in reality. I maintain, however, that our experience of consciousness (which is to say, simply our experience) has an immediacy that belies that. We did not create the notion of consciousness to broadly categorize certain systems as being distinct from other systems based on observed functional behavior over time. Consciousness just is, right now.

What If We Gerrymander the Low-Level Components?

What’s more, we can squeeze all kinds of functional descriptions out of different physical systems. Gregg Rosenberg has pointed out that the worldwide system of ocean currents, viewed at the molecular level, is hugely complex, considerably more so than Einstein’s brain viewed at the neuronal level. I do not think I am going out on a limb by saying that the worldwide system of ocean currents is not conscious.

What if, however, we analyzed the world’s oceans in such a way that we broke them down into one-inch cubes, and considered each such cube a logic component, perhaps a logic gate? Each such cube (except those at the very bottom or surface of the ocean) abuts six neighbors face-to-face, and touches 20 others tangentially at the corners and edges. Now choose some physical aspect of each of these cubes of water that is likely to influence neighboring cubes, say micro-changes in temperature, or direction of water flow, or the rate of change of either of them, and let this metric be considered the “signal” (0 or 1, or whatever the logic component deals with).

Now suppose that for three and a half seconds in 1953, just by chance, all the ocean’s currents analyzed in just this way actually implemented exactly the functional organization that a functionalist would say is the defining characteristic of a mind. Were the oceans conscious for those three and a half seconds? What if we had used cubic centimeters instead of cubic inches? Or instead of temperature, or direction of water flow, we used some other metric as the signal, like average magnetic polarity throughout each of the cubes? If we change the units in which we are interested in these ways, our analysis of the logical machine thereby implemented changes, as does the block diagram. Would the oceans not have been conscious because of these sorts of changes of perspective on our part?

What if we gerrymander our logic components, so that instead of fixed cubes, each logic component is implemented by whatever constantly changing shape of seawater is necessary to shoehorn the oceans into our functional description so that we can say that the oceans are right now implementing our conscious functional machine? This is a bit outrageous, as we are clearly having our chunking of logic components do all the heavy lifting. Nevertheless, as long as it is conceivable that we could do this, even though it would be very difficult to actually specify the constantly changing logic components, we have to concede that the oceans are conscious right now. Is it not clear that there is an uncomfortable arbitrariness here, that a functionalist could look at any given system in certain terms and declare it to be conscious, but look at it in some other terms and declare it not conscious?
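The gerrymandering worry can be made concrete with a toy sketch (all the state names below are invented). Given any sequence of distinct physical states, recorded after the fact, we can always construct an interpretation under which that sequence “implements” any run of any machine we choose. The mapping, not the physics, does all the work.

```python
# Toy version of the gerrymandering move: an after-the-fact mapping makes
# an arbitrary sequence of distinct physical states "implement" a chosen
# run of a chosen machine. All names here are invented for illustration.

physical_trace = ["ocean-state-0", "ocean-state-1", "ocean-state-2"]
desired_run = ["A", "B", "C"]  # a run of our supposedly conscious machine

# The interpretation is free to be as gerrymandered as we like:
interpretation = dict(zip(physical_trace, desired_run))

implemented_run = [interpretation[s] for s in physical_trace]
# Under this interpretation, the oceans just "implemented" the machine.
```

Nothing constrains the interpretation except our desire that it come out right, which is precisely the problem.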

Our deciding that a system is conscious should not depend on our method of analysis in this way. I just am conscious, period. My consciousness is not a product of some purported functional layout of my brain, when looked at in certain terms, at some level of granularity. It does not cease to be because my brain is looked at in some other terms at some other level of granularity. That I am conscious right now is not open to debate; it is not subject to anyone’s perspective when analyzing the physical makeup of my brain. Consciousness really does exist in the Hard Problem sense, in all its spooky, mysterious, ineffable glory. But it does not exist by virtue of a purported high-level functional organization of the conscious system. The high-level functional organization of a system simply does not have the magical power to cause something like consciousness to spring into existence, beyond any power already there in the low-level picture of the same system. As soon as we start talking about things that are “realized” or “implemented” by something else, we have entered the realm of the may-be-seen-as, and we have left the realm of the just-is, which is the realm to which consciousness belongs.


Reductionism and Emergence: What Kinds of Things Are There, Really?

We think that grass is green, that stones are hard, and that snow is cold. But physics assures us that the greenness of grass, the hardness of stones, and the coldness of snow, are not the greenness, hardness, and coldness that we know in our own experience, but something very different. The observer, when he seems to himself to be observing a stone, is really, if physics is to be believed, observing the effects of the stone upon himself.
—Bertrand Russell

If you accept the hardness of the Hard Problem, you believe that it would be impossible, even in principle, to build and program a computer to be fully, qualitatively conscious the way we are. As we have seen, this reasoning extends to our own brains. Whether our substrate is zeros and ones in the form of electrical impulses in silicon, or wet gray brain matter, you just can’t get there from here, as the old joke goes. If we take the Hard Problem seriously, we have to bite the bullet of claiming that we are missing something big and fundamental in our conception of how the world is put together. No mere implementation detail will solve the problem of qualia. No one in a lab coat will discover some hitherto unknown neurotransmitter that will explain consciousness, nor will any systemic analysis of the brain reveal some higher-level organizational schematic that will show how the redness of red came to be.

No, to crack this nut, we have to go deep, and do a little philosophizing about the nature of our universe and the ways we have of thinking and talking about it. Science is great, physics is wonderful, but they specify particular results in particular situations, and there are more and less conservative ways of interpreting the claims of science. Moreover, there are methods of thought in science that can become, over time, philosophical commitments in their own right. Chief among them is the doctrine of reductionism.


Using an ingenious thought experiment, Galileo concluded that large objects must fall at the same rate that small ones do. First he imagined two rocks, roughly the same size, dropped from some height, falling at whatever rate rocks of their size fall. Then he imagined that the experiment was repeated, this time with the rocks tied together with a piece of string. Are we really to imagine, he wondered, that nature would regard the two rocks tied together as one large object, and make it/them fall at a different rate just because they were now connected with a string? He reasoned that nature would not. When does nature treat things as actual, individual things, and when does nature treat them as heaps, or aggregates of other things, like Galileo’s rocks tied together? And perhaps more importantly, in what sorts of situations would the answer make any difference?

For an honest hard-nosed reductionist, the universe is really a sea of quantum soup. There are no true inherent things, just one continuous mesh of cause and effect. Minds, and only minds, draw boxes and lines upon reality based on perceived regularities, chunking reality into mid-level murmurations, like “rocks” and “cars.” This chunking is an abstraction we impose, and is not there in the quarks, electrons, and photons. We could, in principle, see a certain number of molecules as a “rock,” or we could just see it as a bunch of molecules with no loss of accuracy or predictive power. Everything worth knowing about the rock is straightforwardly derived from the properties and interactions of the bits that make it up. It is something of a joke among philosophers that they sometimes argue over whether something is a table or just a bunch of molecules arranged in a tablewise manner. It’s not that tables and chairs don’t exist, just that the universe does not respect these “high-level” entities or any properties of them, as such, as it decides what to do moment to moment. All the universe needs to function properly is the very lowest-level entities and laws, and everything else pretty much takes care of itself.

“Reductionism” is a loaded term, and one that tends to get thrown around pejoratively. Daniel Dennett has said that at this point, “reductionist” means nothing more than “I don’t like that idea.” When I use the term, I will attempt not to make a straw man of it. Reductionism, very roughly, is the divide-and-conquer approach to understanding reality. It is the position that anything just is the sum of its parts. Sometimes philosophers like to say a thing is grounded in its parts, or supervenes on its parts. Once you have nailed down the behavior of the pieces, there are no more degrees of freedom left to the wholes that are made of them.

Reductionism combined with deterministic physicalism results in the claim that if you knew the exact initial conditions of the universe, and knew the true laws of physics, you could, in principle, predict everything that would ever happen during the lifetime of the universe, including the fall of the Roman empire and the Gettysburg Address. There are no big, large-scale things that cannot be understood (in principle!) in terms of their simpler, small-scale underlying constituents and their mechanisms.

Now, sometimes reductionism means methodological reductionism, which is simply the practice of analyzing things in terms of their components. Methodological reductionism, as an approach to scientific inquiry, has been spectacularly successful over many centuries. When I speak of reductionism, however, I mean it in a stronger, ontological sense. I mean the presuppositions that:

  1. everything in the universe is made of simple building blocks
  2. anything we choose to study may, in principle if not in practice, be defined and described completely in terms of the simpler building blocks of which it is made
  3. there is a finite (and small at that) number of types of these basic building blocks
  4. each instance of a particular building block is interchangeable with any other instance of that same building block (one electron is absolutely identical to another electron)
  5. these building blocks are entirely characterized by their functional dispositions (i.e. they have no qualitative essence, just behavior, such as that described by the lowest-level equations of physics)

There are a great many isms in philosophy of mind, many of them downright deceptive, in that their literal meaning does not suggest a doctrine held by most people to whom the label is applied (I’m looking at you, “dualism”). So in theory, whether you are a physicalist, a dualist, a monist, a dual aspect theorist, a qualophile, an eliminativist, an illusionist, a mod, or a rocker, I think this question cleaves the community nicely: do you believe that everything in the universe can be exhaustively characterized in terms of a small number of types of tiny things, all interacting via causal dynamics, which are described by a small number of mathematical laws? You can answer “no” and still make a case that you are a monist and, in fact, a reductive physicalist, but only by squeaking in on a technicality. Most good reductive physicalists, as the term is generally understood, answer with an emphatic “yes.”

My point is that this philosophical reductionism does not necessarily commit one to a particular scientific view. You can hold on to reductionism and admit that we still don’t have all the physical laws nailed down yet (strings? Unifying general relativity and quantum mechanics?). If we suddenly discovered that Harry Potter magic is real, we could still be good reductionists: How does it work? Take it apart, see what particles, fields, and/or forces make it up, and derive a small number of mathematical rules that describe their behavior, and voilà!

So the difference is not which final theory you settle on, or exactly what primitives you admit into your lowest level, as long as they are few in number, are well behaved, and don’t have any “essences” lurking beneath that behavior. Indeed, it is really more of a spectrum of views than a sharp division. How many primitives can there be, how big can they get, and how unlawlike and complex can their behavior be before you just aren’t a reductionist anymore?

Many prevailing theories of mind incorporate some form of strong ontological reductionism, even ones that make a point of claiming to reject strict reductionism. I think, however, we have reason to doubt that reductionism in this sense gives us a true or complete picture of the world. The problem is that it works too well. If everything can be explained or characterized in terms of the lowest-level building blocks, there is no reason to consider higher-level things as having any objective existence at all, or at least, any explanatorily useful existence. As the saying goes, once the reductionist has broken down the universe, he has trouble building it back up again.

How can we have things in a reductionist universe? By things, I mean just what it sounds like: cars, dogs, planets, paper clips. Is a pile of sand a thing, or is it a lot of little things? Does a car count as a thing? It depends on how you look at it, and why you want to know. What things can there be whose existence (as individual things) is not just a matter of perspective in this way? And do we have any reason to believe that there are any higher-level things in the world that just are the high-level things they are, whether you look at them in the right way or not?

If we are reductive materialists, then, speaking absolutely objectively, there is either only one (extremely high-level) thing in the entire universe (the universe itself), or there are as many (extremely low-level) things as there are subatomic particles. There is no absolute reality to any intermediate-level things as such. Their existence and all their properties are may-be-seen-as, which is to say, derivative.

It does not buy you anything (in terms of imparting thinghood) to declare certain systems as unitary wholes on the basis that they are isolated from their surroundings, because everything interacts with everything else all the time. This is not New Age mysticism, but simple fact. The force of gravitation between any two objects is proportional to the product of their masses and inversely proportional to the square of the distance between them. This number is never zero for any two objects, no matter how small the masses or how great the distances involved.

I once read somewhere that the gravitational effect of an electron on the trajectory of a molecule of gas a universe away is such that, after being amplified by about 50 collisions with other gas molecules, this tiny gravitational nudge is enough to cause the gas molecule’s position to be off by the width of an entire molecule. This, in turn, determines whether or not the molecule collides with the next molecule at all or misses it entirely, a difference which quickly changes the dynamics of the entire volume of gas. Whether the correct number of collisions before this happens is really 50 or 50 million, there is some finite number for which this must be true. All particles in the universe interact causally with all others all the time (the contents of black holes possibly excepted).
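Whatever the true figure, the arithmetic behind this sort of claim is easy to check. The toy calculation below uses invented numbers, not physical measurements; the point is only that when each collision multiplies a tiny positional error by a fixed factor, the number of collisions needed to cross any threshold grows only logarithmically with how tiny the initial nudge is.

```python
import math

# Toy chaotic-amplification arithmetic; all numbers are illustrative.
initial_error = 1e-35       # meters: an absurdly tiny initial nudge
growth_per_collision = 10   # assumed error amplification per collision
molecule_width = 1e-10      # meters: roughly the width of a small molecule

# collisions (to the nearest whole number) before the accumulated error
# exceeds one molecule width
collisions = round(
    math.log(molecule_width / initial_error, growth_per_collision)
)
print(collisions)  # 25, under these assumed numbers
```

Even starting from an error twenty-five orders of magnitude smaller than a molecule, a couple dozen collisions suffice under these assumptions.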

But still, one might argue, there are some things which act more or less together as one, and are separable from their environment. Consider a toy truck. It seems thing-like if anything does. But just as the computer program does not “know” anything but the current machine code instruction, the truck is just made of atoms, each of which does not know or care anything about “truck” as opposed to all the other atoms that are “non-truck.” Each atom only “knows” about the forces that act upon it, and each reacts accordingly. Each atom would still behave the way it does under the influence of any equivalent immediate environment (local to just that atom, that is) whether that environment was the result of that atom’s participation in what we might be inclined to call a “truck,” or some other, completely different system, as long as it presented the exact same interface to the atom. The atom does not act the way it does because of some high-level organization of the system of which it is a part. A complete knowledge of the forces acting immediately upon each atom in the truck is all that is necessary to have complete and perfect knowledge of the patch of reality that we call “the truck.” It gives us complete predictive power over all the atoms involved, at any level of detail you like. In a completely objective reductionist universe, there is nothing to know about the truck above and beyond all these atoms.

Once we have a complete causal picture of a bunch of atoms, we are certainly free to posit mid-level things as a cognitive convenience, but they’d better not, as such, have any causal powers. Otherwise, we wind up with what the philosophers call an overdetermined world, and William of Occam warned us about those. We use his razor to cut out any multiplied explanations. One is enough, thank you very much.

Downward Causation

Sometimes people speak of “downward causation,” which is not merely causation in the physical direction of down, like rain or snow, but causation from the “high levels” to the “low levels.” That is, in a system made of interacting parts, the system as a whole has causal effects on its own parts (in a top-down fashion) beyond what we can account for in a purely bottom-up analysis of the parts themselves and their individual causal powers. In real life, this makes no sense.

We, as human engineers, may model a device in our minds, then design a system and implement that design in our workshop using a bunch of parts. While it is easy to think of the parts as actively participating in the “whole system,” and doing what they do because of their participation in the “whole system,” the parts are still blind, stupid, and amnesiac. They do what they do under the same influences as they would if they weren’t part of a system. The individual parts do not know or care anything about any larger system. The forces impinging upon a given part could be the cumulative effect of some complex system, or they could be purely local; the part does what it does as a result of those forces either way. Almost no one, when push comes to shove, actually makes a contrary claim. In a universe of big things made of little things, the little things call the shots. Physics and stamp collecting, as Rutherford put it.

I should emphasize that the ability, for example, to reduce chemistry to physics is an in principle reduction only. No discoveries in the field of physics will ever render chemistry (or biology, or sociology, etc.) obsolete as fields of legitimate inquiry. Even in a universe in which reductionism is absolutely true, the physical world is hugely complex, and its complexities explode out of control very quickly in a chaotic fashion without any hope of being modeled at the low levels by beings with our limitations. It will always be astronomically easier to deal in terms of higher-level chunks of reality than in subatomic terms for almost all purposes (meteorologists, your jobs are safe for the foreseeable future). Nevertheless, in principle, if you could model reality at the low level in a reductionist’s universe, that would be all you would need to derive any measurable fact about that universe. Any higher-level chunking of reality is a cognitive convenience. Put differently, the universe has no need for any “high-level” things or concepts as it clanks along one moment to the next. All the causal heavy lifting is done at the lowest level.


Talk of downward causation is closely related, if not identical, to teleology. Aristotle wrote about causation, and he divided it into four categories, of which the only ones anyone remembers are efficient causation and final causation. Efficient causation is the kind we deal with when we speak of billiard balls colliding. A causes B because A came first, and straightforwardly exerted a causal influence (pushing from behind, as it were) and brought about B. Final causation, in contrast, has to do with goals and purposes. Telos is the Greek word for such future states of affairs and the effect they have, drawing things forward, pulling from ahead. The telos of an acorn is to become an oak tree.

There is a subtlety here, however. Aristotle was talking about causation as it manifested itself in events, spaced out in time: A causes B. Here, though, we are talking about things as they exist in a snapshot: more constitutive causation than sequential causation. The point, however, is the same. The steady march of scientific progress for centuries has been characterized as the banishment of teleology from serious discourse. Anyone who invokes final causes is speaking poetically or magically (the giraffe has a long neck so it can reach the leaves). A ton of molecules, some of them DNA, banging around for eons, subjected to constant Darwinian winnowing, have the effect of seeming like teleology, that’s all. Survivable, adaptable systems survive and adapt, and the ones that don’t, don’t. For the most part, we know we are exercising a bit of literary license when we speak as if the acorn wants to become an oak tree. By the same token, we should feel funny if we say that a particle behaves differently because it is part of a larger system. I’m not saying that we should never speak in teleological terms. As a panpsychist, I am comfortable getting a little freaky, but we should know that we are saying something freaky when we speak of these kinds of powers.


It is sometimes said that higher-level properties and thus higher-level things emerge from the lower levels in a way that is not determined or even suggested by the lower levels. The flock emerges from the motions of the individual birds, and liquidity emerges from the actions of trillions of H2O molecules. The claim that there is genuine emergence in the world is often contrasted with reductionism.

There are several flavors of emergentism (and the closely related theories of so-called nonreductive physicalism), but most of them do not dig their way out from under reductionism as they claim to do. This is because emergence usually reflects nothing more than a cognitive limitation on our part. We are just not smart enough to infer the liquidity directly from a complete knowledge of the H2O molecules. There is no objective, measurable property of a bucket of water (including facts about the liquidity of the water) that one could not, in principle, infer given:

  1. a complete and perfect description of each atom of hydrogen and oxygen in the bucket (i.e. a complete set of initial conditions)
  2. a complete and perfect set of physical laws that described the behavior of hydrogen and oxygen atoms through time as they interacted
  3. the vast cognitive power it would require to model all those atoms and calculate their interactions

In general, we are stupid—it is easier by far to frame our understanding of the world in high-level terms, to understand “water” as “sloshing” in certain ways, and even to come up with precise laws about the ways in which water sloshes. But this is just a shorthand way of describing what is actually the aggregate motion of trillions of molecules. This shorthand description does not tell us anything that could not, in principle at least, be derived from the trillions of molecules themselves—its advantage is that it is so much easier to deal with. As David Chalmers has pointed out, emergence is a psychological concept: it is a measure of our surprise at the consequences of low-level natural laws, not a fundamental truth of nature in its own right. Emergence is a reflection of our faulty intuitions, perceptions, and/or cognitive powers. There are no high-level facts or properties that “emerge” only at the high level. A bumper-sticker slogan sometimes invoked by emergentists is “more is different,” but actually more only seems different.
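The flock example can be made concrete. In the deliberately stripped-down toy below (all parameters invented, and angles treated as plain numbers for simplicity), each “bird” follows a purely individual rule: nudge your heading toward the group average. No rule mentions the flock, yet the flock-level property of coherent heading is straightforwardly derivable from the individual rules.

```python
import random

random.seed(0)  # deterministic toy run
headings = [random.uniform(0.0, 360.0) for _ in range(50)]

def step(headings):
    # each bird's rule is individual: move halfway toward the average
    # heading of the birds it can see (here, everyone)
    avg = sum(headings) / len(headings)
    return [h + 0.5 * (avg - h) for h in headings]

for _ in range(20):
    headings = step(headings)

spread = max(headings) - min(headings)
# spread is now tiny: the "emergent" coherence of the flock was implicit
# in the individual update rule all along
```

The “emergent” alignment surprises us only because we did not bother to work out what the low-level rule entailed; the computer has no such trouble.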

It is, perhaps, a tacit recognition that emergence is somewhat weak tea when it comes to explaining the universe around us that in recent years it has been rechristened “weak emergence.” This also distinguishes what I’m talking about here from so-called “strong emergence,” which is a whole different kettle of fish.

More to the point, emergence (invoked in this way) strikes me as an attempt to dodge the Hard Problem by paying lip service to the idea of qualitative (or qualitative-adjacent) essences (like the liquidity of water, and “higher-level” properties in general) but placing the problem out there in the world, when it is really in here, in our minds. There is no liquidity in the world, except that which is directly inferable from the actions of the H2O molecules (in which case the “emergence” of liquidity melts away as a concept capable of explaining anything), but there is a wetness quale in our minds.

The problem that emergence tries to solve (or at least articulate) is the Hard Problem that dare not speak its name. Proponents of most forms of emergentism and nonreductive physicalism are trying to straddle the fence. On the one hand, they have some inkling that strict reductive physicalism is inadequate to account for the universe as presented to us, but on the other hand, they are unable or unwilling to make the freaky metaphysical commitments (to bite the bullets, as it were) that are necessary to address these inadequacies. They don’t want to have to build any magic into the ground floor of their universe, so they try to slipstream it in somewhere in the middle. The sad truth, however, is that we need real magic here, and all mid-level things in a reductionist universe are only may-be-seen-as kinds of things. The only magic you can slipstream into the mid levels, then, is may-be-seen-as magic.

[Image: scene from the movie The Matrix]

In a purely reductionist universe, with no absolute thinghood above the subatomic level, no natural mid-level principles of individuation, and everything just more or less dense patches in the quantum soup, I imagine that the mind of God is like that of Neo at the end of the movie The Matrix. If you have not seen it, I urge you to do so—it is great fun and very well done, and touches on some themes that are relevant to these discussions (to quote David Chalmers again, don’t bother with the sequels).

Much of the action in the movie takes place in an extremely realistic computer-simulated reality (“the matrix” of the title). While the characters are really comatose in reclining chairs or nutrient bath pods with data feeds plugged into the bases of their skulls sometime in the distant future, they perceive themselves to be walking, driving, fighting, etc. in late 20th century America. At the end of the movie, the hero, Neo, has an awakening while in the matrix as he confronts the sinister Agents who want to kill him (dying virtually while in the matrix results in actual physical death). The final confrontation had a great special effect, in that it captured the essence of an inherently non-visual idea and did so simply and clearly. Neo sees the outlines of the floor, the walls, the ceiling, and the three Agents, but all their surfaces from his point of view are a wash of iridescent green computer characters, the same ones that were on the screens in the matrix’s monitoring center back in physical reality. Neo sees through the matrix, stops accepting it on its terms, and sees straight down to the level of the data of which it is made. And of course, this essentially makes him God within the matrix.

In a reductionist universe, God (if there were God in a reductionist universe) sees everything this way. His mind tracks every last neutrino with perfect accuracy, and He does not have to use our shortcuts of chunking patches of reality into “whale,” “bridge,” “apple.” It is only as a consequence of our own perceptual and cognitive limitations that we find it necessary to chunk the universe into “flocks” or even individual “birds.” In real life, there are no higher levels. The universe, to a reductionist, models or computes itself at the lowest of all possible levels. Once all the hydrogen and oxygen atoms follow their basic laws, there is neither any need nor room for any further laws about “liquidity,” “transparency,” or any other high-level properties of water in order for the universe to “know” how water should behave instant to instant. The universe crunches along, doing what it must, not because of any patterns or any way in which such patterns are organized, or because of their purported complexity, but because the particular particles with their particular positions and momenta must do what they must do. “Patterns” are a way of categorizing reality for us, a way of setting up a taxonomy of classifications of what are ultimately physical systems. You can’t possibly get any magical new properties to “emerge” out of a collection of stuff because it is “complex,” above and beyond what you would have gotten out of that same collection of stuff anyway. Anything that is really, really there at the high level must have been really, really there at the low level.

If, that is, we are committed reductionists.

How Naive Is Our Naive Realism about Our Mid-Level Chunks?

There is nothing wrong (in the sense of being incorrect) about our mid-level chunking of reality so we can avoid being eaten by tigers, forage for grubs, etc., any more than there is anything wrong with seeing an apple as red.

Philosophical realism is the claim that the world out there is pretty much as it seems to be. In particular, realism about X is the claim that if X seems a certain way, it’s because X is actually that way. If that sounds vague, there is a reason for it—realism can be taken in a variety of ways. Realism, often modified with “naive,” is a position of taking things at face value, and not overthinking them. Naive realism about experience means that if I see something that looks like a red apple, that conscious event corresponds to an actual red apple in the real world. The apple appears red to me because it really is red, period. The apple reflects photons of red light, and they get absorbed by my retinas, and my brain faithfully registers the information that there is a red apple in front of me. Naive realism takes the mind to be merely reflecting the reality out there.

Naive realism in this example is not true, however, because of course there are no red photons. That is, while things seem red to us, all that really strikes the retinas in the backs of our eyes are photons of certain wavelengths. These wavelengths are just numbers representing a particular periodicity that the photons display. There is nothing in those numbers that suggests redness as we experience it. The association between wavelengths of light in a certain range and redness is one our minds make up out of whole cloth. Color is just the mind’s way of representing different wavelengths of light, but we could have evolved to use some completely different representation with no loss of information about the real world.

Consider the inverted spectrum argument. If someone were born with their optic nerves cross-wired in such a way that when they were shown red it looked green to them and vice versa, so that in effect their perceived color wheel were rotated by 180 degrees, they might never know it. They would receive the same information about the world, they would learn the color names as a small child, and they would agree that a sunset is a deep orange, but it would not really look orange to them the way it does to you. It would look teal, but they would call it “orange.”

The inverted spectrum argument is usually made to convince people of the distinction between cognitive information and ineffable qualia: my inverted spectrum twin has the same information about the world that I do, but entirely different qualia. I am using the scenario to make a different point, however. It should be clear that, given me and my inverted spectrum twin, there is no fact of the matter of which of us is seeing the “right” view of the world. There are photons, there are perceived hues in the mind, and there is a correspondence between the two. The question of what is the “correct” correspondence between the two just doesn’t make sense, since in both cases the actual mapping is arbitrary.
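The no-information-loss point can be sketched in a few lines (a toy model; the wavelength bands, hue labels, and the particular permutation are all illustrative, and any one-to-one remapping would do): composing perception with a bijection preserves every same/different judgment, which is all that behavior can test.

```python
# Hypothetical hue labels for wavelength bands; the names are arbitrary tags.
def normal_hue(wavelength_nm):
    if wavelength_nm < 490:
        return "blue"
    if wavelength_nm < 580:
        return "green"
    return "red"

# The inverted twin: same optical input, hues permuted by a bijection.
INVERT = {"blue": "red", "green": "blue", "red": "green"}

def inverted_hue(wavelength_nm):
    return INVERT[normal_hue(wavelength_nm)]

def same_color(perceive, w1, w2):
    """The only behaviorally testable judgment: same hue or not?"""
    return perceive(w1) == perceive(w2)

# Because INVERT is one-to-one, the twins agree on every judgment,
# even though their inner hues differ everywhere.
wavelengths = [450, 470, 520, 560, 600, 650]
for a in wavelengths:
    for b in wavelengths:
        assert same_color(normal_hue, a, b) == same_color(inverted_hue, a, b)
print("all same/different judgments agree")
```

Nothing in the twins' outward behavior can distinguish them, which is just the point: the mapping from wavelength to quale is arbitrary, so neither mapping is the "correct" one.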

Color as perceived—that is, full-blown qualitative, experiential color—serves as a very good carrier of information that comes into our bodies by way of photons striking the retina, but one could speculate on other ways. Perhaps some alien species could consciously discriminate between all the wavelengths of color that we do, but perceive them through some sort of tactile radar sense, or some other sense modality we cannot even imagine. Similarly, while the sensation of redness conveys certain information to us in our visual field, that same sensation could conceivably convey different information. Perhaps our sense of smell could be wired into some perceptual field of color, for example.

If there are no red photons, and color exists only in our minds, what about sounds? By a similar argument, there is no middle C “out there” as it sounds to us in our mind’s ear. There are just periodic pulses of fluid pressure. Hot and cold are just the aggregate motion of huge numbers of molecules and similarly could conceivably be represented in our minds with completely different qualia. The same could be said of pressure against skin, smell, and taste. Our qualia are only in our minds, and they are created there.

So at this low level of the qualitative sensory aspects of our world, naive realism is false. Assuming that we can claim to know something about the real world, that the world as we experience it internally is in some way like the world out there, at what level of abstraction does realism start to become true?

I would like to suggest that realism is false at a higher level of abstraction than we generally assume. That is, more of the things we think we perceive about the world are created in our minds than we acknowledge. The real world (almost certainly) exists, and its reality constrains what we perceive, but does not determine it. Most of the structures, patterns, and dynamics of the world are “really” out there and exhibit a lot of the regularities we think they do, in the same sense that photons of certain wavelengths are really out there. But as with the redness of those photons, the ways in which we experience them are not really out there. Things are abstractions. We create all things; we infer unity and mid-level individuation in the world.

Seen in this light, consciousness has a much bigger job than just painting the apple red. It must create reality much more broadly, including the apple itself. Just as there are no red photons, there are no rocks, cars, dogs, or numbers. Nature presents us with a wash of particles, a continuous flux of quantum stuff, and we overlay this flux with stories about cars and rocks. Moreover, this story, and the way we create it, is not “merely” cognitive, not just one of Chalmers’s “easy problems.” There is as much a what-it-is-like to think of an apple as there is to taste it. If we end up deciding, as I have (stay tuned for the next chapter), that there are inherent, mid-level things in our phenomenal consciousness (and not just qualities like redness, saltiness, and itchiness) and these mid-level things are just as irreducible to parts as the redness, then we have a big ontological bullet to bite indeed.


The All-At-Onceness of Conscious Experience

…however complex the object may be the thought of it is one undivided state of consciousness.
—William James

Way back in the first chapter, we looked at the Hard Problem, which for people of a certain temperament is a bit radical in its implications. This is the idea that you can never account for the redness of red with a story about causal bonking alone, no matter how much you dress up the bonkings with fancy words like “refer,” “algorithm,” or “information.” Trying to fix this problem by looking at the bonkings from a “high level,” or collecting them into black boxes, as functionalism does, does not work either. Given the reality of phenomenal consciousness, this is troubling beyond the problem of explaining consciousness, because it tells us that any framework for understanding reality (like physics, as currently construed) that consists, at bottom, of a story about causal bonkings is at best incomplete. The redness of red gives us a counterexample to the causal bonking story.

I hope by now that you accept all of this, and that you agree that the Hard Problem is a real problem. If so, you are in reasonably good company—lots of philosophers are troubled by the redness of red. The redness of red is just the tip of the iceberg, however. Some physicalists brush the Hard Problem aside as a mere intuition. Maybe it is, but for reasons I have already covered, it’s an awfully compelling one. In this chapter, I ask you to accept an idea that seems, at first, even kookier, and one that is a bit more abstract, but one that I think is just as compelling. In fact, it is really just a logical extension of the original redness-of-red Hard Problem. If you accept one, you should accept the other. Same bullet, maybe another set of tooth marks.

As we encounter things in the world around us, when do we judge something to be just a heap or aggregate of smaller things, like a pile of sand, and when do we judge it to be a true, unified, single thing? It depends, almost always, on how you look at it. When we look at the world in strict reductionist terms, nothing above the subatomic level really counts as a holistic thing. Are there any things above the micro level that really are inherent, single things in a way that does not depend on how you look at them? Do we have any reason to believe that there are, in contrast to the reductionist view, inherently unitary mid-level things in the universe? Put another way, if some philosophical pedant says, “That’s not a table, it’s just a bunch of atomic matter arranged in a tablewise fashion,” are there any things in the world (excepting the atomic components themselves) of which this is simply not true? Is there anything you could show the pedant and say, “No, that is just absolutely a table”? We do, in fact, have evidence of such things, and the evidence, as with the redness of red, is our own phenomenal consciousness.

[Image: Mucha's Job cigarette paper poster]

I have an art nouveau poster in which a woman is smoking, and there is a stylized curl of smoke rising from her cigarette. When I look at that languid asymmetrical curve, I see the continuous curve in its entirety, all at once. I do not just have some kind of cognitive access to the fact of the curve. The parameters of the curve are not just available to me upon making certain kinds of inquiries. I do not just have a pointer or reference to a lot of data beyond my view that yields results pertaining to the curve when evaluated. The details of my perception are not just at my fingertips, but bam! right there, live, all at once. I see the whole curve now. This is every bit as undeniable as the redness of red. However you might nibble at the edges of my perceptual field, there is a wholeness to that curl of smoke that is manifest before me, in a qualitative way.

In contrast, all we can say of an intelligent computer with its video camera aimed at the curve (LDA, STA, JMP…) is that at some level it may be thought of (by us) as seeing the curve. That is, given an abstract understanding of its algorithm and data structures, one may interpret the functioning of the machine as “seeing” the curve. This, however, is anthropomorphizing on our part, albeit on the basis of the computer’s deliberately programmed design.

As with seeing red, any claims about what a computer could or could not perceive apply equally to my zombie twin. We have no principled way of saying that the zombie sees the curl of smoke, in spite of its ability to answer questions about it, report on it, or claim that it sees it. We are faced with the same failure of entailment, the same explanatory gap as we were with the zombie seeing red.

There is, in contrast, nothing “may be thought of” about my seeing the curve. It is not a matter of interpretation. It is an absolute fact of nature that I really do see that curve all at once, before me. Seen at the low level, as an ant-like CPU crawling over data gravel, there is no inherent sense in which “it all comes together” for a computer, whereas there is an inherent sense in which it all comes together for me.

This is not just another “I see red, the computer will never see red” argument (although it is related). The “seeing red” arguments focus on qualitatively rich but nevertheless cognitively simple aspects of experience. I am talking instead about our ability to have cognitively complicated scenes before us in our mind’s eye, to see the complex as one thing, all at once in its entirety: e pluribus unum. Assuming we take the Hard Problem seriously, as we think of the sorts of mental phenomena that compel us to do so, we often cite the good old redness of red, tickles, itches, pains, saltiness, etc. as paradigmatic qualia. But once we go that far, this wholeness-quale is just as troubling. It is just as much an essence as the taste of ice cream is in the Hard Problem sense.

I would like to distinguish this unity of conscious percepts from the so-called binding problem, however. The binding problem refers to the fact that, for example, the visual processing parts of the brain and the auditory processing parts are quite different, and in fact take different amounts of time to do their jobs. In spite of these facts, we can have a single experience that incorporates elements from several senses at the same time, and they are synchronized. The binding problem is fascinating in its own right, but what I am talking about here is, I think, at least as fundamental. I am concerned not so much with the way in which different sense modalities (vision, hearing, smell, etc.) can be bound together in a single percept, but how anything at all, even within a single sense modality, can have the kind of unity it does. This qualitative gestalt is every bit as strange and inexplicable as the redness of red. Even my fellow qualophiles do not pay enough attention to this.

It could be argued that my percept of a tree is not an indivisible whole: you can break it into parts (leaves, branches, trunk). But that only means that I have a tree percept, and then, often by effort of willful analysis, I have a subsequent follow-on percept of tree parts, albeit possibly with tendrils of reference reaching back to the original unitary tree percept. Just because a cathedral is made of stones, it does not follow that my conception of a cathedral is made of my conception of stones.

My percepts are immediately, manifestly unitary whole things. Regardless of the cognitive or physiological mechanism which supports them, they serve as a counterexample to the doctrine of ontological reductionism. I know I perceive my percepts, and that those percepts really are whole objects, just as certainly as I know I see red. Things, in my mind, are qualia, as are all abstractions—manifestly before me, all at once. Out in the world, there may be only a bunch of atomic matter arranged in a tablewise fashion, but in my mind, if only in my mind, there is a table.

Consciousness gives us not only examples that there are such things as qualitative essences in the universe, but also that there are such things as things. This claim may strike some people as a case of comparing apples and oranges. “Just because you perceive something as an inherent whole doesn’t mean it actually is an inherent whole,” one might be tempted to argue. “You are just interpreting it that way.” But it is the percept itself, the interpretation (if you must call it that), not the thing out there in the world that is being perceived, that I am talking about. “Your percept only seems like a unified whole” is analogous to the claim that “electromagnetic radiation at around 430 THz only seems red.” It is that seems part that we have to explain. My claim here is that the seeming of unity must itself be a unity.

If these percepts are things in the deep, inherent sense that I claim, how many are there? What are their boundaries? Is that stylized curl of smoke really itself a thing, or part of the larger percept of the poster as a whole, or some even bigger percept, one that is connected by filaments of association to others, or memories, or who knows what?

I don’t know (yet). There is more work to be done to answer these questions. The point here is that we only need a tiny bit of unity to break the reductionist’s claim. Maybe I don’t really see as much or as clearly beyond the center of my field of vision as I think I do, maybe I am susceptible to all kinds of illusions that demonstrate how hard it is to specify the edges of my percepts, but as long as I see any of that curl of smoke, we need to say how that could possibly be.

We must take first-person experience seriously, both in the seeing red case and in the case of the unity of our percepts. Both (and perhaps more besides) must be explainable in any final theory of nature we concoct. Such a theory must include principles of individuation that allow for the mid-level things that are my percepts. Gregg Rosenberg (2004) discusses this quite a bit (although from a somewhat different perspective). To use his term, we need a theory of natural individuals.

So when I claim that these unities, however they are demarcated, are some kind of true, inherent, fundamental unities, am I talking about, well, physics? Yes I am. As with the redness of red, I think we need to go all the way down with this. Otherwise we are left with some kind of functional unity made of blind causal bonking, and that would not give us the qualitative sense of that curl of smoke as a curl, any more than it would the color of the poster’s background.

There are inherent, absolute things above the level of the quark, but below the level of the whole universe itself. These mid-level things may only exist in our minds, but that is enough to say that they do exist. Like my seeing red, these things in my mind cannot be illusory. If it seems that there are mid-level unitary things among my percepts, then those seemings themselves must be mid-level unitary things. For my unitary percepts to manifest themselves to me as they do, they cannot just consist of smaller parts integrated only through causal dynamics, bits bonking blindly into other bits, with some sort of functional description emerging from the bonking, any more than the redness of red can. Whatever the crumbs are out of which the universe makes everything else, these things count among them, rather than being things built out of the crumbs. They are just bigger crumbs than the kind we are used to.

I want to emphasize that when I say that my conscious perceptions are “mid-level” things, I am talking about the scale (between quark and universe), and definitely not implying that these things occupy some middle level of a tree of organization. In that sense, the whole point is that these are low-level things. They are big and complex, yet they must count as primitive objects. They can’t be exhaustively characterized in terms of any lower level of description or analysis. There is certainly a huge number of possible conscious percepts—quite possibly infinite. All this being true, we live in a universe in which there is a huge (possibly infinite) number of fundamental components, these components have qualitative essences, and most of them are big and rich, not tiny and simple. Any formulation of reductionism that could accommodate these facts would hardly be worthy of the name.

In the last chapter, I characterized reductionism as consisting of these premises:

  1. everything in the universe is made of simple building blocks
  2. anything we choose to study may, in principle if not in practice, be defined and described completely in terms of the simpler building blocks of which it is made
  3. there is a finite (and small at that) number of types of these basic building blocks
  4. each instance of a particular building block is interchangeable with any other instance of that same building block (one electron is absolutely identical to another electron)
  5. these building blocks are entirely characterized by their functional dispositions (i.e. they have no qualitative essence, just behavior, such as that described by the lowest-level equations of physics)

What I am saying in this chapter is that our percepts and thoughts, whatever else we may say about them, definitely violate #3, #4, and #5. We are left with a baroque universe, with a huge mess of primitive components, a lot of which are big.

It is worth noting, however, that this view is nevertheless reductionist in a sense. Everything in the universe may well be reducible in principle to its component parts—it is just that there is no small number of such fundamental components in the universe, and a lot of those fundamental components are pretty substantial things in their own right. The important respect in which it still counts as a form of reductionism is that under this view, you do not get anything out that isn’t there in the lowest levels. Specifically, this view does not posit any magic “emerging” from a system on the basis of its “complexity” or functional organization. Complexity and functional organization, defined in causal terms, smeared out across time, and dependent on lots of hypotheticals, don’t confer the kind of inherent, just-is, really-there kind of qualitative essence we need to account for the redness of red, or the manifest all-at-onceness of our percepts.

Neuron Replacement Therapy

There is a popular thought experiment that goes like this. Suppose that neurologists characterized each neuron’s inputs and outputs exactly, and were able to engineer a functional equivalent: an artificial device whose inner workings may or may not be similar to those of a natural neuron, but whose behavior, seen in terms of its responses to inputs, was identical to that of a neuron. Now suppose that the neurons that make up your brain were replaced with these artificial neurons, one by one. Once your entire brain was cut over to the artificial neurons, you would have a brain whose functioning at the neuronal level was identical to that of the brain you were born with, but whose workings were entirely artificial and, as such, able to be characterized with an algorithm of some sort.

This thought experiment (called Neuron Replacement Therapy, or NRT for short) is intended to put anti-physicalists and anti-functionalists like me in an uncomfortable position. We either have to say that the resulting artificial brain is not conscious (and if not, we must say at what point in the gradual neuron replacement consciousness disappears, and when it does, whether it winks out all at once or fades away gradually), or we must admit that the artificial brain maintains its consciousness, and therefore full-blown consciousness is realizable by a machine.

Thoughts Are Evidence of Mid-Level Holism

I agree that there is nothing magic about organic or biological systems. There is no reason that consciousness must be manifested in a biological system. Indeed, as a panpsychist, I think that consciousness in some form is likely manifested all the time in all kinds of matter. The problem with the thought experiment is that it begs the question—it presumes exactly the functionalist reductionism that is, in my opinion, at the heart of the matter. It assumes that what makes a brain a mind does so purely by virtue of the complex interaction of lots of blind little autonomous parts, each not knowing or caring about the others, as long as each has the right interface presented to it. No one knows the details of the relationship between neurons (as neuroscientists characterize them) and consciousness, but thoughts come whole, nose to tail. A given percept, thought, or moment of consciousness is what it is in its entirety, all at once, or not at all. It has no parts, so you cannot swap some of its parts out in favor of “functionally equivalent” parts.

Functional Organization Can’t Solve Panpsychism’s Combination Problem

Even if a thought or percept is an example of some kind of fundamental holism occurring in nature, couldn’t it still be generated in some way by the orderly, lawful interactions of smaller parts? Possibly, in some sense, but it could not turn out to simply be the orderly, lawful interactions of smaller parts, not in the way a hurricane just is the orderly, lawlike interactions of trillions of water and air molecules. The interactions of parts may functionally emulate a percept, and they may support it somehow, or give rise to it causally, but they alone cannot be it.

Assuming that there will, ultimately, turn out to be necessary relations between the physical world as we understand it and consciousness, the physical correlates of consciousness would have to display or allow for the kind of holism that our thoughts manifest. This has implications beyond the physicalists’ arguments about NRT. Even some of my fellow travelers in panpsychism seem to shy away from this conclusion and its implications, but panpsychism’s combination problem cannot be solved by functional organization alone. Even if the quarks are seeing red, feeling pain, or craving transcendence like crazy, any aggregation of them cannot be a basis for larger-scale consciousness if that aggregation is achieved through billiard ball bonking. The “integration” or “high levels” you can get out of causal poking, over time, characterized in terms of unrealized hypotheticals, can’t give you the intrinsic all-at-onceness we experience, no matter how hard the quarks are rooting for us.

I want to be clear about the bullet I am biting. I think epiphenomenalism is wrong—qualitative consciousness has observable, causal powers in the physical world. Moreover, it has an inherent, indivisible unity, which is at least as weird as its qualitative aspect (that old redness of red). We either have to be orthodox physicalists, or we must embrace some freaky holism at work in the world: really-there holism, not just may-be-seen-as holism, holism that has causal implications that somehow have escaped the notice of the people in the white lab coats.

Physicalists hate this sort of thing—I have an intuition of qualitative properties, or holism, and on the basis of that intuition I make huge claims about the fundamental building blocks of the physical universe. Oh, and by the way, these claims entail causal happenings that ought to be empirically falsifiable. I appreciate the distaste for this, but I don’t see how I could explain away these “intuitions” without making such claims. To twist the slogan of the New York Times, I am here to draw all the conclusions fit to draw, without fear or favor.

I am placing my bets on there being something in the physical world that manifests this, something causal that exists as a whole at a much larger scale than an electron. I am insisting on something that violates the apparent causal closure of physics, or at least bends it quite a bit. Where in the physical world might we find this kind of inherent wholeness, as opposed to the just may-be-seen-as wholeness that functional analysis of systems of parts gives us?

Quantum Mechanics

It has been said that the reason that so many people relate consciousness to quantum mechanics is a sort of conservation of mysteries: consciousness is mysterious, and quantum mechanics is mysterious, so maybe they are the same mystery. While the connection between them is admittedly circumstantial, they are mysterious in similar enough ways that we may speculate that at the very least quantum mechanics is a promising place to look for consciousness in the natural world (see Seager (1995) for a similar line of speculation).

First, we seek a place for consciousness at the very lowest levels of nature. As I’ve already argued, you can’t build it out of the causal dynamics of the lower levels bonking into each other. Taking the Hard Problem seriously, I claim, forces us to be panpsychists, and that means putting consciousness (or something that scales up to consciousness as we know it) way down on the ladder of stuff in the world. Quantum mechanics is the lowest rung on the ladder, as low as our understanding of the natural world goes. It is the layer of inquiry at which we know only the behavior of the things we study, but we cannot, in principle, know the intrinsic nature of whatever is doing the behaving. No one knows what an electron really is, beyond our ability to characterize its extrinsic behavior as described by the relevant quantum laws. It is at this level, following Russell, that we ought to find consciousness.

Second, and more to the present point, at least as striking as the qualitative nature of consciousness (what is it like to see red?) is the all-at-onceness of our thoughts and perceptions, their intrinsic unity. Quantum mechanics gives us some counterexamples to the orthodox reductive physicalist way of seeing everything big and complex as (mere) aggregates of tiny simple things. The very strange world of quantum mechanics is populated by bunches of things that come together to form one larger thing that can really no longer be thought of as a heap of separate components. In a quantum entangled system consisting of two particles, for example, we have multiple parts coming together to form a thing that is inherently, absolutely, one single unitary thing, whose behavior is described (and plausibly could only be described) by a single Schrödinger wave function.

Over the decades, there has been a fair amount of hand-wringing over the limits of this quantum holism. Is there just one big wave function for the entire universe? What really separates a system under study from its environment at the quantum level? These questions have not been answered to this day, but there is no doubt that some kind of real, ineliminable macroscopic holism exists out there in the physical world.

As with our percepts, a quantum entangled system is one thing, not an aggregate that may be seen as a thing when looked at or analyzed a certain way. The ontological reductionism inherent in a classical or Newtonian view of the natural world means that consciousness cannot find a home in a world that is exhaustively described by such a view. Because quantum mechanics sidesteps this reductionism by providing a real basis for holism in the universe, by process of elimination, we ought to strongly suspect that consciousness and quantum phenomena are somehow related.

This idea is a version of what is called strong emergence, where truly new stuff comes into existence when you arrange low-level things in certain ways, as opposed to the weak emergence of the flock from the birds. See Silberstein (2001) for a discussion along these lines. (With regard to consciousness, then, I am a strong emergentist, although I’m not sure I love that term. It seems a bit woolly and open-ended, and the term “emergence” might not quite capture what is ultimately going on. Moreover, it is still, in spite of the qualifying adjective, a little too adjacent to its weak sibling for my taste, given that I think they are fundamentally different phenomena.)

Third, there is the problem of the alleged causal closure of the physical world, and the way quantum mechanics, and the holism it implies, allows us to wiggle out of it. The argument is often made that the laws of physics are airtight, that (assuming they are true) they account completely for everything that happens in the world, leaving no room for consciousness to have any measurable effect on anything. Unless, that is, you define consciousness strictly in terms of physical dynamics in the first place, which is to say that you subscribe to physicalism (and thus, in my opinion, define away the interesting questions and properties of consciousness).

Sean Carroll (Goff & Moran, 2022), for one, bangs this drum a lot. He gets a bit exasperated with us panpsychists, who claim, on the basis of our first-person intuition, that the most successful, precisely quantified theory in the history of humankind must be wrong. He stresses that our theories of fundamental physics (the “core theory”) may have some soft spots, or uncertainties around the edges, but those limits are way, way, way outside the realm of anything that happens in a human brain. The claim that the core theory is wrong at normal energy levels, in what we consider a normal gravitational field, while not logically ruled out, must clear an almost laughably high bar.

Point taken. Can we have a robustly causal panpsychism that does not so much contradict the well-established results of quantum mechanics as supplement them?

It certainly seems that the laws of quantum mechanics are true, and dead-on accurate. The loophole in the causal closure argument may be that, while perfectly accurate, the laws of quantum mechanics only yield probabilities from an empirical point of view. They specify a distribution curve, not precise predictions. They predict collective behavior with 100% accuracy, but are agnostic about individual behavior.

If you run a quantum experiment 10,000 times, you can be confident that your outcomes will converge on this distribution curve, and that for any one trial, the probability of one outcome over another is determined by the curve, but quantum mechanics is famously unable to predict the specific outcome of a particular single trial. It is an inherently indeterministic theory. Moreover, it is generally accepted that this indeterminacy is not a flaw in the theory or evidence of its incompleteness, but a fundamental feature of physical reality itself. No matter how well you know an electron’s initial conditions, once it is in flight, you cannot predict its position before you measure it. This is not because of any practical limitation on our ability to characterize the initial conditions of the electron, or any inaccuracy in the theory, but because the electron cannot properly be said to have any definite position before you measure it. The position of the electron before you measure it is literally unknowable. It has only a likelihood of being in one place, and a different likelihood of being in another place. So the best theory we have about how the physical world behaves, and most interpretations of that theory, are, when it comes right down to it, indeterministic about the precise behavior of the physical world at a low level.
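A toy simulation makes the distinction concrete. In the sketch below (my own illustration, with an arbitrary probability standing in for the Born rule, and a seeded pseudorandom generator standing in for nature’s indeterminism), the statistics of 10,000 trials are fixed in advance, while no single trial’s outcome is given by the theory:

```python
import random

# For a qubit in state a|0> + b|1>, the Born rule fixes only the
# probability P(0) = |a|^2. Suppose |a|^2 = 0.36 (an arbitrary choice).
p0 = 0.36
random.seed(1)  # seeded only to make the sketch reproducible

# 10,000 "measurements": each outcome is individually unpredictable...
outcomes = [0 if random.random() < p0 else 1 for _ in range(10_000)]

# ...but the collective frequency converges on the prescribed curve.
freq0 = outcomes.count(0) / len(outcomes)
print(freq0)  # close to 0.36
```

The analogy is loose, of course: a pseudorandom generator is secretly deterministic, which is precisely what most interpretations of quantum mechanics deny of nature.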

The only possible exception to this is the possibility that some kind of as yet undiscovered “hidden variables” are at work which, once discovered, would allow us to predict the electron’s position once more with Newtonian accuracy. Albert “God does not play dice” Einstein spent a great deal of his later life hoping in vain for such a theory. Since Bell’s theorem, however, we have known that any hidden variable theory must be nonlocal, and the surviving nonlocal candidates (Bohmian mechanics, most prominently) attract only a small following today. Such theories are often regarded as a philosophically (rather than scientifically) motivated attempt to restore determinism to the physical world.

“Random” Is a Big Tent

People mean different things at different times by the word “random,” but mathematicians have a pretty specific thing in mind when they use the term. You may have a hat containing ten cards, each with a different numeral written on it, from 0 to 9. You may draw cards from the hat, writing down the numeral drawn each time, and then putting the card back for the next draw. It is quite unlikely, but perfectly possible, for you to draw 0123456789, or 111333555777. You could say, colloquially, that these resulted from fair “random” drawings of cards from the hat, and that therefore the resulting sequences are random. A mathematician, speaking technically, would tell you that no, those weren’t random at all. The digits do not display a random distribution.

In contrast, the first digits of π are 314159265358979. Colloquially, you might say that those digits aren’t random at all, since they are carved in stone, written into the fabric of the universe. But the mathematician would say, no, those are random, even though they are calculable, and derived deterministically.

When the equations of quantum mechanics say something is random, they are not making any ontological claims about how the numbers came to be; they are merely like our mathematician, observing that they display a certain conformance to a distribution curve. There are many different actual outcomes of a given set of trials of an experiment that would still perfectly fit a given distribution curve, and thus not violate any laws that were given strictly in terms of conformance to such a distribution curve. The statistical distribution of letters I type on a keyboard might be the same whether I am typing a sonnet, a recipe, or meaningless gibberish. A complex coherent pattern may have the same statistical distribution as random noise—indeed, any maximally dense information (i.e. maximally compressed information) is statistically equivalent to random noise by definition.
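This last point can be demonstrated directly. In the sketch below (my own illustration, using Python’s standard zlib compressor), a highly patterned sequence shrinks to a tiny fraction of its size, while random bytes, already at maximal information density, do not shrink at all:

```python
import os
import zlib

pattern = b"0123456789" * 1_000  # 10,000 bytes of obvious structure
noise = os.urandom(10_000)       # 10,000 bytes of random noise

c_pattern = zlib.compress(pattern, 9)  # level 9 = maximum compression
c_noise = zlib.compress(noise, 9)

print(len(pattern), "->", len(c_pattern))  # shrinks dramatically
print(len(noise), "->", len(c_noise))      # barely changes, may even grow
```

The compressor’s own output is statistically indistinguishable from the noise, which is exactly why compressing it a second time gains nothing.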

The door is open, at any rate, for patterns to result from the behavior of quantum systems whose coherence is not predicted by quantum theory, but which nevertheless does not violate the predictions that quantum theory does make. We can squeeze the causal efficacy demanded by non-epiphenomenal panpsychism through a loophole in the stochastic nature of quantum predictions without actually contradicting the established theory.

So—quantum mechanics allows for the existence of high-level entities that are causally efficacious, and whose behavior, while constrained by other entities, has an element that can only be called “random” by our best third-person physical theories.

Maybe consciousness occurs in bursts, in the collapse of quantum superpositions, as Hameroff and Penrose claim. Maybe some kind of large-scale quantum superposition is sustained in the warm, wet environment of our brains by using the tubulin cytoskeleton of our neurons as some kind of quantum insulator. Maybe not. Something like that, something crazy sounding, however, will turn out to be the case. I speculate that at some point in the future it will be discovered that the brain’s activity depends crucially upon quantum phenomena, which are amplified to the level of neurons firing.

Of course, the operative word here is speculate. It is worth noting that it is only under certain special types of circumstances that quantum systems can evolve in a state of entanglement or superposition without decohering or collapsing back to a classical state (leaving aside the philosophical thicket of the measurement problem). Under ordinary circumstances, we do not see quantum systems of any great scale (I avoid using the word “complexity” because it implies precisely the wrong thing, namely that a quantum system is made of parts, and that there may be fewer or more of those parts). So, like Hameroff, I suspect that we will eventually find structures in the brain that would support some reasonably large-scale quantum superposition which implies isolation from the surrounding environment.

But Back to NRT

Physical systems in states of quantum entanglement display the holism that I claim is a non-negotiable necessity for instantiating consciousness. Further, I have speculated that, as quantum mechanics contains the only currently known gap in the causal closure of the physical, the indeterminacies of quantum mechanics are, in fact, the fence that modern science has built around these natural individuals, with a sign that says, “Something funny is going on in here, and we can never know what.”

For the moment, however, let us set aside my suspicions about quantum mechanics. Perhaps my speculations about quantum mechanics are completely wrong. Perhaps consciousness is some kind of hitherto undiscovered field or force that is modulated or generated by neurons. Maybe Rosenberg (2004) is right, and consciousness is built into the mesh of causation itself. Moreover, no matter how this question is answered, the quantum superposition or force or field that is consciousness could be something that spans lots of neurons, as Hameroff and Penrose believe, or it could be something that happens inside a single neuron, as suggested by Jonathan Edwards (2006).

Whatever kind of physical phenomenon thoughts, percepts, or moments of consciousness turn out to be at the most fundamental level, neurons have evolved to generate or exploit this phenomenon in some way. But it must be these (fields, forces, superpositions, collapses thereof, or whatever) that instantiate consciousness in the senses I am interested in for the purposes of this book: the redness of red, and the holistic unity of our thoughts and percepts. The missing physical link is something weird, and not the supporting neuronal infrastructure.

If the artificial neurons in the NRT thought experiment can also exploit or generate these things, then great—consciousness is preserved in the artificial brain. If not, not, and the NRT thought experiment fails. If the field or force or superposition, or physical blob of whatever kind that instantiates this holistic consciousness spans multiple neurons, it will not be something that can be carved up and characterized in terms of quantifiable inputs and outputs between neurons. In such a case the NRT hypothesis is untenable.

If, on the other hand, the stuff of consciousness (force, field, whatever) happens inside individual neurons, it could be that the artificial neurons will not be able to emulate natural neurons with an explicitly specified algorithm. In this case, the non-algorithmic stuff in the neuron guides the neuron’s behavior in non-algorithmic ways. Otherwise, if the stuff in the neuron is emulatable with an algorithm (the epiphenomenal case), the end result of NRT will be a zombie. All of its neuronal behaviors and motor outputs will be identical to those of a conscious mind, but it will not, in fact, be conscious, at least in the “what is it like to see red” sense.

Either way, whether whatever instantiates consciousness spans neurons or is somehow curled up inside an individual neuron and manifests itself causally only by influencing how and when the neurons fire, there is something weird going on, something physically weird. I am flatly claiming, in Patricia Churchland’s phrase, that there is “pixie dust in the synapses,” except that it’s even worse than that. In past chapters, I have emphasized the qualitative aspects of the pixie dust, which implicitly leaves open the possibility of a more benign, conservative panpsychism: there is weirdness, but the weirdness might be confined to the crumbs of physical reality, and everything scales up according to the usual rules of causal dynamics that describe how reality stacks neatly. Here I am going a bit further in my claims about the pixie stuff. It’s not just dust. Maybe pixie clumps, or pixie blobs. Moreover, as with the redness of red, for the same reasons, epiphenomenalism is false: the pixie blobs influence (or even constitute) our cognition, and have macroscopic causal effects on the world. The pixie blobs do stuff.

This entails taking realism about consciousness to a new level. There is more to the mystery of phenomenal consciousness than accounting for some kind of qualitative paint that coats the otherwise coldly cognitive objects and data structures in our minds. Qualia are not just unstructured sensory qualities, but the objects themselves as well. To the extent that this essential objecthood is perceptible to us, and figures into our cognitive lives (i.e. to the extent that epiphenomenalism is just as false with regard to this wholeness-quale as it is with regard to the redness of red), this must go all the way down. The way we think of physics must accommodate it.


Time Consciousness and the Specious Present

…and I spread it out broader and clearer, and at last it gets almost finished in my head, even when it is a long piece, so that I can see the whole of it at a single glance in my mind, as if it were a beautiful painting of a handsome human being; in which way I do not hear it in my imagination at all as a succession—the way it must come later—but all at once, as it were.
—Mozart, on a piece of music, via William James

We all hear music the way Mozart describes, although usually for much shorter riffs than entire symphonies. I have argued that the all-at-onceness of our thoughts and perceptions is at least as inexplicable (and just as qualitative) as what it is like to see red. Examples of temporal all-at-onceness are, I think, every bit as compelling as the visual/spatial all-at-onceness of the curl of smoke in an art nouveau poster. This line of thought will eventually force us to ask questions about time itself and how much we really know about it.

My Notion of Motion

The temporal aspects of consciousness can be illustrated visually too, of course. Imagine seeing dust motes swirl around in the air in the bright sunlight coming through a window, or someone riding a bicycle past you on a street. When you see these things, you see them in motion. That is, your consciousness is of objects in motion, just as directly and absolutely as your consciousness of a red tomato really is of redness. There may be all sorts of neurobiological and cognitive tricks going on behind the scenes, so to speak, but my actual subjectively experienced moment of consciousness is not instantaneous—it has temporality built in. It is, as Horgan and Tienson (2002) say, temporally thick.

The motion of something we see moving is not something we infer or conclude or extrapolate, but something we see, right there in the perception, just as much as shape and color. Our conception of time is not, like the weird laws of quantum mechanics, some counterintuitive scientific theory that our mathematics drove us to accept, but that we will never quite feel in our guts. We do feel time in our guts. A given moment of consciousness does not exist as a snapshot taken at a particular instant, or even a series of such snapshots from which we intellectually infer continuous change. As William James (1952) said:

…between the mind’s own changes being successive, and knowing their own succession, lies as broad a chasm as between the object and subject of any case of cognition in the world. A succession of feelings, in and of itself, is not a feeling of succession. And since, to our successive feelings, a feeling of their own succession is added, that must be treated as an additional fact requiring its own special elucidation… [emphasis original]

Or, as D. C. Williams put it (1951), “…we are immediately and poignantly involved in the jerk and whoosh of process, the felt flow of one moment into the next.”

For perception of motion to exist at all, it must be what it is, in its entirety, over a non-zero period of time. Whatever a moment of consciousness is, if you cut a piece off temporally, it just won’t be the same moment of consciousness. You cannot be conscious of a piece of music, even a short advertising jingle, without having it temporally in your mind’s ear as one undivided thing. As Dainton (2000, p. 127) asks, is a strictly durationless auditory experience even possible? Even of something like a single click? For a sound of any kind to be what it is to you, there always has to be an attack and decay of some duration.

There is a spooky way in which consciousness spans time. It is not what it is at a given instant, the way a hammer is, but can only be what it is smeared out temporally. That is, one can imagine a hammer winking into existence for an infinitesimal period of time, then winking out again, and for that instant, it would have been a complete hammer. But my percept of Marilyn Monroe breathily singing “Happy birthday, Mister President” simply takes time. It is a single percept, but it would not be what it is if it were just an instantaneous slice of that experience.

James commented on this also, and used (but did not coin) the term “the specious present” for the experienced present, which has a brief duration of its own rather than being an instantaneous point. As he said (James, 1952):

In short, the practically cognized present is no knife-edge, but a saddle-back, with a certain breadth of its own on which we sit perched, and from which we look in two directions into time. The unit of composition of our perception of time is a duration, with a bow and a stern, as it were—a rearward- and a forward-looking end. It is only as parts of this duration-block that the relation of succession of one end to the other is perceived. We do not first feel one end and then feel the other after it, and from the perception of the succession infer an interval of time as a whole, with its two ends embedded in it. The experience is from the outset a synthetic datum, not a simple one; and to sensible perception its elements are inseparable, although attention looking back may easily decompose the experience, and distinguish its beginning from its end [emphasis original].

Given this immediate, undeniable temporality built into so many of our perceptions, the big question is to what extent this has metaphysical implications. Put another way, can we account for the subjective experience, the phenomenology of the situation, without making extravagant claims about the nature of the universe?


On the one hand, it could be the case that the infinitesimal point that we usually think of as being “now” is an abstraction foisted on us as a byproduct of calculus, and is not real. There may well be no precise point of “present” that divides “past” from “future,” and William James’s saddleback present is not just a phenomenological or psychological fact, but an objective truth of the real world. In this case, our consciousness just directly perceives a temporally smeared-out reality. Let’s call this position the temporal realist position: time really is smeared out just the way it seems to us, and we simply perceive it directly that way. The part of this position that pertains to experience only is sometimes called the extensionalist position, since it posits that experience itself is extended through time.

Many philosophers of time and consciousness would not agree, but I believe that extensionalism is metaphysically strange. I have been using the term all-at-once to describe a certain holism in our thoughts and percepts, but in the context of the present discussion this is exactly not what I want to convey. All-at-once suggests a simultaneity, an instantaneousness, that is exactly what extensionalism throws out the window. At the same time (har!), I want to preserve the sense of holism at the core of what I meant all along by all-at-once. I perceive things, the length and breadth of them, the beginnings and the ends, and I perceive them as one, at once, if not at one instant. We are all, on a smaller scale, like God as Boethius conceived of Him, taking in all of existence, past and future, in one massive totum simul. In time, no less than in space, we perceive non-zero things as entireties. I perceive, and then I may, as a further effort, have a subsequent percept of the original percept as being made of parts, but my percepts are not therefore made of parts. A macro-percept is not just an aggregate or composite of micro-experiences.

If this temporal holism is true, it is weird. For this wholeness of perception to take place throughout a non-zero length of time, it would seem to be the case that I can see forward in time, or perhaps backward in time, or both. My mind is at once in immediate touch with the bow and the stern of a percept, actually reaching through time to them and not just potentially or functionally connected to some memory trace of them. My experience of a car horn honking would not be what it is at any point in its timeline if any part of it were missing or different. If extensionalism is true, then I actually touch the past and/or future directly with my mind. As I said, weird.


Must there be such a tight correspondence between time as we experience it qualitatively and “real” time? Am I so sure that our immediate perception of time (stem to stern) must have actual, physical time embedded in it somehow, or must somehow actually span time? Do we need time to represent time, or is this just a failure of imagination on my part? Rather, could it be the case that we represent time to ourselves using something other than time itself?

Maybe there really is an objective, infinitesimal point of “now,” and our minds somehow buffer information from successive moments. As each moment of consciousness happens, it could include this buffered residue from recently passed moments smeared out in the appropriate way. Husserl used the term retention to describe this. Moments just passed are preserved not in long-term memory, but in a retention that is given whole to consciousness all at once. Let us call this position retention theory. Retention theory helps somewhat to overcome the sticky metaphysical problems with extensionalism and addresses the concerns nicely summarized in this quote by Thomas Reid, which I swiped from Ian Phillips:

[I]f we speak strictly and philosophically…no kind of succession can be an object either of the senses, or of consciousness; because the operations of both are confined to the present point of time, and there can be no succession in a point of time; and on that account the motion of a body, which is a successive change of place, could not be observed by the sense alone without the aid of memory.

It is a real stretch to think in terms of actually smeared-out experiences that are nevertheless perceived as one thing, given whole to consciousness. If perception happens at a point in time, then as Reid says, we must employ some kind of retention to perceive succession.

Computers can do a remarkably good job analyzing data (like sound waves) over time without any suggestion of metaphysical strangeness going on. They employ a sort of retention. They map the waveform to data structures, then perform their analysis on the data structures. I’d rather not go into the computer consciousness debate again here, but my argument against a computer having time consciousness would be similar to my arguments against it having any phenomenal consciousness at all.

Briefly, we have no reason to believe that a computer perceives duration the way we do. Rather, it computes itself into a particular state (in the technical sense, in which the computer is seen to implement a Finite State Machine). This state is manifested by a particular (possibly quite long) integer. By virtue of being in this state, the computer has a predilection to produce certain outputs that we might interpret as meaning that the computer has “perceived” the waveform, but at any instant during its analysis, the computer was just in a particular state, looking at a tiny crumb of data, and transitioning to another state as a result.
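As a minimal sketch of what such an analysis looks like from the inside (my own illustration, not a model from the literature), here is a machine whose entire condition at any instant is a single small integer, and which sees exactly one sample of the waveform at a time:

```python
def count_zero_crossings(samples):
    """Scan a waveform as a finite state machine, one sample at a time."""
    state = 0       # the machine's whole state: sign of the previous sample
    crossings = 0
    for s in samples:            # one crumb of data per step
        sign = 1 if s < 0 else 0
        if sign != state:        # a bare state transition, nothing more
            crossings += 1
        state = sign
    return crossings

# One cycle of a sampled sine-like wave crosses zero twice.
wave = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7, 0.0, 0.7]
print(count_zero_crossings(wave))  # → 2
```

We might gloss the output as the machine having “heard” a full cycle, but at no instant did it hold the waveform whole; it only ever occupied a state and inspected one number.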

Some have argued against various construals of retention theory on the basis that it predicts results that we simply do not observe (Thompson, 1990; Kelly, 2005). In particular, if my seeing the long arc of a baseball after it has been hit were due to my retaining each successive position of the ball and superimposing these retentions on my current moment of consciousness, I would see not a ball in motion, but a static arch, perhaps with the longer-ago ball images growing fainter, so that the overall impression would be that of a comet with a parabolic tail. Likewise, if I heard a song according to retention theory, I would hear a cacophony—a simultaneous clash of notes, or at best a chord.


I think that these objections are imaginatively constricted and do not give retention theory a fair hearing. Why should we insist on projecting our presumed time-sense onto our instantaneous space of (visual and aural) perceptions in this way? Leaving metaphysics aside for a moment and just speaking descriptively about our experiences, how can we think about qualia in a specific, quantitative way? How many qualia are there? We know we’ve got the five senses, and within each of those, there are a bunch of variations. For example: hue, saturation, and brightness for each pixel of the visual field, sweet, salty, sour, and bitter for taste, and a whole lot of different things we can feel on our skin (tickles, itches, pain, temperature, etc.). I have made the case that these primitive, unstructured sensory qualia only scratch the surface of the different kinds of qualia in our consciousness, and that a whole lot of larger, more complicated things must also count as qualia.

My intention here is not to get into the no-man’s land between qualitative consciousness and cognition (although I will later) but to argue that the range and varieties of qualia are huge. When we see the baseball in the air, we see it smeared out, but not in such a way that the smearing-out takes place within the instantaneous visual field; instead, maybe the smearing-out happens along an entirely different quale, a time quale. Similarly, the notes of a song are imbued with this quale as well, and not all jumbled into an instantaneous aural experience as a chord. What is it like to experience duration? It just is what it is, unique and irreducible to any other qualia, and a given conscious experience can have a temporal aspect along with, say, visual, aural, and emotional aspects without any of these aspects clashing or having to be mapped onto another. Maybe the combination of this time-quale with the others is no more mysterious than the usual binding problem of how we combine different sense modalities in our percepts.

This construal of retention theory does not necessitate anything like a late-night comedy five-second delay: each individual moment of consciousness would be experienced as promptly as it occurs, along with a continuum of just-past moments of consciousness, and would then itself be retained along a sort of temporal axis, to be similarly subsumed by subsequent moments of consciousness. After some time, the longer-ago moments fade out completely. What we perceive as the immediate undeniable passage of time, directly perceived—that is, the time we experience whenever we see anything moving or hear just about anything at all—is, in a sense, an illusion created within the mind. We don’t actually, directly experience time as it exists “out there”, but instead somehow map “real” time onto something we conjured internally. Just as redness is only how we happen to paint our experiences of certain wavelengths of light, and is only arbitrarily associated with that range of wavelengths, this time-quale merely represents actual time, and is only arbitrarily associated with it. The real nature of actual time is then as imponderable as the “real” color of the photons we see as red. Let’s call this position the time-quale position.
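The mechanism described here can be made concrete with a toy model (my own sketch; the decay rate and cutoff are arbitrary): each new moment arrives at full strength, every retained moment fades at each step, and moments that fade below a threshold drop out of retention entirely:

```python
DECAY = 0.5    # arbitrary: each step halves a retention's strength
CUTOFF = 0.01  # arbitrary: retentions fainter than this drop out

def retain(buffer, new_moment):
    """Fade all retained moments, drop the faded-out, add the new one."""
    faded = [(m, w * DECAY) for m, w in buffer if w * DECAY >= CUTOFF]
    return faded + [(new_moment, 1.0)]

buffer = []
for note in ["C", "D", "E", "F", "G", "A", "B", "C'"]:
    buffer = retain(buffer, note)

# The present note is vivid, the just-past trails off behind it, and
# the first note has faded out of retention altogether.
for note, weight in buffer:
    print(note, weight)
```

Nothing hangs on the particular constants; the point is only that a strictly moment-by-moment process can carry a graded, ordered residue of the just-past.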

Time to Bite Another Metaphysical Bullet?

On the face of it, it seems that the retentionalist/time-quale position is much easier to swallow than the extensionalist/temporal realist position. Why make a (somewhat outlandish) metaphysical claim when you can make a merely psychological one instead? Certainly adding just one more kind of quale to the range of qualia in our heads doesn’t seem much flakier than the claims I have already made about qualia and their implications for nature, but positing objectively smeared-out presents, and direct perception of them, is an entirely different matter.

Ultimately, however, while the extensionalist position implies some metaphysics that are hard to swallow, the retentionalist position is impossible. I once again appeal to direct subjective experience. The retention theory/time-quale position entails a distinction between time as perceived (the time-quale) and something else, which I will call scientific, or actual time. For a retentionalist, time actually passes in the real world, in scientific time, and throughout time, different sensory impressions are made upon the mind and buffered there as they are experienced. These impressions are tagged with a timestamp in some way and ordered appropriately. When the buffered information is presented consciously as part of the just-past of a subsequent moment of consciousness, it is strung together and presented as one thing, imbued with the time-quale. Each of the seemingly smeared-out moments we have ever experienced, then, has actually been perceived in an instantaneous flash, and the smeared-outness through time that we think we are perceiving is really another quale, like a new color.

This seems plausible enough at first, but it is a very fine line to hold. That which is mysterious about time, that which seems unlikely to be captured in an instantaneous percept, is not just some collection of facts about scientific time distilled from some formulas, but is right there in our immediate temporal percepts. By positing a distinction between scientific time and perceived time, we were trying to let the mind have its temporally smeared-out percepts, but in a way that is metaphysically “safe.” The aspects of time that make it metaphysically inconvenient to give directly to consciousness are to be cordoned off in the realm of scientific time, while the mind plays with its instantaneous time-quale and gets its timestamped retentions in order.

But now we have to ask ourselves if we can get away with this maneuver. Can we separate our sense of duration from scientific, actual time in this way? How much of what we know about time is already built, inextricably, into our intuitive sense of duration? When we speak of our sense of a non-zero duration being contained in a zero-length instant of “actual” time, to what extent is this the same as (nonsensically) speaking of a non-zero amount of time being contained in a zero-length amount of time? As David L. Thompson said (1990):

…if all our ideas are based on experience, then of course the notion of objective time, as we understand it, (and what else can we speak about?) must be based on experience. The objective notions of scientific time, and any philosophical concepts based on these, must be constituted out of our original experience of internal time [emphasis original].

Can everything we experience about time just be the paint we apply to sequences of timestamped retentions, the way red is the way we paint certain wavelengths of light? If so, then time presents no problems for us beyond the familiar problems with all qualia, and retention/time-quale theory is plausible. If not, we are forced to the metaphysical strangeness of extentionalism and, once again, we are making a grand claim about the fundamental levels of reality based on a subjective intuition.

To what extent does retention/time-quale theory let the fox into the henhouse? Even if there is a radical mismatch between external time and our subjective experience of time, this may not help if the problems about time consciousness are inherent parts of that experience. Several of the quotes above can be summarized as saying that while my red experiences are not themselves red, my experiences of time are, and must be, temporal.

Are we sure? Or is it conceivable, even in the abstract, for external time to be a completely different animal than experienced time? When I consider, for example, my notion of motion, for the arc of a baseball to be contained in its entirety in a single instant of “actual” time would mean that “actual” time would have very little in common with any conception of time that I understand. The mysterious essence of time, that which makes it inconceivable to compress into a timeless flash, may already be there in the subjective experience of time.

Moreover, as St. Augustine said in Confessions XI (thanks to Natalja Deng):

If any fraction of time be conceived that cannot now be divided even into the most minute momentary point, this alone is what we may call time present. But this flies so rapidly from future to past that it cannot be extended by any delay. For if it is extended, it is then divided into past and future. But the present has no extension whatever.

People who believe the version of retentionalism that holds that all perception is instantaneous (the “presentists”) are failing, I think, to appreciate just how short a time 0.0000… seconds is. There is no action in zero seconds, no activity whatsoever; certainly no neurological activity. I claim that there is no phenomenological activity either.

If you think of a four-dimensional block universe, could there be consciousness in a perfectly “flat,” durationless 3D slice of it? If such a timeslice winked into existence for an infinitely short time and winked out again, and that were the only universe that ever existed, could there be consciousness in it? I think not, and even if you think the answer is yes, then I think the metaphysics of that “yes” are at least as problematic as the metaphysics of the extentionalist position—you are claiming a consciousness that floats completely free of any physical process. If the retentionalist claims that zero does not really mean zero, but just some pretty short time, then the genie is out of the bottle: time itself is necessary to perceive time, and you might as well call yourself an extentionalist.

As to the reluctance to bite a metaphysical bullet when we might be able to get away with biting a psychological one instead, I have already argued that we have to bite a metaphysical bullet anyway to see anything all-at-once, even a stick lying on the ground. Extending this into the realm of the temporal as well as the spatial and conceptual is not much more outrageous, and may actually clear up some of the mysteries surrounding time that have nothing to do with consciousness.

There is a lot that science does not understand about time, and consequently is silent about. Science generally treats the universe as a four-dimensional block, with the Big Bang at one temporal end. Leaving aside some wrinkles involving relativity, science speaks of points in time, just as it does points in space, and these points can be thought of as three-dimensional cross sections or slices taken out of this 4D block universe. But nowhere in science, certainly not in physics, is there any mention whatsoever of a constantly moving privileged point or timeslice called “the present.”

What makes now now? Is it just a psychological trick? My point here is that the hard sciences are superb at describing the things they do describe, but there is a great deal of room in the places where they are silent for conjecture about what is really going on. Speculations about real live smeared-out presents, and different presents of different durations for different consciousnesses, do not so much contradict any scientific facts as they try to fill in some of those gaps.

If a thought or percept is temporally thick, what exactly does this mean? I have a strong intuition that my percepts exist through time, that I have direct experiential contact with something that spans a non-zero amount of time. Does this mean I see the future? Not really. That would imply that I have an experience at one point in time of something that takes place at another point in time. It does not really make sense to speak of any experiences at a point in time. They don’t come in points.

I do not know how experiences are individuated, or if there ever will be any hard and fast criteria for individuating them. But part of the point of calling my qualitative subjective experience qualitative is the claim that however an experience may or may not be individuated as you scale up, you certainly cannot subdivide it by scaling down. Experiences tend to fuzz out around the edges, and it may be hard to tell exactly where their outer boundaries are, but I am certain that somewhere within those fuzzy boundaries, an experience must be what it is in its entirety, as a whole, not as a function of any “parts.” What I am now suggesting is that this indivisible, all-at-once whole exists as it does over time, in addition to whatever other sense in which it might exist.

I have already argued that qualia give us a counterexample to the reductionism implied by the physicalist worldview. However accurate the hard sciences are at describing the phenomena they describe, there are qualitative essences that they are silent about. However much we may be deceived about the exact colors we see, or however much certain kinds of illusions may trick us, as long as there is even a tiny crumb of any kind of color, sound, taste, etc., at all, it’s game over. We have our counterexample. If you are the kind of person who likes to think about things like this, this counterexample should trouble and/or fascinate you.

In similar fashion, I hope that I have already made the case that the mid-level unity of our percepts serves as a different kind of counterexample to the reductionism implied by the physicalist worldview. Similarly, however we may quibble about the boundaries of the thinghood of our percepts, as long as there is even a tiny crumb of unity there, we have our counterexample. This should also trouble and/or fascinate us in a somewhat different way than the redness of red, although they are related.

Finally, in this chapter I have made the case that our perception of time has similarly metaphysical implications. Our direct time perception can’t be dismissed as an illusion in any sense of the term that bails us out of these implications. As with the other two, if even a tiny crumb of our perception of time actually incorporates time itself, we have to do some big thinking about how time works.


Free Will

Qualophiles are often accused by physicalists of trying to sneak God in the back door, or some watered-down version of God, like the soul, or just some notion of the inherent specialness of human beings. While most antiphysicalists do not harbor such hidden agendas, they are sensitive enough to the accusation that they sometimes wrongly neglect branches of inquiry that might seem to lend circumstantial weight to it. One such branch of inquiry is the issue of free will.

Free will is one of philosophy’s most frequently asked questions. I once believed that either the question was incoherent or the answer was no. People do have some powerful intuitions about free will, though, and it is worth trying to clarify and articulate those intuitions if for no other reason than that the question keeps coming up again and again over the millennia.

More to the point of this book, I claim that all the aspects of consciousness that are metaphysically mysterious are also causally efficacious: as I’ve argued, epiphenomenalism is false. All the qualia, and all the strange aspects of them, do things. Qualia influence my cognitive life. If qualia get to push physical things around at all, even within my own brain, do I get to push things around? Surely these two questions have something to do with each other. I haven’t said much about that “I” yet, but is there even wiggle room for anything we would intuitively accept as free will in the universe, assuming we think at this point that the existence of qualia tells us some interesting things about how that universe is put together?

First, we have to figure out what version of free will we are talking about. In philosophical debates, people generally fall into one of three categories when it comes to the question of free will. First, there is free will eliminativism (there is just no such thing as free will). Then there is compatibilism, which says that while, from a scientific point of view, we are effectively deterministic machines, this still allows for any notion of free will worth having. Finally, we have free will libertarianism, which is the whole-hog belief in Free Will (capital F, capital W). Not many people these days admit to being libertarians.

Two Cheers for Compatibilism

No matter what metaphysical commitments you have, you believe in free will. Not in any grand fundamental sense, but in an everyday sense. Ever been on a diet? Ever looked at the apple you brought as an afternoon snack, but couldn’t help thinking about the Snickers bars in the vending machine down the hall in the break room? You know what I’m talking about. The question, then, is what mechanisms implement this.

I am actually pretty sympathetic to compatibilism. No one denies that the mind is very complex, and that there are a good many levels of functional organization between any putatively deterministic molecules bonking around in my neurons and my feeding a sweat-soaked, wrinkled dollar bill into the candy vending machine. If the mundane free will that we experience in the break room turns out to be implemented by a deterministic substrate, way, way down, it would be hubristic to be bothered by this. Who among us can claim to have such a tightly integrated picture of reality across so many levels of organization that it matters to them that their decision to get a second Snickers bar half an hour later and leave the apple to rot is manifested by zillions of deterministic atoms rather than non-deterministic atoms? Given our ignorance of all the connections involved, and the practical impossibility of reverse-engineering you and predicting your actions, if you claim that your sense of will, your sense of yourself, your sense of justice, personal responsibility, etc., are upset by the purported determinism of your atomic substrate, I think you are lying to yourself.

In general, however, the debates around free will concern the full-bore libertarian kind. This is the kind of free will that is philosophically interesting, as opposed to (or in addition to) being psychologically interesting, so hereafter that is what I mean when I speak of free will.

What Even Is Free Will?

As philosophers, we are free to define terms like “free will” any way we choose, but if we stray too far from common usage, our speculations become a purely technical exercise. In order to speak even generally about free will, we should try to answer some questions about how normal people use the term. Is free will a quaint human vanity? Can we frame the notion of free will in such a way that it is even coherent yet still respects our rough intuitions? What would a mind have to be like for it to have free will, and how would it work? What kinds of natural laws would there have to be in a universe for us to be able to say that that universe allowed for any intuitively satisfying notion of free will? Is our universe such a universe? If we philosophers get this wrong, will our justice system crumble, causing society to collapse into barbarism? (On this last question, I am confident that no one—absolutely no one—cares one whit what philosophers think. Do not worry about the social implications of your metaphysics, especially since you can never know how society would interpret it anyway, even if they were to accept it as true.)

We all have some ideas about free will and have probably read about it, but before I get into philosophical speculations, I’d like to highlight some of my own off-the-cuff pretheoretical intuitions. There are certain aspects of free will that I think are baked into our common understanding of the term, and deserve to be highlighted.

Thwarted Free Will Is Still Willful

Free will is often thought of in terms of action, in terms of how I might impose myself upon the world. While this is part of our common intuition about free will, if we are talking about true libertarian free will, and thus looking for some special phenomenon in the universe that underwrites our intuitions, the practical ability to affect the outside world is not necessarily an ingredient. Free will, if it exists at all in this strong, fundamental sense, is an aspect of consciousness, and exists in the proverbial brain in a vat.

That is, if we decide ultimately that full-bore libertarian free will is real, it will be something that I possess even if I am lying completely paralyzed in a hospital bed, as long as my conscious mind is functioning. The kernel of will exists, if it does at all, independent of any ability to impose it upon the world. Lying in the bed, I can allow myself to wallow in self-pity, rage, and despair, or I can decide to spend my time calculating sequences of prime numbers, or I can try to truly forgive everyone who ever wronged me and attain a state of perfect internal peace. These constitute willful decisions, and they are no less willful if I die without ever having recovered the ability to act outwardly upon them, even to the extent of telling anyone else about them.

How Invasive Is the World?

Doesn’t the outside world, through physical causation, play you like a piano? To the extent that you are aware of it, the world has pressed itself upon you, it has forced you to conform to it. How do we strike a balance between being aware of external things, and creatively deciding what to do about them?

A willful agent would have to incorporate external information into itself as part of its field of perception, but could stand back from it as it were, and regard it. Central to the intuitions we have about free will is the claim that an agent gets to survey reality, then decide what it wants to do about it. It gets to be an unmoved mover, an uncaused cause (at least in terms of the self-creation entailed by its acts of creatively conjuring its internal reality). It gets to be aware of things without that awareness constituting an algorithm that it must execute. Free will is built into the concept of descriptive information (as opposed to prescriptive information) and vice versa. I will have a bit more to say about this distinction later.

Free Will Is Inherently Creative

Free will is often characterized in terms of selection among a limited set of options: choose one entree from column A and a side dish from column B. While will sometimes manifests itself as a selection like that, the force behind that selection is an exercise of creative visualization or conceptualization. We envision different outcomes, different futures, different selves, and therein lies the will, even for something as mundane as ordering Chinese food. The fact that the outside world, and our perception and acceptance of the constraints it places on us, shunt that into a limited number of specific choices should not beguile us into thinking of that (eeny-meeny-miny-moe) as the paradigmatic example of free will.

It is creative will that leads an artist to paint a particular painting in a particular way. Most of us have had experiences of this kind at one time or another—being in that creative groove is an essentially willful state of mind. Will is creative in an unbounded, open-ended kind of way. When an ancient ruler decides that when he dies, a man-made mountain should rise from the desert to be his tomb, and that tens of thousands of slaves should work for decades to make that happen, that is a monumentally willful act. Will is about creating the options in the first place as much as it is about choosing among them.

Free Will Is Constitutive of Self and Not Necessarily Non-Deterministic

People often say that we do not have free will if our actions are rigidly determined by the behavior of the parts of which we are made. If all the little parts are just doing what they must according to the laws of physics, there is no way the whole could be doing something above and beyond the sum of the parts—the whole just is the sum of the parts. And if the whole somehow had this thing called free will, and this free will had any causal efficacy whatsoever (like the ability to move my arms or legs, or to make my fingers type), it would be a ghost in the machine: somewhere in my body there is at least one molecule that, under the influence of this purported free will, does something different than whatever it would do if it were not under the influence of this free will. That is, if the molecule (or cell, or muscle fiber) were acting only in accord with the physics that govern such things, it would behave one way, but under the influence of free will, it behaves another way. This would seem to imply that free will (of the whole-influencing-the-parts variety) necessarily violates the laws of physics. But no scientist anywhere has seen any violations of the laws of physics at work in the human body or brain.

Free will is most often contrasted with determinism, but this strikes me as something of a false dichotomy, even for a hard-core free will libertarian. Whatever we end up deciding free will is, and whether or not determinism precludes it, indeterminism does not save it. Famously, quantum mechanics, the most successful scientific theory ever, is non-deterministic, at least the part where the rubber meets the road in terms of empirical measurements. That is, quantum mechanics predicts outcomes of experiments within a statistical range, but there is always a random factor in the prediction of a particular single trial of an experiment. Moreover, this indeterminacy is generally believed not to be a fault of the theory, a gap to be filled in by future scientists, but a fundamental feature of physical reality.

Some people look hopefully to this indeterminacy of quantum mechanics to give free will a toehold in the natural world. There may be something to this, but it is not quantum mechanics’ indeterminacy alone that does the trick. If I am made of my parts, if I just am my parts, then I am in the thrall of their functioning, whether those parts function according to deterministic Newtonian physical principles or indeterministic quantum ones. According to my intuitions of what is meant by free will, it buys me no more free will to believe that somewhere in my brain, my decisions are being made by some electron jumping or not jumping to a higher energy orbit within a certain time (no matter how unpredictable beforehand) than to believe that my entire mind functions predictably, like a clock.

Moreover, while indeterminism does not by itself save free will, I do not believe that determinism by itself necessarily dooms it. If you made 1000 atom-by-atom copies of me, and each one of them acted in exactly the same way when put in the same situation, it is arguable that it would not necessarily threaten any sense of free will that I may have. My decisions may be freely made, even if I would make the same ones in the same circumstances every time. This may seem initially counterintuitive, but at least according to my personal sense of the term, free will does not necessarily mean that I have some random X-factor driving my decisions.

Some of the most willful decisions we make seem somehow inevitable. Daniel Dennett cites Martin Luther, who, upon taking the possibly suicidal (or worse) stance of denouncing some of the practices of the Catholic Church, said, “Here I stand, I can do no other.” Luther’s actions were a deep expression of his character. He could not be the person he was and act otherwise. Given who he was, he was bound to do what he did, yet his was a quintessentially willful act. When you exercise your free will, you are not merely deciding what to do, you are deciding what to be. You creatively envision a future, and a future self, and then you instantiate that future.

This sort of willful determinism is also described quite well by C. S. Lewis (1955) as he recounts the defining moment in his life in which he abandoned his youthful atheism:

I felt myself being, there and then, given a free choice. I could open the door or keep it shut; I could unbuckle the armor or keep it on. Neither choice was presented as a duty; no threat or promise was attached to either; though I knew that to open the door or to take off the corslet meant the incalculable. The choice appeared to be momentous but it was also strangely unemotional. I was moved by no desires or fears. In a sense I was not moved by anything. I chose to open, to unbuckle, to loosen the rein. I say, “I chose,” yet it did not really seem possible to do the opposite. On the other hand, I was aware of no motives. You could argue that I was not a free agent, but I am more inclined to think that this came nearer to being a perfectly free act than most that I have ever done. Necessity may not be the opposite of freedom, and perhaps a man is most free when, instead of producing motives, he could only say, “I am what I do.”

We define ourselves by our choices. We drag our future selves into existence through our will. William James (1952, p. 288) said, “The problem with the man is less what act he shall now choose to do, than what being he shall now resolve to become.”

I think that people feel that determinism threatens free will because it seems to imply that the mind could be accurately modeled by some other system, rendering the will moot. Free will can exist in a world in which the entities having free will act the same way in the same circumstances (i.e. they behave deterministically), but not in a universe in which you could predict that behavior. If my mind is a system that always behaves the same way when it is in state X and given input Y, then any system that could produce that behavior when given input Y in state X for all appropriate behaviors and X’s and Y’s would be able to second-guess all of my decisions with perfect accuracy. Yet such a system, being nothing but the functioning of its parts, would not be exhibiting free will. It would have no greater identity (none that was causally efficacious, anyway) above and beyond those micro-parts. If the system has no free will and it provably behaves exactly as I do, it certainly seems that any supposed free will that I possess doesn’t buy me much.
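The second-guessing worry can be put schematically. Below, a deterministic “mind” is just a fixed state-transition function, and any copy running the same function predicts every act perfectly. This is a toy sketch with invented state and input values, not a claim about real cognition:

```python
def mind(state, inp):
    # A fixed state-transition function: in state X, given input Y, it
    # always produces the same act and the same next state.
    charge = sum(map(ord, inp))           # any deterministic function of the input
    next_state = (state * 31 + charge) % 997
    act = "eat" if (state + charge) % 2 else "abstain"
    return next_state, act

# A second system running the same function is a perfect second-guesser:
# it calls every decision before the "original" makes it.
original = predictor = 42
for inp in ["apple", "snickers", "apple"]:
    original, act = mind(original, inp)
    predictor, predicted = mind(predictor, inp)
    assert act == predicted  # every act predicted with perfect accuracy
```

The philosophical point lives in the assert: nothing the predictor lacks, no holistic extra ingredient, is needed to reproduce the behavior, which is why it is determinism plus reductionism, rather than determinism alone, that does the real damage.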

The threat to free will posed by determinism in such a scenario is not determinism per se, but the fact that it seems to imply that I could be modeled by a system whose behavior is transparently determined by the dynamics of its parts. The problem that determinism poses for free will, then, is that it implies a kind of fundamental, ontological reductionism. However we end up defining me, I may behave the way I do deterministically and still have free will, as long as it is not a reductive determinism, driven exclusively by the functioning of my parts. Conversely, if I am driven strictly by the functioning of my parts, then their being randomized in some way (e.g. with quantum coin-flips) does not save free will.

Free Will Is for Partless Wholes

Ultimately, the status of free will depends not so much on whether or not we live in a deterministic universe as on whether we live in a universe in which strong, ontological reductionism is true. Regardless of the particular laws that describe the low-level entities in any given universe, if all things in that universe are either those simple low-level entities or high-level things that are nothing more than aggregates of the low-level entities, and all the behaviors and properties of the high-level entities fall out as inevitable consequences of the behaviors and properties of the low-level entities, then free will (at least as something possessed by the high-level entities) is an incoherent concept.

The claim of free will ultimately depends upon there being some kind of holism at work in the universe. Specifically, for free will to exist in some agent, that agent must be an intrinsic, inherent individual (i.e. seeing it as one single thing is not just some way of looking at the pile of matter that constitutes it); that whatever nature’s principles of individuation are, it counts as one of nature’s individuals; that it is a partless whole.

Another way of saying this is that for free will to exist, some form of (very) strong emergence must be true. There may be more involved than this, but for there to be free will, this much at least must be true. For something to have free will, it must not be in the thrall of the functioning of its parts, no matter what the operating principles of those parts are, deterministic or indeterministic. Its actions and future state must depend on some holistic, indivisible, qualitative essence.

If the universe does, in fact, exhibit the required type of holism, the principle of parsimony of natural laws must be discarded—we are stuck with an extremely baroque picture of the natural world. In such a world, there would not just be a handful of fundamental things of which everything is made—photons, quarks, electrons, neutrinos, etc.—and a relatively small number of laws that describe the interactions of this handful of fundamental things. We would instead have a huge (infinite?) number of fundamental entities, these entities would be complicated, high-level seeming sorts of things, they might be transient, and each would have its own set of laws.

Do Large Partless Wholes Obey Laws?

This lack of parsimony does not make such a scenario inconsistent or obviously incorrect, however. Imagine that something with free will is an entity whose behavior springs from its own particular nature, such that it generates, manifests, and in fact is its own law, the law of nature that applies only to it. It is an entirely novel thing in the universe, like a new elementary particle. What it does from instant to instant is a surprise to everything else in the universe, including the universe itself. Its behavior, after the fact, could be considered a new law of nature, if one insisted on clinging to that terminology. Furthermore, once the moment is gone, its law will never apply to anything else. In this scenario, the terminology of “things” “obeying” “laws” breaks down and becomes meaningless. If I act the way I do because of the inherent nature of the thing that I am, and what I am will never be repeated, one could say I obey my own custom-made law of nature, of which I am the only instantiation at a particular moment. Or one could not.

Does Anything Obey Laws?

This is really just the degenerate case of any law of nature, in that all such laws are inductively derived. There is a sort of Platonism hiding in the concept of a law of nature. In real life there is no such thing. No electron in the history of the universe has ever obeyed a law, any more than it has collected vintage cars or voted in presidential elections. Balls on ramps and electrons do what they do not because of some law that they all know about, but because that’s what they do. Each electron, without reference to any other electron, and without reference to the way it is supposed to behave, acts like an electron. Each one has somehow memorized, or “knows,” its patterns of behavior. Its behavior is built into each electron individually. The law, such as it is, must be written into the hardwiring of each electron, copied a hundred zillion zillion times over, for as many electrons as there are in the universe. No one is obeying anything. As it turns out, all electrons behave pretty much the same way (for unknown reasons), so we write down a general characterization of that behavior and call it a law, and from then on, we can speak as if all the electrons in the universe “obey the law.”

A law of physics is something we invented, an abstraction, a convenient fiction to help us track the behavior we observe after many trials. It’s really just a colorful idiom, a quirk of linguistic convention, but one that isn’t, strictly speaking, true. In certain kinds of discussions, like the one we are having here, it isn’t harmlessly wrong, either. The whole terminology of “laws of nature” or “laws of physics” strikes me as an Enlightenment-era metaphor with a bit of cultural baggage attached to it, one that we have accepted into our ways of talking and thinking. It reminds me of the Victorian Rudyard Kiplingesque statement that the lion is the “king of the beasts.” I can see why someone of a certain era might phrase it that way, transposing a familiar hierarchical political order onto the natural world, in which no one preys upon a lion, but that’s not really the way ecosystems work. Calling a lion the king of the beasts, like calling electron behavior a law of nature, says more about the mindset of the speaker than it does about lions or electrons.

For almost all practical purposes, we can continue to speak and think as if laws of physics were real, and stuff obeys them. You can be a perfectly successful engineer, or even a particle physicist at CERN, and never have a problem with this. But for a philosopher thinking about why the universe works the way it does, and how it works at the lowest levels, if someone asks why the electron behaves the way it does, the best answer is the most humble one: the agnostic shrug. No one knows. We can describe it pretty well, but we have no idea why it does that.

How Do the Big Things Interact with the Small Things?

What about any “laws” that apply to unique, high-level individuals? If we only have one data point, and always will only have that one data point, it really becomes a matter of preference as to whether to call the behavior of such an individual a law or random behavior. Any unique one-off “laws” that apply to the high-level entities would necessarily be forever unknowable to any outside observer. If we were to look at such an entity from the outside, its behavior would have to appear to have a random factor in it. Any system of laws applying to a universe with such things in it would characterize the regularities of the simple, low-level things as well as it could, and simply throw up its hands when it ran into the behavior of the high-level entities, labeling it as “random.”

We would have a sort of dualism then, but it would be an epistemic dualism, not an ontological dualism. There would be only one universe with one kind of stuff in it, but there would be a division between that which we could characterize completely in third-person terms, and that which would be forever closed off to our laws and theories. In short, in such a picture of the world, given the characterizations (a) I act randomly, (b) I act out of free will, an expression of my inherent nature, or (c) I act deterministically, obeying my unique law, it is perfectly valid and consistent to say (d) all of the above.

In practical terms, if our world is really like this, we could not model my behavior with a machine, because the “laws” that determine my operation are unique to me at each instant (the “me” at each instant being different, each with its own law(s)), and undiscoverable without being me. And even if, by some chance, a machine could model my behavior perfectly for a time—say, ten hours—there would be, in principle, no way to be sure that it would continue to do so for even one second more.

Oddly, such a view is actually a form of physicalism, in that it posits a physical basis for consciousness and free will, although one that is quite different from that which most physicalists suppose is true. Even if there are these fundamental macro-entities with their own one-off laws, there are still the micro-entities like electrons and photons and their more generally applicable laws. Any claims we could make about the macro-entities and the ways in which they behave must not violate the more commonly known basic physical laws that describe the behavior of the micro-entities.

Given that, whenever we look at the world, all we see is the micro-entities, and their behavior seems pretty unmysteriously described by the physical laws that apply to such entities, is there any wiggle room for these purported unique macro-entities to do anything? Where are they hiding? This is another version of the classic physicalist challenge regarding the causal closure of the physical world: the dynamics of the world and everything in it are completely nailed down once we nail down the dynamics of the low-level micro-stuff (the physics). However, this is not as true as it appears.

As it turns out, modern physics does characterize the behavior of the fundamental constituents of matter in the way I have said a free will-supporting universe would have to work: we know roughly how things will behave, but there is always an irreducible random factor. Quantum mechanics tells us that the outcomes of empirical tests we run on low-level entities are described by statistical laws only. The exact behavior of the micro-entities is thus not exhaustively and unmysteriously described by laws—there is an irreducible “random” factor. There is, therefore, some wiggle room for consciousness (or, if you prefer, qualitative macro physical entities) to be causally efficacious, to exert some extra influence on material things in the universe without violating any known laws. In effect, consciousness exhibiting free will would be a “hidden variable” in a correct physical theory, according to this hypothesis. Crucially, quantum mechanics also gives us examples in the real world of these indeterminate entities scaling upward from the level of the single subatomic particle.

Who (or What) Would Possess Free Will?

We already have examples of truly qualitative consciousness, and this consciousness can constitute a big, complex, indivisible whole. Moreover, this consciousness is efficacious. If you buy all of this, is there any room for there not to be a robust, libertarian free will in our world? I think it is pretty clear by now that I don’t think so. But what can we say about how to individuate whatever it is that has this free will? We have a toehold, perhaps, but not much more. Saying that there may be a basis for something resembling our intuitive notion of free will in the universe falls a bit short of saying, “I, myself, possess free will.” But maybe that weaker claim is all we get.

How Many? How Long-Lived?

Let us imagine that the consciousness that has free will is a short-term thing, more of a moment of consciousness than a constant, cradle-to-grave kind of consciousness (see Galen Strawson (1997) for a good article about why this is a plausible, and perhaps the most plausible, way to talk about the self, or his longer work (2009)). We should also be careful about any assumptions about the number of consciousnesses that comprise me at any moment, in addition to how many there are across time. It may turn out that “me” is made of a conglomeration of lots of consciousnesses or moments of consciousness. There could be a fundamental sense in which consciousness is real, and possesses free will, and nevertheless the persistent unified self could end up being a useful fiction, at least as we conceive it. I will come back to this in a little more depth later.


How Panpsychism Might Work

Particle man, particle man
Doing the things a particle can
What’s he like? It’s not important
Particle man

Is he a dot, or is he a speck?
When he’s underwater does he get wet?
Or does the water get him instead?
Nobody knows
Particle man
—They Might Be Giants

I and others have argued for the inherent incompleteness of sets of physical laws as descriptions of reality—such ladders of categorization of reality will always be missing the bottom rung. Moreover, we are confronted with a phenomenon, consciousness, that does not seem to have a natural home in the world that physics describes. I have also argued that so-called levels of organization buy us exactly nothing in terms of explaining consciousness: all “higher-level” aggregations or black boxes do for us is allow us to think of masses of low-level parts more effectively and conveniently, given our limitations. No explanatory power is given or taken away by thinking of the lower levels chunked up in one way or another.

We panpsychists tend to be a bit vague about the exact nature of the unimaginably simple spark or crumb of something qualitative that exists in an electron, and manifests or instantiates the outward electron behavior that our physics describes so accurately. This vagueness is appropriate! It’s something that somehow scales up to our human-scale consciousness, but might be intuitively far removed from it, just as a couple of hydrogen atoms and an oxygen atom don’t, at first blush, seem like a likely basis for an ocean. We just claim that there has to be some kind of foothold for such scaling, and that there is no such foothold in orthodox physicalism.

I like the way Luke Roelofs (2019) put it, as he mused about what it might be like to be a fundamental particle:

…maybe the basic motivation is not pleasure or displeasure but a sort of blind love and desire for union with the world that is inarticulately perceived, or a sort of “tension” that is more basic than either pleasure or displeasure. We are not in a good position to decide among these alternatives: positively determining the experiences of the fundamental physical entities is probably beyond current human ability, perhaps requiring a near-completed physics, a near-completed introspective phenomenology, and a near-completed neuroscience (maybe augmented with superhuman powers of reasoning, introspection, and imagination). My goal here is not to decide what microexperience is really like, but to show that there is room for it to be both simple enough to be compatible with our present physics and rich enough that, as long as combination is possible, it provides a sufficient explanatory basis for human experience.

The Combination Problem

“As long as combination is possible”: there’s the rub. Let’s imagine for a moment that the panpsychists are right, and that some kind of crumb of proto-consciousness must exist down at the lowest levels of reality, along with mass, charge, and spin. (Although it is probably better to say that this crumb of proto-consciousness underlies or instantiates those other physical properties, or that it implements them willfully.) So how does panpsychism get around the famous combination problem? Even if, at the lowest levels of physics, quarks are conscious, and quark behavior is implemented by quark consciousness, each quark is still a billiard ball as far as the other quarks, etc., are concerned, blindly knocking into other particles, interacting only by causal, functional dynamics, and we are back in the world of physics.

Some critics of panpsychism seem to think that this is a show stopper. Given the causal closure of the physical world as described by our science, any quark consciousness is confined to the quark level, and any scaling or integration into larger entities can only happen by virtue of good old extrinsic functional dynamics. This makes panpsychism the worst of both worlds: you posit something crazy like quark consciousness, and it doesn’t even help you explain human consciousness! If the human-scale consciousness comes about by virtue of good old physical interaction, it would exist even if it were implemented by some other substrate than this purported quark consciousness, in which case the quark consciousness is epiphenomenal, and we know what William of Occam would say about that.

The combination problem is not quite the show stopper that it is made out to be, however. It is more of an unknown than a can’t-possibly-work. Somehow consciousness scales, but we don’t know nature’s scaling principles. What counts as a thing in this regard? Is a spoon conscious? A pile of sand? The air in this room? Does proximity of particles matter? These are open questions, and ones that a panpsychist does not necessarily have to answer just yet.

That said, this is an area in which even my fellow panpsychists get a bit hand-wavy. My sense is that they don’t want to bite the bullet. It’s one thing to come out as a panpsychist, but another to take that extra step (let’s not get too crazy here). So the bullet I chew on is this: yes, there must be (proto?) consciousness down at the fundamental levels of physical reality. In addition, consciousness, as such, must scale from the quark level to full-blown, cognitively rich, multi-modal human consciousness, in a way that is not [merely] described by the functional dynamics of causal bonking, networks of micro-parts obeying simple laws, however big and complicated those networks may be.

In fact, any big, human-scale consciousness must not be composed of micro-consciousness at all. It may be so composed in time, that is to say, causally composed by little consciousnesses (and may also be destined to fragment into little consciousnesses in the future), but it itself, as such, cannot be constitutively composed of little consciousnesses while it exists. Human-scale consciousness is, must be, what it is, indivisible. Note that even the formulation of the combination problem begs an important question. To ask how, say, tiny quark consciousness scales up to big human consciousness is to assume that human consciousness is made of quark consciousness the way a wall is made of bricks. It assumes that the tiny is fundamental, and the big is derived (that’s not a human mind, that’s just a bunch of proto-consciousnesses arranged in a human-mind-like fashion!). In some sense at least, the big consciousness must also be fundamental.

Finally, on top of all that, this big consciousness is causally efficacious.

I want to be explicit about this—I’m going up against the oft-cited causal closure of the physical world. I speculate that there are certain kinds of systems that allow for some kind of scaling: true, really-there, inherent integration or conglomeration. These systems might be rare, but natural selection, in all its creativity, has stumbled upon and exploited this principle in brains. But yes—we should expect to see some configuration of molecules actually do something because of this consciousness that is not predicted by normal physics (although perhaps in a way that does not exactly contradict normal physics).

Particle Man

I think it would be cool to write a comic book in which the protagonist is a superhero: six foot four, barrel-chested, cape, square jaw, steely gaze. He can do anything he wants, unbound by the laws of physics, because he is an elementary particle. A big one. If you get close to him, you see that he is not made of cells, which are in turn made of molecules. He is just one big indivisible thing. He is an example of what William Seager calls a “large simple,” or what Galen Strawson (2009, p. 380) calls a “complex absolute unity.” You’ve got your electrons, you’ve got your quarks, you’ve got your photons, and you’ve got Particle Man.

There has never been a Particle Man before, and there never will be again, so there is no existing body of laws that apply to him. Every moment of his existence, whatever he does is automatically a new law of nature, albeit a uselessly inapplicable one—the law would apply to the one moment of its coinage, and no other. There are no existing laws that define the mass of a Particle Man, so he gets to decide what his mass is at any moment. Same with his shape and size, his interactions with other stuff, his trajectory through space-time. He can collapse himself into a singularity, or he can spread himself as a fine matter-mist throughout the cosmos. He can wink in and out of existence.

Whatever he does at any time is just what Particle Man does at that time, and as such is a law of nature with all the rock-solid authority of Ohm’s law. In fact, it is only whimsy on his part that he chooses to assume human form at all. It might be a pretty boring comic book, come to think of it—Particle Man would have godlike powers, far more than those of Superman (although the Green Lantern, in some interpretations, has approached this level of power). This is also a bit like the idea in The Fifth Element (1997), in which the titular element is essentially a human, but one who stands alongside the other four Aristotelian elements (earth, fire, air, and water) as a fundamental, irreducible component of reality.

Surely, though, nature, which has been so tidy and parsimonious with its elementary particles and laws up to now, would not create something so extravagant as Particle Man, casting off new laws willy-nilly, every instant of every day! Possibly, but our preference for neat, tidy, elegant systems, and our certainty that everything big and tricky must be made of things that are small and simple, are not binding on nature. Indeed, our preferences along these lines have guided us to astounding achievements over the years, but have also left us with some blind spots.

Now, I do not think that Particle Man exists, at least as a whole man-sized solid thing. Rather, I suspect that a given unitary moment of consciousness consists of a Particle Man-like blob of—something. Consciousness just can’t exist in a world made of pure physics as we understand it (if a world made of “pure physics” is even a coherent idea). Panpsychism must be true. But a conservative panpsychism that posits (proto) consciousness at the lowest level and keeps it there is stopped dead by the combination problem. We are forced to go out on a limb and speculate as to how consciousness—as such, not just as an “emergent” property of functional dynamics—scales up beyond the individual physical particles.

Quantum Holism and Chaos

Quantum mechanics tells us about different systems of small particles coming together into such blobs that, although born of complexes of smaller things, and destined to fall apart into subcomponents in the future, are, must be, one single thing as far as nature is concerned. Quantum mechanics shows us real emergence, so-called strong or radical emergence in action, not just the may-be-seen-as emergence of the flock “emerging” from the birds, or the liquidity “emerging” from the motion of the molecules of water. To be useful to a full-blooded panpsychism, it doesn’t have to be a whole person, and it doesn’t have to be very long-lived—it just needs to be enough of a new thing to be qualitatively unique and causally efficacious on some macro scale.

Chaos theory (Gleick, 1987) tells us that the universe is chock full of situations in which neighbors do not map to neighbors, in which tiny differences make huge differences—the familiar Butterfly Effect. The panpsychist’s qualitative blob would not have to be very big at all to make the kind of difference we need it to make in order to push the brain around. It strikes me that, in our chaotic universe, a neuron would be a fine place to look for microscopic changes having macroscopic effects.
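Sensitive dependence on initial conditions is easy to demonstrate concretely. The sketch below is my own toy illustration (not anything from Gleick): it iterates the logistic map in its chaotic regime from two starting points that differ by one part in ten billion, and within a few dozen steps the two trajectories bear no resemblance to each other.

```python
# Toy demonstration of the Butterfly Effect: sensitive dependence on
# initial conditions. The logistic map x -> r*x*(1-x) with r = 4.0 is
# fully chaotic, so a tiny initial difference roughly doubles each step.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.2)
b = logistic_trajectory(0.2 + 1e-10)  # differs by one part in ten billion

diffs = [abs(x - y) for x, y in zip(a, b)]
print(f"difference at step 5: {diffs[5]:.2e}")   # still microscopic
print(f"largest difference:   {max(diffs):.2f}") # macroscopic
```

A difference far smaller than anything we could ever measure becomes, after a short run, a difference as large as the system allows. If a moment of consciousness only needs to nudge a neuron microscopically, this is the kind of amplification that could turn the nudge into behavior.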

Holism and Its Discontents

It is now generally accepted that some 66 million years ago an asteroid struck the Earth, and this impact resulted in the death of the dinosaurs. I remember learning when I was young, however, that when this hypothesis was first put forward, it met a lot of resistance. Apparently, the mainstream geologists took a while to come around, even in the face of almost overwhelming physical evidence of such an asteroid impact. I have since read that at the end of the 19th century, even among the Harvard faculty, there were still some old geologists who believed in the literal truth of the Christian Bible. These researchers looked for evidence of Noah’s flood in the fossil record, for instance.

As the biblical literalists died out or retired, their views became an embarrassment to later generations of geologists. So much so, that even in the 1960s, long after any living geologist could claim to have ever met a colleague holding these views, the world of academic geology held onto a reflexive aversion to any explanations that seemed Bible-adjacent, i.e. that involved single-day cataclysms. There would be no smitings, Great Floods, or destruction raining down from the sky in any of our theories, thank you very much. This more modern-minded cohort knew the kinds of theories they thought were nutty, but none of them had any individual memory of why they thought they were nutty.

We are all educated people living in a scientific age. There is a certain way of thinking about the world that comes naturally to us, since it has been drummed into our heads from elementary school onward. We are so used to it now that we don’t appreciate what a leap it was for us at one time, both culturally and individually. Children and primitive societies are natural animists, and anthropomorphizers. Cyclones are angry. The Earth is a loving mother. Even rocks possess a certain stoic wisdom.

Eventually, after thousands of years, and led by a few singular geniuses, we discovered a new way of thinking. This new reductive habit of thought consists of approaching every big, complicated thing as an aggregate of small, simple things that behave in consistent lawlike ways. Final causation was out, efficient causation was in. Since the time of Galileo and Newton, this way of looking at the world has been spectacularly successful within its proper domain, and has led to what is legitimately called the Enlightenment and the Scientific Revolution (the internet, vaccines, people walking on the Moon, microwave ovens).

Here we are, a few hundred years later, and that revolution is still charging along, and we are all taught this way of thinking, whether we major in physics or not. We accept scientism as the default way of seeing the world. It is hard for us to imagine (or perhaps to remember culturally) just how hard-won and counterintuitive the new reductive ways of thinking are. Training ourselves to think like this was slow and difficult at one time. We have mastered it wonderfully, but it has left us with a residual knee-jerk reaction against anything that smells even faintly like the discredited old holism or anthropomorphism. Such ideas strike us as unseemly and embarrassing. Like the geologists in the 1960s, our self-imposed mental training has left us with a blind spot. Even if holism were staring us in the face, we would refuse to see it. There’s nothing like the zeal of a convert.



So far, I have been talking in a grand, sweeping way about fundamentals: basic physical principles, or even metaphysical principles, that would, possibly, form the underlying basis for the kind of universe in which we might find a phenomenon such as consciousness. I have argued that, among these fundamental principles, there must be some way of accounting for scaling up, other than (mere) levels of functional organization. That is, things can be big and complex in some holistic fashion, and not just by virtue of thinking of them as clumped into black boxes interacting over communications channels. I have gestured hopefully at quantum mechanics as occupying the place in our ontology where we should be looking (i.e. at a very, very low level), as well as exhibiting the kind of (really-there, not just may-be-seen-as) big-but-unified holistic scaling we need. Moreover, quantum indeterminacy allows us, if we squint, some wiggle room out of the causal closure of the physical world.

Can we bring this all down to earth a little bit? I am not a neuroscientist. You will not hear much about ion channels or receptors or tubulin microtubules from me, even though I am betting on some breakthrough or insight in that space at some point. Nevertheless, it is reasonable to want some kind of rough model of human cognition that incorporates Churchland’s “pixie dust in the synapses,” even if we don’t get all the way down into the wetware itself. How might the mind actually work?

Daniel Dennett

A good starting point is an initially counterintuitive but ultimately compelling conception articulated by Daniel Dennett. Dennett is the self-proclaimed captain of the “A” team, the king of the reductive materialists (he has declared David Chalmers the captain of the “B” team). His manifesto, 1991’s Consciousness Explained, is an absolute must-read for anyone interested in this field. It is extremely clearly written, persuasive, and loaded with style, a dry wit, and fascinating facts and findings relating to the study of the human mind. One simply cannot discuss philosophy of mind in any useful way without having some response to Daniel Dennett and his arguments. That said, one must occasionally rise above his characterization of his opponents as fearful, reactionary, silly people clinging to their vanities about the human soul.

It should come as no surprise at this point that I think Dennett is wrong, at least in some of his conclusions. It may come as something of a surprise, however, in this sharply divided field of inquiry, that I think that nearly all of what Dennett says in Consciousness Explained is right.

Dennett has no use for qualophiles like myself (this is the part I disagree with). But the vast bulk of the book is concerned not with arguments against qualia themselves, but against the idea that there is some Central Executive in the mind, some special module (either anatomically or functionally defined) that constitutes “my consciousness,” such that sensory inputs are distinctly pre-conscious on one side of it, and memories or motor outputs are distinctly post-conscious on the other side of it. Specifically, he takes aim at a naive conception of what is going on when “I” am conscious of “my percepts.”

The Infinite Regress of the Cartesian Theater

The most certain truth in the world is Descartes’ “I think, therefore I am.” Descartes was so sure of the existence of some kind of essential self that Dennett coined the term “Cartesian Theater” to describe the sense that we all have of being the audience enjoying the rich play of our experiences. The theater metaphor comes naturally to us. It sure seems as though there is a show going on, and it is plausible that there are lots of maintenance functions and subprocessing that our minds take care of “backstage.”

It is natural to carry the subject/object distinction from the real external world into our skulls. We tend to believe in an enduring self, independent of our individual percepts. Sometimes this purported “self” in our mind, the one sitting in the audience of the Cartesian Theater watching our thoughts and percepts, is referred to as a homunculus. This is not necessarily to imply that most of us believe that the self or homunculus is an identifiable region of the brain like the pineal gland, just that, at some level of organization, we naturally assume that there is a self that is separate from the stuff that self experiences, remembers, thinks about, etc.

For there to be a Cartesian Theater with a homunculus in the audience, information must come in from our sense organs (or from “outside” ourselves in any event, allowing for brain-in-a-vat type cases), thoughts must be generated and presented in some fashion to the homunculus, who experiences them. The homunculus, then, has the same Hard Problem relative to this presentation that we do relative to our sense organs. Any distinction we can draw between the homunculus and the percepts, any line between some receptors (however functionally construed) on the homunculus and those aspects of the percepts that these receptors are sensitive to, serves to push the whole problem down one more level, but doesn’t solve it. We still have a problem of how the stimuli impinging on the homunculus come together in its “mind” to form the rich qualitative field of consciousness that it has. Perhaps it has a homunculus in its mind too, watching its Cartesian Theater, and so on, ad infinitum. Dennett points out that under pain of infinite regress, there can be no homunculus in the audience of the Cartesian Theater separate from whatever is going on onstage.

Dennett’s Pandemonium

Dennett has an alternative to the Cartesian Theater metaphor, partly inspired by Marvin Minsky’s Society of Mind idea (Minsky 1985). He proposes what he calls the Multiple Drafts model, according to which there are lots of modules (or agents, or, more colorfully, demons), lots of versions or portions of versions of sensory inputs, and it never exactly comes together in any one place or at any one time in the brain to constitute “my field of consciousness right now.” Dennett often describes the mind as more of a pandemonium (literally, “demons all over”) than a bureaucracy or hierarchy. I’ll let him take it from here (Dennett 1991):

There is no single, definitive “stream of consciousness” because there is no central headquarters, no Cartesian Theater where “it all comes together” for the perusal of the Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of “narrative” play short-lived roles in the modulation of current activity but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain. The seriality of this machine (its “von Neumannesque” character) is not a “hard-wired” design feature, but rather the upshot of a succession of coalitions of these specialists.

The basic specialists are part of our animal heritage. They were not developed to perform peculiarly human actions, such as reading and writing, but ducking, predator-avoiding, face-recognizing, grasping, throwing, berry-picking, and other essential tasks. They are often opportunistically enlisted in new roles, for which their native talents more or less suit them. The result is not bedlam only because the trends that are imposed on all this activity are themselves the product of design. Some of this design is innate, shared with other animals. But it is augmented, and sometimes even overwhelmed in importance, by microhabits of thought that are developed in the individual, partly idiosyncratic results of self-exploration and partly the predesigned gifts of culture. Thousands of memes, mostly borne by language, but also by wordless “images” and other data structures, take up residence in an individual brain, shaping its tendencies and thereby turning it into a mind.
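For the programmers in the audience, the architecture Dennett describes can be caricatured in a few lines of code. The sketch below is entirely my own toy construction (the demon names, stimuli, and confidence numbers are invented for illustration; nothing here comes from Dennett): a handful of specialist “demons” each produce a draft with a bid, and whichever draft momentarily wins the competition gets promoted to the role of report, with no central executive anywhere in the loop.

```python
import random

# Each "demon" is a specialist: given a stimulus, it offers a draft
# fragment of narrative along with a bid (its confidence). All names
# and numbers here are invented for illustration.
def face_demon(stimulus):
    return ("I see a face", 0.9 if "eyes" in stimulus else 0.1)

def threat_demon(stimulus):
    return ("Duck!", 0.95 if "fast motion" in stimulus else 0.05)

def chatterbox_demon(stimulus):
    # The narrative-spinner always has something to say.
    return ("I was just thinking about lunch", 0.3)

DEMONS = [face_demon, threat_demon, chatterbox_demon]

def report(stimulus, noise=0.04):
    # All demons draft in parallel; there is no central headquarters,
    # just a momentary competition. A little noise keeps the outcome
    # from being a rigid hierarchy.
    drafts = []
    for demon in DEMONS:
        text, bid = demon(stimulus)
        drafts.append((bid + random.uniform(-noise, noise), text))
    # The winning draft gets "promoted" to a further functional role
    # (here, verbal report); the losing drafts simply fade away.
    return max(drafts)[1]

print(report({"eyes", "nose"}))  # the face demon wins this round
print(report({"fast motion"}))   # the threat demon wins this round
```

The point of the toy is only this: the report comes from whichever draft happens to win at that moment. Nowhere in the program is there a privileged “self” that all the drafts are presented to.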

The Center of Narrative Gravity

According to Dennett’s hypothesis, among the specialized modules in the brain there is a verbalizer, a narrative-spinner (some people call this module or something like it the monkey mind; I think of it as the chatterbox). The chatterbox produces words, and words are very potent or sticky tags in memory. They are not merely easy to grab hold of, they are downright magnetic. They are velcro. The output of this particular module seduces the whole system into thinking that what it does, its narrative, is “what I was thinking” or “what I was experiencing” because when we wonder what we were experiencing or thinking, it leaps to answer. The reports of this chatterbox constitute what we think of as the “self.” Dennett says we spin a self as automatically as spiders spin webs or beavers build dams. This very property makes this chatterbox powerful, and gives its narrative strong influence in guiding future action, thought, and experience, but it is a mistake to therefore declare it to be the Central Executive.

Dennett likes to say that what we call the “self” is really just a “center of narrative gravity,” and, as such, merely a useful fiction. In the same way, an automobile engine may have a center of gravity, and that center of gravity may move around within the engine as it runs. The center of gravity of the engine is perfectly real in some sense—one could locate it as precisely as one wanted to—but in another sense it does not really exist. It performs no work. It is what I might call a may-be-seen-as kind of thing, not a really-there kind of thing. Dennett thinks that the self is the center of narrative gravity in exactly this sense.

What Pandemonium Gets Wrong or Leaves Out

I tend to agree with Dennett that thought is a lot less linear and single-threaded than we think it is, and that there are a lot of competing/cooperating specialist modules at work. His evocative term “demon” actually hearkens back to the idea of concurrent computer processes, as certain types of these are called “daemons” in the UNIX and Linux operating systems. While there is a lot to like about this conception of the mind from a cognitive point of view, it nevertheless leaves many questions unanswered, even in its own terms.

The remainder of this chapter is necessarily a bit speculative, and not just in the metaphysical sense. I’d like to poke a bit at this Multiple Drafts/Pandemonium idea, riffing a bit, perhaps not to the extent of filling in its gaps definitively, but to raise some questions that it forces upon us. This chapter is not about biting bullets, really, in that I am not asking you to entertain anything much more outlandish than Dennett himself does (well, not too much more, at any rate).

How Many Demons Are There?

Dennett and Minsky both like the idea of non-conscious, simple, specialized units whose collective and self-organizing behavior produces a convincing simulation of what we might call a conscious self. I suspect, however, that Dennett imagines his demons as rather more crisply defined than he explicitly argues for. If the whole Pandemonium idea is true at all, I wonder if the demons are a bit more analog and blobby than Dennett seems to think.

Moreover, the Pandemonium image forces us to grapple with a lot of mereological questions that Dennett does not dwell on: parts vs. wholes. How do we carve the mind at the joints? How can we individuate parts, as one might the parts of a car engine? An identifiable part contributes to the whole, but in such a way that you could swap it out, perhaps with another part that did the same job but was made of different stuff, or had a different design. Dennett doesn’t like the idea of the mind as a monolithic whole, so he posits demons as clearly delineated subunits. Even he, however, hints at some ambiguity as to the enumerability of his subunits when he invokes “a succession of coalitions” of demons. There is some sense that there may be a changing and perhaps indeterminate number of parts doing their work at any given moment.

Voltron

I accept that there could be lots of demons in my mind, perhaps demons that entirely make up my mind. It is certainly a great metaphor. But it stays at the level of evocative metaphor, along with that of the raucous parliament (remember those coalitions), until we specify it a bit more. We still need to ask how these demons might not be like individual, you know, demons, or anything subject to the constraints that actual biological beings might live and die under.

How many demons are there? What delineates a demon? How do new ones come into existence, and do they ever die? To what extent do they compete, and what happens to the losers? To what extent do they cooperate? Some people use terminology that suggests that the mind is a “Darwinian memosphere.” Does natural selection work on demons in the same way that it does on species? Can demons mate and produce offspring? Or can they simply merge together like Voltron? Do individual demons change over time, or adapt?

After allying themselves to accomplish something, do Dennett’s coalitions fall apart into exactly the same set of individual demons that went into them, or does the Voltron/coalition demon get to stick around, added to the menagerie along with its component demons? Perhaps a demon can sort of will itself to have new powers, unlike biological beings. Maybe they reproduce in a way that is more like mitosis than sexual reproduction. To what extent do patterns of relations between demons harden and become new demons themselves?

What do demons want? To the extent that they compete, what resources are they competing for? How, if at all, do they stack, or nest, or apply themselves to one another? What are the channels of interaction between them? Do they all have mutual visibility, as if they are sitting in a huge stadium watching each other? Does any one demon, or coherent coalition, hold the floor at any given time? It may well be the case that the channels of communication are not instantaneous, and not all global, and that the longer a given signal sticks around, the more broadly it gets propagated. Are the signals themselves between the demons just more demons? If not, how should we think about those communications channels? Or is it really more like a jungle, in which demons happen across one another from time to time, interacting sporadically?

Darwinian Memosphere of Demons

I like the idea of active demons, even if most of them live in the shadows most of the time, rather than the mind as some kind of executive presiding over a mountain of static data. The mind is clearly great at parallel processing, but even that understates the situation, I think (as does Dennett).

When we are given a fact, say, that contradicts something we know, even somewhat indirectly, it is remarkable how quickly we notice. If we learn something new and surprising about cars, it is hardly plausible that we serially run through all the thoughts and memories involving cars (individual cars as well as general car knowledge) in our minds and adjust each of them accordingly. Facts, memories, are recalled as needed, as if by magic. It’s as though an old fact jumps up, as if offended, to take on the newcomer. Old thoughts are less like dead data waiting to be accessed, searched, sorted, or applied, than like little sparks of mind themselves, capable of asserting themselves.

I suspect that demons can jump on and apply themselves to any detail of a new or developing stimulus (thought or percept) that catches their fancy. In this way, they get to flesh out the “focused-upon” detail more fully. However, over-eager demons get smacked down. Demons can jump on the stage, applying themselves whenever they want, but there is a cost. If they are just spamming, applying themselves when they have nothing to contribute, they may strengthen a counter-demon response and get extra tuned out in the future (more on this in a moment), or they may get corrupted or diluted somehow. In order to survive intact over the long term, demons must tiptoe through the minefield of existing demons without stepping on anyone else’s tail or hoof.
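
Since I have been leaning on computing imagery anyway, here is a toy sketch of this pay-to-play idea in Python. Everything here is invented for illustration: the class names, the trigger-matching rule, and the particular reward and penalty multipliers are my assumptions, not anything Dennett (or anyone else) has proposed. The point is only that “getting extra tuned out” can be modeled as a demon’s influence being damped each time it fires without contributing.

```python
# Toy sketch of "demons pay for inappropriate activation."
# A demon fires on any stimulus containing its trigger; its influence
# (weight) grows when it fires usefully and is halved when it spams.
# All mechanics and numbers are illustrative assumptions.

class Demon:
    def __init__(self, name, trigger):
        self.name = name
        self.trigger = trigger
        self.weight = 1.0          # current influence

    def wants_to_fire(self, stimulus):
        return self.trigger in stimulus

    def reward(self):
        self.weight *= 1.2         # useful contribution: influence grows

    def penalize(self):
        self.weight *= 0.5         # spamming: get "extra tuned out"

def process(stimulus, relevant_topics, demons):
    """Let every matching demon fire, then settle accounts."""
    speakers = [d for d in demons if d.wants_to_fire(stimulus)]
    for d in speakers:
        if d.trigger in relevant_topics:
            d.reward()
        else:
            d.penalize()
    # The loudest surviving voices shape the resulting thought.
    speakers.sort(key=lambda d: d.weight, reverse=True)
    return [d.name for d in speakers]

demons = [Demon("house-demon", "house"),
          Demon("song-demon", "song"),
          Demon("spam-demon", "")]   # empty trigger: fires on everything

# The spam demon fires on every stimulus but never contributes, so its
# weight withers while the relevant demons' weights grow.
for _ in range(3):
    process("a song about a house", {"house", "song"}, demons)

print({d.name: round(d.weight, 3) for d in demons})
```

After a few rounds the chronically inappropriate demon has not gone away, but its voice is so attenuated that it can barely impinge on anything, which is the flavor of “tuning out” I have in mind.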

Broad and Narrow Niches

I speculate that different demons have different niches in the memosphere. Some are swaggering alphas, which apply themselves broadly and promiscuously to whatever processing needs doing, while some are rarely seen, and just stay in their tiny niche, with very specific criteria for activation. According to this notion, a swaggering alpha’s identity may be so smeared out and indeterminate that it hardly has an identity left, just the barest shape of one, a tone or coloring it can impart. (The notions of causation or object permanence might be such demons.)

At the other end of the spectrum, meanwhile, the den-dwelling, seldom-seen demons get to keep their specificity in sharp detail (like specific episodic memories, or particular skills). Perhaps the alphas are more appropriately seen as eager beavers, willing to trade quality and specificity for sheer quantity and frequency, whereas the den-dwellers make the opposite call. As in nature, different demons employ different strategies and make different evolutionary tradeoffs, until just about every conceivable niche is filled.

Just as it may weaken or corrupt a demon to apply itself overly broadly, demons may be similarly insulted by allowing other, incompatible demons to contribute to a developing thought. There may be, for instance, a demon that enforces or embodies what we think of as a valid chain of logical inference, and it will not tolerate the activation of another demon that violates its criteria for valid inference. To allow such a thing would be to make it less likely that the valid-inference demon would be allowed to apply itself in the future. In this way, demons collectively constitute rules or constraints on each other. A truth I am certain of, or perhaps a symbol I know how to interpret, may be a rock-solid demon that will simply always win competitions with other demons.

What sorts of current thoughts create a hospitable niche for subsequent thoughts? I suspect that the answer is far from deterministic, or, rather, that it is chaotic: you never know what details or seemingly unimportant aspects of a thought or percept will grab hold of a demon’s fancy and take you in a whole new direction. In particular, it is not necessarily the overall big idea or perceived direction a current thought is going in that subsequent thoughts hook into, but those seemingly unimportant details, even if most of them wind up being dead ends that do not go anywhere interesting. Moreover, I think that demons do not necessarily have a preference for high-level deployment as opposed to low-level filling-out of the detail of some thought or percept—they just like a good fit.

So: the demons that are maximally compatible with all the existing demons are allowed to apply themselves relatively unchallenged. Each new moment of consciousness is new and unique, however. So it is not the case that old demons simply get to relive their glory days in the spotlight. More likely they get to inform the creation of a new demon—they get to be the primary parent, or chief architect. Each incumbent demon is like a craftsman, or a specialized muscle that shapes a new demon. When I drive down my street and see an object in my field of vision that ultimately resolves to “house,” it is probably not the case that my “house” demon simply grabs the spotlight; more likely it helps spawn a new moment of consciousness, a new, yet distinctly housey, demon.

Some demons are more like specific memories; others are more like general facts or strategies. Some are more algorithmic/prescriptive, and others more data/descriptive, on a sliding scale. Each has a bit of “what is it like?” and each has a bit of “what does it do?” Indeed, it is hard to separate the two aspects. The Pandemonium image blurs the distinction between immediate sensations and memory, which to my way of thinking is one of its virtues. Memory is smarter and more active than is generally supposed. Memories are not in cold storage, off in a filing cabinet, but right in your mind now, pressing on your consciousness.

We often speak of our minds containing models: models of reality, models of self, models of my cat, etc. What sense can we make of such talk if our minds are constituted by demons? Are there models at all, if each new moment of consciousness is whipped up on the fly dynamically? I feel comfortable saying yes. Any model is a black box with an interface. You ask certain questions in the right way, and the model gives you consistent answers. A model may be implemented by a static table of bits or a database, with a relatively mechanical query engine, or it may be implemented by a raucous parliament. Our “models” may not be as model-like as we suppose.

The Spotlight of Attention

So what are the selection criteria for letting demons on the stage? Which demons do get promoted to the inner circle? Whoa—what inner circle? Alright, yes, there is no Cartesian Theater, not exactly, but even Dennett acknowledges that there is something like a consensus that forms (and pretty quickly at that) about what the narrative center of gravity is (or was) at any moment in my mind. This may be an artifact of the narrative-spinner demon, the chatterbox, and may not mean as much as it seems with regard to what “I” am thinking, but there is something to the notion that I thought about Fluffy today, but had not in the week before today. There is something like a spotlight of attention on certain trains of thought, even though (as I suspect) there are lots and lots of other trains of thought going on at the same time.

While I am here, I should just say that the proverbial spotlight of attention is a bad metaphor, even though I just used it. Attention is actively created, not passively observed. The spotlight metaphor wrongly implies that the thing attended to in the mind already exists, in all its detail, in the dark before the spotlight is shone upon it. In a way, the image of a spotlight of attention is a continuation of the Cartesian Theater. When was the last time you saw a spotlight drawing your attention? Probably in a theater.

Rather than imagining that all of our thoughts, percepts, memories, etc., are all there, fully realized, but in the dark until their moment in the spotlight, it is more likely the case that we function as a sort of just-in-time mental reality generator, creating things on the fly as we “turn our attention to” them. That said, it is hard to stop using this image, just as with the Cartesian Theater itself, for the same reason. There is some sense in which “I was thinking this” or “I was not aware of that, but I am now, for purely internal reasons.”

The idea of demons having to pay a price for inappropriate activation may help improve the “spotlight of attention” metaphor. As a demon, you get to create the spotlight any time you want, making other demons conform to you, just as any loser can pull a fire alarm. Seizing attention is really a way of corralling or bullying other demons into trying to apply themselves to you, even at a cost to themselves of less-than-appropriate activation. Depending on the situation, seizing attention is like issuing an “all hands on deck” with more or less urgency.

Attention, then, isn’t some spotlight being shone on a particular demon, but is the collective combination of lots of demons, perhaps with one at the center as a ringleader or catalyst. Things like pain or a threat tend to focus the attention. This may be a way of having one imperative light a fire under all the demons, in effect shouting at them, “I don’t care if this doesn’t fit your criteria of applicability! Find a way to apply yourselves to this situation, however suboptimal it is to you!”

Synthesis/Analysis Feedback Loop

There is one more wrinkle that I want to add to the Pandemonium model. On the one hand, there is this idea that parts of my mental processing are performed by somewhat autonomous demons. On the other hand, there is a strong sense that there is some kind of “what I was/am thinking,” even if we jettison the homunculus in favor of a “center of narrative gravity.” Dennett puts this on a sliding scale, speaking of “fame in the brain.” Once enough demons have bought into a particular thought or interpretation of something, some version of the narrative becomes relatively “famous.” As we construct our thoughts and percepts, how does their development coincide with this kind of fame?

As I am taking in a complex percept, I have to synthesize perceptual fragments into some kind of whole. Different sense modalities get bound. As I discriminate edges, light and shadow, colors, then shapes, things like tables, chairs, and pine-scented air fresheners get recognized as such individually as well as as belonging in the larger context. There is no naive perception, so, along the way, I (or my demons) do all kinds of scrubbing, smoothing, guessing, extrapolating, etc. I am convinced that even pretty simple perception is more creative than it is generally given credit for.

We get a lot of messy, noisy, patchy data from our senses, and various demons (or Dennett’s coalitions of demons) take a stab at cobbling different parts of it together into larger coherent (to them) chunks, discarding outliers, making inspired guesses. Eventually, they synthesize a whole bunch of data into a single, unified percept, complete with tendrils of association and valence, framing and background knowledge: ah, that gray blob is a veterans’ memorial. On the way to that unambiguous, stable, solid interpretation, however, there was a lot of thrashing around.

Whatever they come up with as a single interpretation, that final, unified percept, is only a first pass. This recalls Dennett’s Multiple Drafts idea, although he is a bit vague about how rough drafts get edited. I imagine that as soon as anything like a draft emerges, it gets attacked, more or less. Other demons try to break it back down again, along fault lines that they choose, not necessarily into the original components it was synthesized from. This becomes an iterative loop, with the same percept being built up and broken down, with possible subloops happening along the way. Stability (the “final” draft) happens when the result of the synthesis phase of the process no longer differs in successive loops—a consensus has been reached.
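
This build-up/break-down loop can be caricatured as a fixed-point iteration: synthesize a draft from fragments, let critic demons decompose it along fault lines of their own choosing, re-synthesize, and stop when successive drafts agree. The little functions below are stand-ins I have invented to make the shape of the loop concrete; nothing about them corresponds to real cognitive machinery.

```python
# Caricature of the synthesis/analysis loop: build a draft percept,
# let critics break it back down, rebuild, and stop when successive
# drafts agree (a consensus, or fixed point). Mechanics are invented.

def synthesize(fragments):
    """Cobble fragments into a single draft, discarding obvious noise."""
    kept = sorted({f.strip().lower() for f in fragments if f.strip()})
    return " ".join(kept)

def attack(draft):
    """Critic demons decompose the draft along their own fault lines,
    vetoing pieces they consider outliers (here: one-letter blobs)."""
    return [piece for piece in draft.split() if len(piece) > 1]

def settle(fragments, max_rounds=10):
    """Iterate synthesis and attack until the draft stops changing."""
    draft = synthesize(fragments)
    for rounds in range(1, max_rounds + 1):
        redone = synthesize(attack(draft))
        if redone == draft:          # consensus: the "final" draft
            return draft, rounds
        draft = redone
    return draft, max_rounds

# Messy, noisy, patchy sense data, settling into a stable percept:
percept, rounds = settle(["gray ", "blob", "x", "GRAY", "memorial", ""])
print(percept, rounds)
```

The detail worth noticing is the stopping condition: stability is not decreed by any executive, it is just what we call the state in which another round of demonic attack no longer changes anything.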

For unambiguous input, there are few iterations, little demonic controversy, and the processing is more or less automatic and unconscious. It is the more ambiguous, complex cases that take longer to stabilize, that end up engaging more and more demons. This is a slightly different take than Dennett’s fame in the brain, in that it’s the real battles that get famous.

Background Demons

Have you ever been listening to an oldies station, and heard a song that you have not heard in years or decades, but had the distinct sense that the very same song was going through your head sometime in the past week? Of course you have. I have had this sense suspiciously often. Often enough, in fact, that I have a hard time believing that I actually just happened to be replaying all those songs consciously in my memory in the few days before I heard them on the radio.

As I go through my life, I have a sense that my mind is not monolithic, that there are parts of it working away offline. Not only do results of these offline computations pop into my main stream of consciousness (however you might construe that term), but there is a definite sense, in me anyway, of a whole train of thought, in all of its what-it’s-like-to-see-red glory, being plugged into whatever else I was thinking about or experiencing. Such trains of thought come complete with a sense that they didn’t just come into existence at the moment “I” became aware of them, but that they had been developing on their own for some time.

Now of course this sense could be an illusion. As with déjà vu, I could be misremembering, mis(re)constructing my own mental history. But let’s go with this for a moment. This palpable sense of past mental history that gets retroactively grafted onto your “main” consciousness makes a lot of sense if your consciousness is made of semiautonomous demons. I think that all of my song memories are possibly playing all the time, but “I” am not aware of them. And if song memories work this way, what other memories are on hot standby? Is there a “Dancing in the Moonlight” demon, who just sings that song all the time, forever until you die? I can’t rule it out. It may be that all of our old moments of consciousness are still in there, as standing waves of some kind.

How do you ever get a thought in edgewise, with all these demons singing? Not to mention the ones thinking, remembering, and sensing your shoes through the soles of your feet. I suspect that you (or perhaps I should go with the scare quotes, “you”) tune them out. Like the jackhammer outside your window that you don’t realize is deafening until it stops, it’s not as if the demons go away or stop, but after a short while they just don’t impinge upon “you” anymore, unless it would be a good idea for them to do so. I am legion and I contain multitudes. I know that some consciousness happens, but I don’t necessarily know how much more consciousness happens that “I” don’t (need to) know about. At one time it took a lot of concentration for me to tie my shoes, but now I could almost do it in my sleep. I constructed a tying-shoes demon, and when I tie my shoes, somewhere in my mind, it is hard at work, concentrating like mad (although I can willfully focus my attention on the act of shoe tying and make it more globally conscious).

Epistemically Hungry Agencies

When I walk into a room, I may not consciously notice each of the fire sprinkler heads mounted on the ceiling. Do I see them? Even after a good look around, I would likely flunk if quizzed about their exact number or arrangement, even though I feel as though I have seen the whole room, in all its detail. Dennett says that this feeling is illusory. I choose to say that the sprinkler heads do not intrude, as it were, on my consciousness because, insofar as I care, there is nothing about them that should surprise, interest, or concern me. I’ve noticed them—if I had never seen or heard of a sprinkler head before, within a very few seconds upon entering the room they would command my full attention—but as it is, I’ve written them off at a relatively low level of perception. At some point in my life, I’ve noticed them, thought about them, stared at them during dull staff meetings, convinced myself that I more or less understand them. In effect, I have constructed a demon—a sprinkler head recognition agent. When I enter and scan a room, this agent is awake, active, but quiescent. Nevertheless, it contributes in some admittedly poorly understood way (by me at least) to where I’m at, consciously.

I have an overall sense that I see and comprehend the room. If I had the mind of a dog, I might still have some sense that I see and comprehend the room, even though the sprinkler heads never registered at all, much beyond the firings of the rods and cones on my actual retina. My dog mind has no sprinkler head recognition agents, nor does it have any particular curiosity about details it does not recognize (no epistemically hungry agencies, to use Dennett’s term). My human sense that I see the room and my satisfaction that I understand it are quite different than the dog mind’s sense, even though in the end we are both satisfied that we see and understand it. I see and understand insofar as I care, have ever cared, or could imagine caring about whatever it is I am looking at.

Active percepts, not just past memories, are demons. You tune out the actual shapes of the trees on the side of the road as you drive to work each day, the colors of the houses on your street, etc. Your eyes pick up all these details (that is, the corresponding photons do actually strike your retinas), and somewhere there is a perceptual demon who, according to this way of thinking, is exquisitely conscious of all that stuff, but “you” aren’t aware of it, unless there is a conscious effort at attention to such details.

You know how sometimes you remember an event from the distant past, and you are not sure if you are actually remembering the event or remembering your subsequent remembering of it on other occasions? Your memorable recalling of it in the past has effectively jammed the original memory. Any toehold or reference tag that would have triggered the original memory will also now trigger the memory of the memory. The original has been masked. Was Fluffy yellow? I always thought of him as yellow. But Mom has a photo and he’s black. Oops. Demons who cry wolf get ignored later (or countered more vigilantly).

I have a strong suspicion that a great deal of the mind’s activity is inhibitory. We spend an awful lot of effort shutting down streams of information, channeling activity, blocking and constraining. It strikes me that, to borrow an image from the memeticists, the mind is like an organism under constant assault by viral memes (demons). We tune out the singing demons by quickly developing antibodies to them. If the “Dancing in the Moonlight” demon sings the same song in the same way for too long, we jam the signal by installing a counter-signal, a counter-demon. We handicap; we compensate. It doesn’t stop, but we accommodate it by adjusting for its constant presence. And of course, even though I speak of singing demons, this goes for the remembering-my-childhood-cat-Fluffy demon as well, and the demons that notice the trees along the highway. The demon and its meme-jamming anti-demon are locked in a self-canceling embrace forever, leaving the mind as an intricate balance of tensions, like a bicycle wheel.
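
The jamming image resembles adaptive noise cancellation: a counter-demon learns a running estimate of a persistent signal and subtracts it, so a constant drone fades from “your” awareness while any change punches through. Here is a deliberately crude sketch of that; the leaky-average update rule and the learning rate are assumptions of mine, chosen only to make the habituation behavior visible.

```python
# Toy habituation-as-jamming: an antidemon maintains a leaky running
# estimate of a demon's signal and subtracts it. A steady signal is
# cancelled toward zero; a sudden change briefly breaks through.
# The update rule and learning rate are illustrative assumptions.

def habituate(signal, rate=0.5):
    """Return what reaches 'you' after the antidemon's counter-signal."""
    estimate = 0.0                 # the antidemon's learned counter-signal
    perceived = []
    for s in signal:
        perceived.append(s - estimate)          # jammed residue
        estimate += rate * (s - estimate)       # antidemon adapts
    return perceived

# A jackhammer droning at level 10, which suddenly stops:
drone = [10.0] * 8 + [0.0] * 4
felt = habituate(drone)
# The drone fades as the counter-signal locks on; when it stops, the
# leftover counter-signal makes the silence itself loudly noticeable
# (a negative spike: you "hear" the jackhammer stop).
print([round(x, 2) for x in felt])
```

This is why, on this picture, the jackhammer you had tuned out becomes deafening at the moment it stops: the demon goes quiet but its antidemon is still pushing, and the imbalance is itself a signal.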

This idea of demons/antidemons (or memes and antimemes) respects a couple of ideas. First, as mentioned above, it helps make sense of what Ned Block calls perceptual overflow, those conscious-but-not-conscious scenarios people have devised over the years: the ticking clock you are not aware of until it stops, the pattern of the design on the carpet, the sensation of your socks against your ankles. Your “peripheral” awareness of such things is in there, and part of your overall conscious field, but neutralized by an antimeme.

Most importantly, this idea of demons and antidemons respects a sense of holism in the mind. The mind, according to this conception of it, really is one unified thing, with a balance of tensions keeping much of it more or less inert at any given time. All the “parts” (demons, sensations, memories, whatever) are always right there, as part of your all-at-once now, but tuned out, or counterbalanced. Each one is not off somewhere in its own soundproof room, dormant or disconnected until needed. They are all there, all the time, fully patched in. We actively exert ourselves to cancel them out, jamming them with an anti-signal, and this exertion is a collective exertion, performed by other demons: the mind as a self-policing pandemonium, but a whole thing. As you know by now, I think holism is important, and I will come back to this theme shortly.

Fine, but What about Consciousness?

For all the reasons laid out in this book so far, you can’t get phenomenal consciousness from the causal interactions of functionally construed subunits, whatever you call them: agents, black boxes, or demons. Dennett has proposed a convincing solution (or at least a good image that suggests a solution) to Chalmers’s “easy problems,” but it leaves the Hard Problem untouched. At this point, Dennett would get exasperated, and insist that no, once you have solved the easy problems at this level, there is no Hard Problem. Moreover, I think Dennett would claim that any talk of qualia implicitly entails the existence of a homunculus sitting in the audience of the Cartesian Theater, and we’ve already dismissed that image with its infinite regress.

Strangely Swimming Conscious Demons

My own speculation is that the demons (the epistemically hungry agencies) are conscious, in the whole-hog Hard Problem qualophilic sense. The sprinkler head recognition agent feels quite clever, that it has made a really creative leap. It has never seen these particular sprinkler heads, in this light, from this angle, in this context, yet it declared them to be sprinkler heads. It is always thinking about sprinkler heads, and always looking for them, always trying to see them.

When I look at my living room, I seem to have a certain sense that I see it before me in all its colorful, varied entirety. What is the connection between this “certain sense” and actually seeing it? My sense of seeing it is not an opaque ability to answer questions. I don’t feed demands for information into a black box and get information back. It may well be, as Dennett says, that a pandemonium of demons (couch demon, rug demon, lots of other, more abstract demons concerned with context and associations) in some way contribute to my overall comprehension. Moreover, it may well be the case that this “overall comprehension” just is the pandemonium itself, not some master demon, or some Central Meaner.

Maybe later, if asked what was going through my mind, the “I was comprehending my living room” demon may be overruled by the “I was worrying about my property taxes” demon. Maybe I was comprehending the living room, but, come to think of it, I was paying special attention to the drapes. Or was I? Maybe any of the demons could make a good case that they were the whole point, the where-it-all-comes-together. From each demon’s point of view, it is right. We have lots of seats of consciousness in our minds.

If all the demons are conscious to some degree or another, if that term is to have any meaning at all, then there are some consciousnesses that never manifest themselves distinctly in any kind of a master narrative of “what was going through my mind.” Perhaps some of them are evolutionary dead ends in the pandemonic Darwinian jungle that is my mind. Maybe some of them don’t even nudge any of the others above the level of random noise or jitter, even though, for their possibly quite brief existence, they were conscious. There was something it was like to be them.

At one point (pp. 132–133) Dennett speaks about the impossibility of phenomenal consciousness that “you” aren’t conscious of:

We might classify the Multiple Drafts model, then, as first-person operationalism, for it brusquely denies the possibility in principle of consciousness of a stimulus in the absence of the subject’s belief in that consciousness.

Opposition to this operationalism appeals, as usual, to possible facts beyond the ken of the operationalist’s test, but now the operationalist is the subject himself, so the objection backfires: “Just because you can’t tell, by your preferred ways, whether or not you were conscious of x, that doesn’t mean you weren’t. Maybe you were conscious of x but just can’t find any evidence for it!” Does anyone, on reflection, really want to say that? Putative facts about consciousness that swim out of reach of both “outside” and “inside” observers are strange facts indeed.

Yes, yes they are, but there it is. We know that qualia exist, in the true blue maximalist sense. Moreover, the self is an unreliable narrator. If some qualia exist that we definitely know about and have cognitive access to, and figure into our ongoing selfy narrative, it is not crazy at all to think that there may well be other qualia that we don’t know about, at least not insofar as they are patched through to that chatterbox narrative-spinner. There are, in fact, consciousnesses within my skull that swim out of reach of any demon or collection of demons that might generate utterances or typings about what “I” am or was conscious of at any particular time.

This should not seem odd, frankly, even to someone like Daniel Dennett. However you define consciousness, assuming you find any use for the term whatsoever, why is it impossible, or even unlikely, that the submodules and sub-submodules that comprise my mind might themselves individually qualify as conscious? And if they do qualify as conscious, they might not all necessarily be patched into any larger consciousness, or feed into any higher level of consciousness (or perhaps, each one is not necessarily in the winning coalition in every election or debate). Of course the ones that do are probably more interesting to us, and how exactly they feed in is a subject for further speculation. And perhaps some of them spin off on their own until asked a certain way, or until the right kind of slot opens up for them to contribute their bit (recall Dennett’s constantly shifting coalitions of demons). So it should not seem silly or bizarre that, in some sense, I was conscious of a stimulus but didn’t know it. Or perhaps the “I” that reports on such things did not know it, or know it in the right way.

The Players Are the Audience

At this point, I want to pull back a bit. I mentioned mereology above, and I think that the specter that lurks over this chapter is that of compositionality. Dennett would say that demons don’t have to compose (at least, not beyond his possibly ephemeral coalitions). He would say that any talk of qualitative experiences that implies a scaled-up master experiencer falls prey to the infinite regress of the homunculus in the Cartesian Theater.

I agree that there is no distinct homunculus. So what is the explanandum here, the percepts or the perceiver? I’m with William James on this: the thoughts themselves are the thinkers. The memories are the rememberers, the experiences are the experiencers. While this must be true, notice that when I see a red apple, the thought is not merely of a red apple; it is of an observer seeing a red apple. The self of which we are aware when we claim to be self-aware is a simulation, constructed as part of our perceptual and cognitive apparatus, built into the percepts. The actors on the stage are the audience. I am the scene on the stage of the Cartesian Theater. James also suggested that instead of saying, “I am thinking,” it might be more appropriate to say, “it is thinking,” using “it” in the same sense that we use it when we say “it is raining.” I might add to James’s suggestion that in particular, it is thinking you. The sense of this is summed up in a quote by Johann Gottlieb Fichte that I found on page 93 of Strawson (2009):

The self posits itself, and by virtue of this mere self-assertion it exists; and conversely, the self exists and posits its own existence by virtue of merely existing. It is at once the agent and the product of action; the active, and what the activity brings about; action and deed are one and the same, and hence the “I am” expresses an act.

I realize that, throughout this book, it can seem a little unclear just what I take the explanandum to be. I started by talking about the redness of red, and sometimes it seems that I am interested in exploring the qualitative properties of experience, but sometimes it seems that I am interested in the act of experiencing itself, and at still other times I am interested in the self doing the experiencing. I have a pretty big-tent notion of qualia, and I am skeptical of the distinction between those three things. We know, with Cartesian certainty, that there are events of qualitative consciousness. So my answer to the question of which of those I think is the central mystery is “yes!”

Sometimes I imagine the perceiver/self as a gelatinous pseudopod-like thing, assuming the shape of whatever different thoughts it has. This notion of the unity of perceiver and percept also explains, to some extent, the troublesome second-orderliness of consciousness: to see red is to know that you are seeing red. In general, it seems mysterious that experiencing is inseparable from knowing that you are experiencing, that you can’t see the apple without also having a sense of yourself as an experiencing self. This mystery goes away if the self is a construct created specifically to bring about exactly this effect. We call the self into being precisely to be the subject of our experiencings, to give them an anchor, a point of view, to make sense of them.

The Self

This deflationary attitude about a persistent, unified self, something most of us believe in pretty solidly, sounds a bit off, or at least hard to get your head around. But it isn’t that far out. Just as the mass audiences who watched The Matrix had no trouble with brain-in-a-vat, most people are actually pretty comfortable with the idea of ego dissolution (or ego death). People who meditate, take various drugs, or have mystical experiences of one kind or another—or even play certain virtual reality games—approach this first-hand.

I have asserted that my experience of the redness of red is a really-there, objective fact about some event in the universe, one we should be able to explain, and that this experience of redness still needs to be explained regardless of whether there really is a red apple in front of me, or I am the victim of an illusion. My attitude about the self is that the self is more like the apple than the redness. It sure seems like there is a me, and that seeming is real, but the me that the experience is about? Well, I’m not so sure.

This is well-trod ground. (Please do yourself a favor and look up the comic strip on this topic from the brilliant Saturday Morning Breakfast Cereal series.) Tor Nørretranders called his book The User Illusion (Nørretranders 1998). The titular illusion is the interface our minds construct for us to make sense of the world, and that user illusion is, primarily, that you are a you. Like Dennett, he likens your sense of being a self to an icon on a computer desktop, a little avatar that presents an interface designed to facilitate useful manipulations of whatever is going on under the hood. Anil Seth (2021) takes a similar tack (he’s not big on qualia either, but that’s a different story).

Thomas Metzinger (2003) has a pretty good working out of this general idea. Roughly, when a control system (like our brains) evolves to a level of complexity where it includes a model of itself in its overall model of reality, any “self queries” are actually queries of this self-model. There is a homunculus, but it is a non-regressing kind, a cognitive convenience for us. Just as, in the physical world, I know that a tree is not-me but my arm is me, we posit this little self in our mental world and transpose the same kinds of relationships onto it: my experience of redness is not-me. Now let’s see how the me reacts to it.

Self as Status Register

A CPU, the real brain of a computer, has a bunch of so-called registers, just slots that can each hold a single number (generally a few bytes). They are named things like register A, register B, and so on. At each tick of the central clock, the CPU performs some operation or another, often involving these registers. It may add register D to register E and put the result in RAM (relatively distant, much bigger, memory), or something like that. Maybe it performs some other operation on some register. Maybe it fetches a number from RAM and sticks it in a register. Maybe it just examines a register.

There is a special register called the Status Register. Each bit in the Status Register reflects something about the most recent operation the CPU carried out. Was the result of the last operation 0? If so, the “zero” bit will be on in the Status Register. Was the result of the last operation a negative number? If so, the “less than zero” bit will be turned on in the Status Register. This information is then available to the next instruction. This is how conditional branching instructions work: “if some counter hit zero, now go do this other thing.”

The point is that this Status Register has to be artificially set up by each instruction, along with whatever else it is doing: sure, add register D to register E and put the result in RAM, but also make sure to set all the right bits in the Status Register. The Status Register doesn’t come for free. Programmers at that level can think of it as just an honest reflection of the state of the CPU and the status of the last instruction, but this is different from looking at a pile of rocks to see how big it is. This “honest reflection” is itself the result of engineering, verified by testing, and someone, somewhere along the line, could screw it up and set those status bits wrongly. It all works wonderfully, but when you look at the Status Register, you aren’t actually looking at the status of the CPU; you are looking at a carefully crafted display board that is presented to you on a need-to-know basis. Nørretranders, Dennett, Seth, Metzinger, and others see the self as being a lot like the Status Register in a CPU.
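To make the bookkeeping concrete, here is a minimal sketch in Python of a toy CPU with a status register. This is a deliberately simplified invention of mine, not any real instruction set; the class and flag names are hypothetical. The thing to notice is that every operation has to explicitly update the status flags as an extra step; they never reflect the machine’s state “for free.”

```python
import math  # not strictly needed; kept for a self-contained sketch


class ToyCPU:
    """A toy CPU illustrating that a status register is maintained, not observed."""

    def __init__(self):
        self.registers = {"A": 0, "B": 0, "C": 0, "D": 0, "E": 0}
        # The Status Register: each operation must set these bits itself.
        self.status = {"zero": False, "negative": False}

    def _update_status(self, result):
        # The extra bookkeeping every instruction performs. If this step
        # were buggy, the "honest reflection" would lie.
        self.status["zero"] = (result == 0)
        self.status["negative"] = (result < 0)

    def load(self, reg, value):
        self.registers[reg] = value
        self._update_status(value)

    def add(self, dst, src):
        result = self.registers[dst] + self.registers[src]
        self.registers[dst] = result
        self._update_status(result)
        return result


cpu = ToyCPU()
cpu.load("D", 5)
cpu.load("E", -5)
cpu.add("D", "E")  # 5 + (-5) = 0, so the "zero" flag should now be set
print(cpu.status["zero"])      # True
print(cpu.status["negative"])  # False
```

A conditional branch in a real program is then just a check of one of these flags: “if the zero bit is set, jump somewhere else.” The flag looks like a window onto the machine, but it is a display board that each instruction was engineered to repaint.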

Do Qualia Beg the Self Question?

As I mentioned above, Dennett would say that I can’t thread the needle here: I can’t say that qualia are real (in the really-there sense that I do, in fact, think they are real), while waffling on the status of the self. Once you are in the qualophile camp, you have already committed yourself to believing in a unified Central Experiencer. I am optimistic that Dennett is wrong about this, but I admit that I (or someone) still has to get a little more specific about how I can be so very sure I experience redness, but not so sure about the “I” part. To what extent does experience imply an experiencer? I believe that there is an experience of an experiencer experiencing a red apple, and maybe the apple is an illusion, and maybe the experiencer is also an illusion, but the experience is, must be, real. As I said: more detail needed.

Holism—The Real Sticking Point

My thoughts and percepts are one thing. I am sure of this, however fuzzy the edges may be in any individual case. If there are demons, they do stack, they compose, in some way, to produce a…thing. A real, really-there thing, not just a may-be-seen-as thing. Somehow, the “self,” if we can even call it that, does not sit apart from the performance on the stage of the Cartesian Theater, but incorporates it, in all its detail, in addition to the comprehension of those details in terms of a “big picture.” There is holism at work in minds, and there is some kind of fundamental e pluribus unum separate-yet-part-of-the-whole stuff going on.

Dennett hates this idea. He once said, “When everything is held to merge with everything else, when there are no clean joints at which to carve nature, science tends to wind down to a lazy halt: holism as the heat death of science.” I sympathize, I really do. Holism can be a “get out of jail free” card. You just wave a magic wand and say “holism” and you can explain anything you want by declaring your intention not to explain it at all. The whole point of inquiry, scientific or philosophical, as Sellars said, is “to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term.” This necessarily means analyzing things, breaking them down, figuring out how to describe some things in terms of other things, and thereby gaining insight. None of that happens as soon as we invoke holism, as we do when we see a big, seemingly complicated block of (something) in front of us, and we gesture at it and say, like a laid-back frat bro, “It is what it is, dude.”

As I’ve said before, the problem with this (often justified) antipathy for holism is that it leaves you unable to accommodate a scenario in which the world just happens to exhibit genuine holism. In the case of consciousness, that is exactly what we have staring us in the face. We may have to sharpen our conception of how we have these really-there, all-at-once experiences in the absence of a Central Meaner or a homunculus, but I think we can and must do that.


Cognitive Qualia

Even people who accept the Hard Problem as real still often make a distinction between cognition on the one hand and qualitative subjective consciousness on the other. Cognition, presumably, is amenable to analysis in terms of information processing, and may in principle be performed perfectly well by a computer. It encompasses Chalmers’s “easy problems.” Subjective consciousness, or qualia, is the answer to “what is it like to see red?”—i.e. the Hard Problem. Qualia is the spooky, mysterious stuff that no purely informational or functional description of the brain will ever account for.

I would like to either clarify or eliminate the distinction. What exactly do we mean by “cognition”? When we speak of cognition in a computer, is it really the same thing that we are talking about when we speak of cognition in a human being? When we speak of “cognition” and “qualia,” what are the distinguishing characteristics of each, such that we can be sure that some event in our minds is definitely an example of one and definitely not the other? The line between what we experience qualitatively and what we think analytically or symbolically is very hard, if not impossible, to draw. Even with the most purely qualitative impression, there is a troublesome second-orderliness—there is no gap at all between seeing red and knowing that you are seeing red.

My Phenomenal Twin

Recall that my zombie twin is an exact physical (and presumably cognitive) duplicate of me, but without any subjective phenomenal experience. It walks and talks like me, and for the same neurological reasons, but is blank inside. There is nothing it is like for it to see red. Horgan and Tienson (2002) suggest an interesting thought experiment that turns the zombie thought experiment on its head.

Imagine that I have a twin whose phenomenal experiencings (i.e. qualia) are identical to mine throughout both of our whole lives, but who may be physically different, and in different circumstances (perhaps an alien life form, plugged into the matrix from the movie The Matrix, or having some kind of hallucination, or a proverbial brain in a vat). The question that screams out at me, given this scenario, but that Horgan and Tienson do not seem to ask (at least not in so many words) is this: to what extent could my phenomenal twin’s cognitive life differ from my own? If the what-it-is-like to be it is, at each instant, identical to the what-it-is-like to be me, is it possible that it could have any thoughts, beliefs, or desires that were different from mine?

Now we may quibble over defining such things in terms of the external reality to which they “refer” (whatever that means, and I certainly will quibble quite a bit in chapters to come). And we may decide on this basis that my phenomenal twin’s thoughts are different than the corresponding thoughts in my mind, but this is sidestepping the really interesting question. Keeping the discussion confined to what is going on in our minds (that is, my mind and that of my phenomenal twin), is there any room at all for its cognition to be any different from mine? If every sensation it has, every pixel of its conscious visual field, every sensation, every emotion, everything of which we might ask “what is it like to experience that?”, is identical to mine, is it possible that while I was thinking about how to get rid of the wasps in my attic, it was thinking about debugging a thorny piece of software? (Charles Siewert (2011) makes similar points in his discussion of what he calls totally semantically clueless phenomenal duplicates.)

Think of a cognitive task, one as qualia-free as possible. Calculate, roughly, the velocity, in miles (or kilometers) per hour, of the Earth as it travels through space around the Sun. Okay. Now remember doing that. Besides the answer you calculated, how do you know you performed the calculation? You remember performing it. How do you know you remember performing it? Specifically, what was it like to perform it? There is an answer to that question, isn’t there? You do not automatically clatter through your daily cognitive chores, with conclusions and decisions, facts and plans spewing forth from some black box while your experiential mind sees red and feels pain, and never the twain shall meet. There is, of course, a ton of processing that happens subliminally, just as there is a ton of perception that happens subliminally. Nevertheless, just as there is a definite what-it-is-like to taste salt, you are aware, consciously, experientially, of your cognition. But what exactly is the qualitative nature of having an idea?
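For the record, the calculation itself goes something like this. The sketch below assumes a circular orbit of one astronomical unit (about 93 million miles), which is rough but good enough for a back-of-the-envelope answer:

```python
import math

# Rough estimate of Earth's orbital speed around the Sun.
# Assumes a circular orbit of radius 1 AU (about 93 million miles).
radius_miles = 93_000_000
hours_per_year = 365.25 * 24  # about 8,766 hours

circumference = 2 * math.pi * radius_miles
speed_mph = circumference / hours_per_year
print(round(speed_mph))  # roughly 66,000-67,000 mph
```

That is the easy part. The question in the text is what it was like to do it.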

[Figure: the Necker cube illusion and the duck/rabbit illusion]

David Chalmers has asked whether you can experience a square without experiencing the individual lines which make it up. This question nicely underscores the blurriness of the distinction between qualia in the seeing-red sense, and cognition in the symbolic-processing sense. When you see a square, there is an immediate and unique sense of squareness in your mind which goes beyond your knowing about squares and your knowledge that the shape before you is an example of one. What is it like to see a circle? How about the famous Necker cube? When it flips for you, to what extent is that a qualitative event, and to what extent is it cognitive?

It’s not an illusion, really. There is no sense when the cube flips for you that anything in your field of vision changed. Even a child can see with complete confidence that the lines on the paper did not change at all, but something in the mind did. This is in contrast to illusions in which, say, a straight line appears bent, and you are actually deceived. So the “raw experience” of black lines on white paper did not change, and there is not even a subjective sense that they did, but something changed.

The thing that changed has something about it that seems easy-problemish, in that it is about a cognitive inference, a second-order interpretation of the actual lines. Nevertheless, it is visceral, and immediately manifest, as much as the redness of red. Your “cognitive” interpretation of the cube (i.e. whether it sticks out down to the left or up to the right) has its own qualitative essence that outruns the simple pattern of black lines that you actually see. You might say that there is cognitive penetration of our experiences, but you could just as accurately say there is experiential penetration of our cognitive inferences. The classic duck/rabbit image is similar. You can’t merely see; you always see as. What is it like to see the word “cat”? Wouldn’t your what-it-is-likeness be different if you couldn’t read English, or any language that used the Roman alphabet? Your cognitive parsing of your visual field is inseparable from the phenomenology of vision.

The Qualia of Thought

What is it like to have a train of thought at all? How do you know you think? What is it like to prove a theorem? What is it like to compose a poem? In particular, how do you know you have done so? Do you see it written in your head? If so, in what font? Do you hear it spoken? If so, in whose voice? You may be able to answer the font/voice questions, but only upon reflection. When pressed, you come up with an answer, but up to that point you simply perceived the poem in some terms whose qualitative aspects do not fit into the ordinary seeing/hearing categories.

Among people who use the word “qualia” (whether they see it as a deep mystery like us qualophiles, or they take a more deflationary stance toward it, like the physicalists), there is a tendency to characterize qualia as exclusively sensory, and to think that any “qualia of thought” are qualitative only by inheritance. That is, we actually “hear” our thoughts in a particular auditory voice, or see things in our minds’ eye. I, for one, don’t think in anyone’s voice. Moreover, any qualia of thought are not just tagging along in the form of certain charged emotional states that accompany certain kinds of thoughts. All conscious thought is qualitative. The qualia are right there, baked into the thoughts themselves, as such. “Purely” “cognitive” “content” is itself qualitative, not just the font it is written in, or the voice it assumes when it is spoken, or the hope or the fear that we attach to it.

Anything we experience directly, whether it is the kind of thing we usually associate with sensation and emotion or with dry reasoning and remembering, is qualitative: a song, a building, a memory, or a friend. By definition, all I ever experience is qualia. Even when I recall the driest, most seemingly qualia-free fact, there is still a palpable what-it-is-like to do so. To the extent that our cognition is manifest before us in the mind in the form of something grasped all at once, whether in the form of something which is obviously perceptual or something more abstract, it is qualitative. How do you know you are thinking if you in no way express your thought physically (writing or speaking it)? A thought in your mind is simply, ineffably, manifestly before you, as a unitary whole, the object of experience as much as a red tomato is.

That we are aware of our thoughts at all in the way we are is no less spooky and mysterious than our seeing red. If you were a philosopher who was blind since birth, the “what is it like to see red?” argument for the existence of qualia would not have the same impact that it does on a sighted person. If you were also deaf, neither would “what is it like to hear middle C on a piano?” If you were an amnesiac in a sensory deprivation tank, would you have any reason to worry about these mysterious qualia that philosophers think about so much? You would, simply by virtue of noticing that you had a train of thought at all.

“What is it like to see red” and “what is it like to hear middle C on a piano” vividly illustrate the point of the Hard Problem to someone approaching these topics for the first time, but it is a mistake to stop at these. The redness of red is the gateway drug. Just because the existence of qualia is most starkly highlighted by giving examples that are non-structured and purely sensory, it is wrong to think that the mystery they point to is confined to the non-structured and purely sensory.

Even my fellow qualophiles are often too quick to accept the qualia/cognition distinction, however. The paradigmatic examples of qualia are good for convincing people that we don’t yet have a solid basis for understanding everything that goes on in our heads. It is tempting, however, to think that we are at least on our way to having a basis for understanding what is going on in our heads when we think. My point is that we don’t have a good basis for understanding that either.

I understand that this is a naked appeal to my readers’ intuitions. We have already crossed that Rubicon back in the first chapter with the introduction of the Hard Problem, qualia, and the redness of red. If you reject all of that, okay, I guess, but if not, if you accept the Hard Problem as real, upon reflection you must accept this as well. If you think there is anything deeply mysterious about the redness of red, you should be just as troubled by the thoughtiness of thought.

Just as qualia are not just the alphabet in which we write our thoughts, neither are they merely the raw material that is fed into our cognitive machinery by our senses. The qualia are still there in the experience as a whole after it has been parsed, interpreted, and filtered. Qualia run all the way down to the bottom of my mental processing, but all the way up to the top as well. We are not, to steal an image from David Chalmers, a cognitive machine bolted onto a qualitative base. Nor, as Daniel Dennett says (derisively), is qualitative consciousness a “magic spray” applied to the surface of otherwise “purely” cognitive thought. Each moment of consciousness is its own unique quale; new qualia are constantly being generated in our minds.

There are qualitative experiences that accompany, or even constitute, cognitively complex situations, but which are nevertheless no more reducible to “mere” information processing than seeing red is. Once, looking down from the rim of the Grand Canyon, I saw a hawk far below me but still quite high above the canyon floor, soaring in large, lazy circles. I was hit with a visceral sense of sheer volume—there is no other way to describe it. I felt the size of that canyon in three dimensions, or at least I had the distinct sense of feeling it, which for our purposes is the same thing. This was definitely something I felt, above and beyond my cognitively perceiving and comprehending intellectually the scene before me. At the same time, the feeling is one that is not a byproduct or reshuffling of sense data. After all, as a single human being, I only occupy a certain small amount of space, and can have no direct sensory experience of a volume of space on the order of that of the Grand Canyon. Had I not experienced this feeling, I still would have seen the canyon and the hawk, and described both to friends back home. The feeling is ineffable—there is no way to convey it other than to get you to imagine the same scene and hope that the image in your mind engenders the same sensation in you that the actual scene did in me.

Nevertheless, the feeling that the scene engendered in me only happened because of my parsing the scene cognitively, interpreting the visual sensations that my retinas received, and understanding what I was looking at as I gazed out over the safety railing. The overall qualitative tone of a given situation depends crucially on our cognitive, symbolic interpretation of what is going on in that situation. Further, the individual elements of a scene before us have qualia of their own apart from the quale of the whole scene. For example, there may be a red apple on a table in a room before me, and the image of the apple in my mind may have the “red” quale, even though it is part of and contributes to the overall quale I am experiencing of the entire room at that particular moment. The impression of the whole room, however indistinct this impression may be at the edges, is what it is, all-at-once, a quale. There is a whole-room-including-the-apple quale, and that incorporates a redness-of-red quale. The purported primitive, unstructured, sensory qualia somehow are still there, immediately, in the parsed, cognitively saturated qualia.

There are some entire types of qualia, moreover, that are inherently inseparable from their “cognitive” interpretation, experiential phenomena that are especially resistant to attempts to divide them into pure seeing and seeing-as. In particular, as V. S. Ramachandran and Diane Rogers-Ramachandran pointed out (2009), we have stereo vision. When we look at objects near us with both eyes, we see depth. This is especially vivid when the phenomenon shows up where we don’t expect it, as with View-Masters, or lenticular photos (those images with the plastic ridges on them that are sometimes sold as bookmarks, or that used to come free inside Cracker Jack boxes), or 3D movies. This effect is, to my satisfaction, unquestionably a quale. It is visceral. It is basic. You could not explain it to someone who did not experience it.

At the same time, it is obviously an example of seeing-as, part of your cognitive parsing of a scene before you. One might possibly imagine some creature seeing red without any seeing-as, unable to interpret the redness conceptually in any way, but it is impossible to imagine seeing depth in the 3D way we do without understanding depth, without thereby automatically deriving information from that. To experience depth is to understand depth, and to infer something factual about what you are looking at, to model the scene in some conceptual, cognitively rich way. Stereoscopic vision is our Promontory Summit, where the Hard and easy problems collide. It is an entire distinct sense modality, but one that is inextricably bound up in our information processing of the world.

Naive, or Pure Experience

What we know informs what we experience. I take it as pretty much self-evident that it is almost impossible to have a “pure” experience, stripped of any of the concepts we apply to that experience. Everything we experience is saturated with what we know, or think we know, what we expect, what we assume, etc. In terms of my actual direct experience of a visual field, I don’t have a raw bitmap: I have a scene, with stuff in it, and all that stuff has certain characteristics. I see this blob as a hydrant, that one as a cloud, this splotch as the Sun, that one as an object that I could touch, and that will probably persist through time. However experiences happen in minds, they are probably the result of lots of feedback loops at lots of different levels, all laden with associations and learned inferences, all stuff we might call cognition. There is no such thing as pure seeing, separated out from any seeing-as.

Through an act of willful intelligence, I could decide to concentrate only on those things in a scene before me that begin with the letters M, N, and G. Alternatively, I could choose to pay special attention to those things made of metal. In the same way, through willful, intelligent effort, I can try to distill some “pure experience” from the scene, and come up with something like a raw bitmap, perhaps for the purpose of painting a picture of the scene on a canvas.

Even in the case of our old friend, the redness of the apple, our immediate experience is that of a red apple, not some free-floating redness of red. It is by an effort of willful abstraction that we distill the image of the apple into this purported redness quale, distinct from any cognitive parsing of the scene. But even if this effort could possibly ever be 100% successful, this is further processing, more cognition, not less. The actual immediate qualitative conscious experience is that of an apple, sitting on a plate on a countertop, all mashed up with whatever thoughts we have about the apple, or apples in general, trailing off at the edges in a penumbra of memories and associations. The whole all-at-once thing is a quale. For this reason, it is a bit backward to think that I am starting with my cognition-soaked experience and working to get back to the “raw” experience, because that presumes there was originally such a thing to get back to.

This represents something of an expansion beyond what is commonly meant by “qualia” in the literature. If we are to carve nature at its joints, we should not stop at this intellectualized, abstracted redness of red. The whole experience is a quale, and any attempt on our part after the fact to decompose an experience into components is additive, not subtractive. You haven’t reduced anything that way. If my readers get nothing out of this book other than a somewhat more expansive notion of “qualia,” I will take that as a win.

Our thoughts and experiences are not the mindless clattering of “cognitive” machinery all the way down, but rather qualia all the way up. In some way, our ineffable qualia are interwoven with our judgments, conclusions, assumptions, and thoughts. It is a step in exactly the wrong direction to conclude from this that knowledge and concepts can take full responsibility for experience, and that knowledge and concepts are among Chalmers’s “easy” problems, solvable within the framework of reductive materialism. This step entails discarding qualia altogether, and concluding that experience is cognition all the way down. Nevertheless, this is the step, more or less, that Daniel Dennett takes.

Dennett on “Direct” Perception vs. Judgment

Daniel Dennett (1991) makes a great deal of the difficulty of distinguishing clearly between experiencing something as such-and-such, and judging it to be such-and-such. In response to an imaginary qualophile, Dennett says (p. 364):

You seem to think there’s a difference between thinking (judging, deciding, being of the heartfelt opinion that) something seems pink to you and something really seeming pink to you [emphasis original]. But there is no difference. There is no such phenomenon as really seeming—over and above the phenomenon of judging in one way or another that something is the case.

In Consciousness Explained, Dennett gives many examples that serve to undermine our faith that we really do experience what we think we experience, and there are many others that are not in his book. That said, I can’t help but smile at the fact that even he used the qualitatively loaded term “heartfelt” in the way he did in the quote above—seems like begging the question a bit given the argument he is making.

Dennett says to imagine that you enter a room with pop art wallpaper; specifically, a repeating pattern of portraits of Marilyn Monroe. Now, we only have even reasonably high-resolution vision in our fovea, the portion of our field of vision directly in front. The fovea is surprisingly narrow. We compensate with saccades—unnoticeably quick eye movements. Even with the help of these saccades, however, Dennett says, we could not possibly actually see all the details of all the Marilyns in the room in the time it takes us to form the certain impression of being in a room with hundreds of perfectly crisp, distinct portraits of Marilyn. I’ll let Dennett himself take it from here (pp. 354–355):

Now, is it possible that the brain takes one of its high-resolution foveal views of Marilyn and reproduces it, as if by photocopying, across an internal mapping of the expanse of wall? That is the only way the high-resolution details you used to identify Marilyn could “get into the background” at all, since parafoveal vision is not sharp enough to provide it by itself. I suppose it is possible in principle, but the brain almost certainly does not go to the trouble of doing that filling in! Having identified a single Marilyn, and having received no information to the effect that the other blobs are not Marilyns, it jumps to the conclusion that the rest are Marilyns, and labels the whole region “more Marilyns” without any further rendering of Marilyn at all.

Of course it does not seem that way to you. It seems to you as if you are actually seeing hundreds of identical Marilyns. And in one sense you are: there are, indeed, hundreds of identical Marilyns out there on the wall, and you’re seeing them. What is not the case, however, is that there are hundreds of identical Marilyns represented in your brain. Your brain just somehow represents that there are hundreds of identical Marilyns, and no matter how vivid your impression is that you see all that detail, the detail is in the world, not in your head. And no figment [Dennett’s term for the metaphorical “paint” used to depict scenes in his Cartesian Theater—figmentary pigment] gets used up in rendering the seeming, for the seeming isn’t rendered at all, not even as a bit-map.

The point here is that while we may think we see the Marilyns on the wall, and we may think that we have a qualitative experience to that effect (just like our qualitative experience of seeing red), this is almost certainly not the case. Instead, what is happening is that we have inferred, or judged, that there are Marilyns all over the wall, and we have a very definite, certain feeling that we actually see these Marilyns. Sometimes we think we directly experience things that are right in front of our faces, but really we just conclude that we have experienced them. Our inability to tell the difference is intended to make qualophiles like myself uneasy.

Dennett also discusses the blind spot in our visual field. There are simple experiments that demonstrate that a surprisingly large chunk of what we normally think of as our field of vision is not actually part of our field of vision at all. We simply cannot see with the part of our retina that is missing because of where the optic nerve leaves the eyeball. The natural, naive question is: why don’t I notice the blind spot? The equally natural, and equally naive explanation is that the brain compensates by “filling in” the blind spot, guessing or remembering what should be seen in that region of the visual field, and painting (applying more figment) that pattern or color on the stage set in the Cartesian Theater.

Dennett is quite emphatic that nothing of the sort happens. There is no Cartesian Theater, so no filling in is necessary. There is no such thing as seeing directly, there is only concluding—so once you conclude (or guess, or remember) what should be in the blind spot, you are done. There is no inner visual field, so there is no need for inner paint (figment), or inner bit maps. We do not notice the blindness because “since the brain has no precedent of getting information from that gap of the retina, it has not developed any epistemically hungry agencies demanding to be fed from that region.”

I think I am being fair to Dennett to characterize his basic claim as follows: we think that our direct experience is mysterious, but often it can be shown pretty straightforwardly that when you think you are directly experiencing something, really you are just holding on to one end of an inferential string, the other end of which you presume to be tied to this mysterious experiencing. Given this common and easily demonstrated confusion, it is most likely that all purported “direct experience” is like this, that all we have is a handful of strings. We never directly experience anything; we just judge ourselves to have done so. We never see; we only think we see. Even the redness of red.

Materialists like Daniel Dennett often use optical illusions as examples. You thought you saw one thing, but it actually turned out to be another! Or even, your judgment about your perception itself turned out to be wrong, and if your judgment about your “direct” perception is fallible, well, that’s the whole game, right? We certainly should not make sweeping metaphysical pronouncements based on something that could just be wrong. Perhaps not, but the purported wrongness is a minor detail in these cases. Even the “wrong” perception proves the larger point. We can be mistaken in our judgments about our perceptions, but we cannot be mistaken about having perceptions at all. The circle of direct experience may be smaller than we usually think, or it may have less distinct boundaries, but we cannot plausibly shrink it to a point, or out of existence altogether.

It may be impossible to draw a clear distinction between experience and judgment, but this is because judgment is itself a sort of structured experience. There is no naive experience: our judgments are part and parcel of our perceptions. Nevertheless, it is interesting that Dennett never clearly and simply defines “judgment.” Computers do not know, judge, believe, or think anything, any more than the display over the elevator doors knows that the elevator is on the 15th floor. All they do is push electrons around. Even calling some electrons 0 and others 1 is projection on our part, a sort of anthropomorphism. It seems as though I see all the Marilyns; Dennett says no, I merely judge that I see them. He is right to force us to ask ourselves how much we really know about the difference. He is wrong to think that the answer makes either one of them less mysterious, or more amenable to a reductive, materialist explanation.

It is kind of like the difference between having a map showing a place you need to drive to, and having a GPS feeding you a list of directions to that place. You can follow the directions, turning where they say to turn, without ever forming any overall conception of where you are or where you are going. The GPS can even tell you how to get back on track if you make a wrong turn. You can simply follow the directions, and never “put it all together” into any bird’s eye, directional sense of where you are.

Could it not be the case that even when we do have a sense of where we are—like in the middle of our home town—that sense is an illusion, and all we really have is a really good set of directions for how to get to any place we might need to go? When it comes right down to it, is there any real difference between “directly” perceiving something in all its detail on the one hand, and having on-demand answers to any questions you might pose about that thing on the other? Could it be the case that we think that we have an immediate, all-at-once conception or perception of something, but all we really have is an algorithmic process that is capable of answering questions about that something really quickly, a just-in-time reality generator?

If I think I have a conception of something—say, a soldering iron—could it turn out that really there is nothing but an algorithm, a cognitive module in my head with specific answers to any question I could have about the soldering iron? At any point, in any situation, the algorithmic module would produce the correct response to any question about the soldering iron in that situation. How to use it, what it feels like, its dangers, its potential misuse, its utility for scratching my name with its tip into the enamel paint on my refrigerator. Such a module would serve as a just-in-time reality generator with regard to any experience I might have involving the soldering iron. It would consist of a bundle of expectations of sensory inputs and appropriate motor outputs regarding the soldering iron.
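For the computationally inclined, the thought experiment can be sketched in a few lines of code (a purely hypothetical illustration; the class and its canned answers are mine, not a claim about how any actual mind works):

```python
class SolderingIronModule:
    """A hypothetical 'just-in-time' module: no inner picture of the
    soldering iron at all, just canned answers to whatever question
    happens to come in."""

    _answers = {
        "how do I use it?": "hold the handle, touch the tip to the joint",
        "what does it feel like?": "warm handle, dangerous tip",
        "what are its dangers?": "the tip causes burns; never touch it",
    }

    def query(self, question):
        # The module presents a clean question-in, answer-out interface,
        # with no unified conception of the iron behind it.
        return self._answers.get(question, "no stored answer")

module = SolderingIronModule()
print(module.query("what are its dangers?"))  # the tip causes burns; never touch it
```

From the outside, nothing distinguishes such a lookup table from a genuine, unified conception of the soldering iron, and that is exactly the worry.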

To use computer terminology, as long as the soldering iron algorithmic module presented the correct application programming interface (API) to the rest of the mind, isn’t it possible that the mind is “fooled” into thinking that it has a qualitative idea of the soldering iron, when all it really has is a long list of instructions mapping input to output? Is there really any difference between the two ways of characterizing our cognitions regarding soldering irons? No, it is not possible, and yes, there is a difference. When I see the soldering iron, I really do see it. The sense I have of that, even with its tendrils of inference and association extending into the shadows, is itself the explanandum here, just as the redness of red was in Chapter 1. What is the sense that I see all the Marilyns if not itself a quale?

The difficulty with cleanly distinguishing between “directly” perceiving something and merely judging it to be a certain way (while having specialized modules for answering questions about it) is not limited to visual perception, or indeed to perception at all in the usual narrow sense. Nor is it limited to perception of the outside world. The same kinds of ambiguity exist with regard to our understanding of our own minds. I believe the sun will rise tomorrow. Do I really hold this single belief, or is it just a huge bundle of expectations and algorithms, each pertaining to specific situations or types of situations that I might find myself in, in which I might be called upon to deploy this purported belief?

Any of the unitary things we naturally posit in our minds (models, images, memories, beliefs) could have some component at least of such a bundle of algorithms, or agents. For any such thing, what is its API to the rest of the system, really? How much can we really say about the underlying implementation that instantiates that API? Maybe I just infer somehow that I have a belief that the sun will rise tomorrow, but that “belief” is not nearly the short little statement written down somewhere that it seems to be. The articulation of the belief could, as Dennett suggests of all of our articulations, be the result of some kind of consensus hammered out by lots of demons or agents. Nevertheless, the sense that I have such a belief is real, and unitary, even if some kind of computational mechanism that contributes in some way to that sense is not. Could I really have that sense of believing the sun will rise tomorrow without actually holding that belief?

Frankly, I don’t know right now what a belief is, or what a judgment is, when it comes down to it. It may, however, not be enough to characterize them only in terms of the functional role they play in our cognitive architecture, which is to say, in terms of the API they present to the rest of the system, while remaining implementation-agnostic. At the very least, anyone who wants to dismiss qualia as “merely” complexes of judgments or beliefs must make the positive case that judgments and beliefs can and should be characterized entirely in non-qualitative terms themselves.

Naive, or Pure Cognition

Many philosophers agree that, in minds, qualitative consciousness and cognition are closely related, if not two ways of seeing the same thing, but make the mistake of concluding that qualia must therefore be merely information processing, which we think we understand pretty well. “Information” is a terribly impoverished word to describe the stuff we play with in our minds, even though much of what is in our minds may be seen as information, or as carrying information. Shoe-horning mind-stuff into the terms of information theory and information processing is a homomorphism, a lossy projection. There are no easy problems in the easy vs. Hard Problem sense. The way the mind processes information has a lot more in common with the way the mind sees red than it has with the way a computer processes information.

Once again, the computer beguiles us. Of course, we created it in our own image, so it is no surprise that it ends up being an idealized version of our own intuitions of how our minds work. We understand computers down to the molecular level; there are no mysteries at all in computation. And clearly, in some sense at least, computers know things, and they represent things. I can get some software that will allow me to map my entire house on the computer, to facilitate some home improvement projects I have in mind. And lo! my computer represents my couch, and seems to understand a lot about its physical characteristics, and it does so completely mechanically. We can scrutinize what it is doing to achieve that understanding all the way down to the logic gate level and beyond. We are thus confident that we know exactly what is going on when we speak of knowledge, representation, information processing, and the like. There is nothing mysterious here, at least in the mechanics of what is going on. It is a simple step from there to imagine that the brain, for all its neural complexity, is (just) a computer in all the relevant ways, and we only need to figure out its implementation details.

Just because we can design devices that simulate a lot of the functions of cognition, this does not mean that these simulations do it the way we do it, any more than a computer connected to a video camera correctly identifying a red apple sees red. In terms of our understanding how minds do what they do, I’m afraid the easy problems are hard too.

My zombie twin identifies the apple as red, and exclaims about its redness the same way I do, but by hypothesis it does not really see the redness the way I do. It “sees” but does not see. You may not agree with this, but I hope at this point you at least understand what I mean by that. By the same token, my zombie twin “plans” summer vacations, “worries” about its taxes, and “believes” the sun will come up tomorrow, but to say that, just because its internal causal dynamics are identical to my own internal causal dynamics, it really plans, worries, and believes is to make a big and, I think, wrong statement about the nature of planning, worrying, and believing. At best, you are using those terms loosely, in the same way we say a magnet knows about a nearby piece of iron.

We do not study cave paintings as clinically accurate diagrams to learn about the human and animal physiology depicted therein. We study them to learn how their ancient creators saw themselves and their world, to get inside their heads. The real insights into the mind to be gained from computers come from considering that this, this particular machine, is how we chose to idealize our own minds.

I can write “frozen peas” on a grocery list, and thereby put (mechanical) ink on (mechanical) paper. Later, when I pull out the list at the store, and it reminds me to put frozen peas in the cart, this physical artifact interacts with photons in a mechanical way. The photons then impinge upon my sensory system, and thus, in turn, my mind. So the paper and ink system represents frozen peas; it knows about them. Of course, most computers we use today are a bit more complex than the paper grocery list, but the essence is the same—there is the same level of knowledge, representation, information processing, etc., going on in each. We can say that in a sense, the list really does know about the frozen peas, but not in a way that necessarily gives us any insight at all into how we know about peas.

There is no “pure” cognition in the mind, at least none that we are directly aware of. Over a century ago, philosophers did not separate cognition and qualia the way they do now. It was only in the early part of the 20th century, with the ascendance of behaviorism and the advent of information theory and the theory of computation, that we Anglophone philosophers started thinking that we were beginning to get a handle on “cognition,” even if this qualia stuff still presented some problems. When some thinkers felt forced to acknowledge qualia, they grudgingly pushed cognition over a bit to allow qualia some space next to it in their conception of the mind, so the two could coexist; now they wonder how the two interact.

The peaceful coexistence of cognition and qualia is an uneasy truce. Qualia cannot be safely quarantined in the “sensation module,” feeding informational inputs into some classically cognitive machine. We must radically recast our notions of cognition to allow for the possibility that cognition is qualia is cognition. Qualia are not just some magic spray that coats our otherwise functional machinery, or some kind of mood that washes over our minds. Qualia are what our minds are made of, the girders and pistons as well as the paint. This is the bullet I am biting in this chapter, that of expanding the notion of qualia beyond the paradigmatic unstructured sensory experiences, to include lots of other phenomena as well, many that are not sensory, and quite structured indeed.


Doesn’t It All Just Come Down To Information?

One man’s algorithm is another man’s data.

What Even Is Information, Anyway?

Information is one of the great buzz words of the last several generations. The term has been in use in the English language for centuries, but it started to be used in its present technical sense in 1948, when Claude Shannon, a brilliant communications engineer working for the phone company, published “A Mathematical Theory of Communication,” ushering in the field of inquiry now known as information theory. He formalized the use of the term, and made it mathematically quantifiable. He thought of information as sequences of bits, or ones and zeros.

Shannon was not a philosopher; he was an engineer. He mathematicized information so that he could calculate, for example, that a communications channel capable of transmitting X bits per second with an error rate of up to Y bits per 1000 could be used to transmit Z bits per second error-free (where Z is somewhat smaller than X), given some sort of transformation of the information on either end of the communications channel. He was concerned with noise on the wire. He was concerned with characterizing the “information density” in a given stream of bits so that by compressing the stream (i.e. increasing the information density) one could effectively transmit the same amount of information using fewer bits and therefore less bandwidth on the communications channel, thereby saving the phone company money. Essentially, Shannon was interested in very practical, meat-and-potatoes sorts of questions. Others, however, have not been so conservative. Information theory has inspired many philosophers to make extravagant claims, and information has become one of the most popular bases in the reductionist’s toolkit. That is, just about everything at one time or another has been argued to be really just information, or information processing.
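Shannon’s quantification is concrete enough to compute. The sketch below (my own minimal illustration, not Shannon’s notation) measures the entropy of a stream of symbols in bits per symbol, which is the theoretical floor beneath which no lossless compression can squeeze that stream:

```python
from collections import Counter
from math import log2

def entropy_bits_per_symbol(stream):
    """Shannon entropy: -sum(p * log2(p)) over the observed symbol frequencies."""
    counts = Counter(stream)
    total = len(stream)
    return -sum((n / total) * log2(n / total) for n in counts.values())

# A maximally dense bit stream: 0 and 1 equally likely, so 1 bit per symbol.
print(entropy_bits_per_symbol("01010101"))  # 1.0

# A redundant stream, mostly 0s: well under 1 bit per symbol,
# so it can be compressed to save bandwidth.
print(entropy_bits_per_symbol("00000001"))  # about 0.54
```

Note that the calculation never asks what the bits mean; it cares only about their statistical distribution.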

Of particular interest here, of course, are minds and consciousness. Indeed, the entire cognitive science program is predicated on the notion that the brain is (just) a complicated information processor—that not only can it be seen in terms of information processing, but seeing it in these terms (or “higher-level” terms grounded in such terms) captures what is interesting about the brain in its entirety. A consequence is that any similarly configured information processor of equal capacity would manifest a mind in every sense that the brain itself manifests one. These sound like large and sweeping claims, but we cannot even know whether they are or not (let alone whether or not they are true) until we nail down what is meant by “information” and “information processing.”

In what sense does an information processor actually process information? How does it manipulate symbols? In spite of the well-developed field of information theory, it is devilishly hard to find anyone who commits an actual definition of the term “information” to print. While qualophiles may not have answered the corresponding questions for the term “qualia,” they acknowledge at least that there is work to be done along these lines. People on both sides of the Hard Problem debate, however, too easily assume that we know what we are talking about when we speak of information and information processing. Information is more difficult to pin down than is generally accepted, and there are very different things that are meant by the term depending on the context.

“Information” is a perfectly fine English word, and it has been in use for a long time. For all I know, Shakespeare may have used it. Everyone has a rough and ready, colloquial sense of what it means, and people use the word to communicate with each other every day. It also has this highly technical, bits-and-bytes-on-a-wire meaning. Mischief and confusion result from this mismatch, so if we are going to define anything in terms of information, we’d better be clear about what we mean, or at least we should have some distinct lines we can draw between information and not-information.

Information Is a Platonic Abstraction

There are molecules of ink on a page made of more molecules; there are perturbations in a physical electrical field on a metal wire; there are photons of light which propagate through an optic fiber. When I look inside a computer, I see voltage levels, and diodes which behave differently when subjected to different voltage levels. All these things (or collections or patterns thereof) may be seen as information, but the key phrase there is “may be seen.”

Information theory is a branch of mathematics, and bits (0s and 1s), like lines and points in Euclidean geometry, don’t really exist, at least not out there in the real world. They are Platonic abstractions. We may profitably see things that are really there (like voltage levels) as information, and make generalizations, and hence predictions, about those voltage levels based on our analysis, but the specific predictions we come up with will never be anything that we could not, in principle, have derived from a sufficiently detailed knowledge of the physical system alone, without reference to any notion of “information.”

Information is an abstraction, and abstractions, to a physicalist, must be cashed out in terms of the nuts and bolts that make up the actual physical universe. In practice, it may be very difficult for us to make useful predictions about an information-processing system at the level of raw physics, but the universe itself has all it needs to clank along, one moment to the next, without our notions or theories of “information.” Put differently, once God had established all the physical facts of the universe (i.e. the physical laws and initial conditions), He did not have to do any additional work to determine the facts about information processing. Everything the universe needed with regard to information was already baked in.

Information is always carried, or manifested, by something else. More pointedly, information always just is something else. By itself, information doesn’t do anything. There is always something else doing the work, and that something else would do that work whether or not we think of it as informational. It is not merely the case that the information needs a substrate to instantiate it—the information just is the physical substrate, just as heat just is the mean kinetic energy of a collection of molecules. A system may be seen as informational, and we may thereby derive interesting and important conclusions, but these conclusions will themselves be may-be-seen-as conclusions, couched in terms of the abstractions of information theory.

But when people invoke the term “information” to describe some physical stuff interacting with other physical stuff, they are not usually talking about the stuff itself as such. Information is necessarily abstract. It is not the voltage levels or the ink, but the pattern of voltage levels or ink. As Rosenberg has pointed out (1998), the informational content of anything, whether ink on a page or electrical impulses on a wire, is a bare schema, or a pattern of bare differences. That is to say, the differences by virtue of which something is considered to be information are differences that are circularly defined in terms of each other. What is 0? It is not 1. What is 1? It is not 0. And this is all you ever need to know, all there is to know, about 0 and 1.

0 and 1 can be manifested, or carried, by any medium capable of assuming two distinguishable states (voltage levels on a wire, water pressures in a hydraulic system, wavelengths of light on an optic fiber). This substrate must have a nature of its own that outruns the simple criterion of distinguishability of states necessary to carry, represent, or manifest the abstract 0s and 1s of the purported information itself. One of information’s distinguishing characteristics is that it is independent of its particular carrier. Information is arbitrarily transposable, or, to use a popular term, it is multiply realizable.

As I (and others) have argued, qualia are not arbitrarily transposable. Qualia are not themselves information, although they can carry information. Qualia are not a pattern of anything else, but the stuff of which patterns can be made, the substrate whose nature outruns the criterion of (mere) distinguishability. Redness is a qualitative essence and cannot survive any transformation or translation into anything but redness. Some information could turn out to be conveyed by qualia, but qualia can’t ever turn out to be (just) information.

Information Represents

Understanding, then, that when we speak of “information,” we are not speaking of something real in itself, but rather of some good old physical thing that may be seen as carrying or manifesting information, what makes some physical things count as information and others not? It might be tempting at this point to turn from the strictly syntactic notions of information theory to a more semantic characterization of information. We might say that information represents something.

If we go there, however, we have left Claude Shannon behind. He and the phone company don’t care what bits represent, or whether they represent anything at all. We are no longer in the quantitative, technical realm of bits, bytes, formulas, and information theory, and we have entered the squishier world of connotation, context, and intuition.

Back in the old days, I had an answering machine on my home telephone. When I didn’t pick up the phone, it told whoever was calling that I wasn’t home right now. It was a classic, purely causal, beer-can-falling-off-a-fence-post physical system. To what extent was it really, truly, representing me as not being home right now? How much more internal state would it have to have, “modeling” the world in some special way, perhaps processing this model in an “integrated” way, before we would say that yes, it really was representing me as not being home, in any way that was relevant to these discussions?

What does it mean for information to represent (without circular reference to information)? What do we mean when we use the term “represent”? What is the core intuition or experience that leads us to use the term the way we do? Does the light from distant stars, striking an earthly telescope, constitute information that represents the stars? Do all effects represent their causes, simply by virtue of the fact that someone might potentially be able to infer the cause (or something about the cause) just by observing the effect?

We might then start by saying that thing1 represents thing2 if thing1 is caused by thing2, or if thing1 varies in regular, lawlike ways as a function of variations in thing2. It is sometimes said that information is “a difference that makes a difference.” But this is too broad to be any use at all. Since, from the time of the Big Bang, each particle in the universe has some influence on every other particle (from the non-zero gravitational influence that any two objects of non-zero mass exert upon each other, if no other), everything is caught up in the causal mesh. Everything behaves just the way it does as a function of everything else (at least, everything else in its “light cone”, if you want to be physically accurate). If information is anything which is caused by other things in lawlike, regular ways, then everything is information. In fact, everything is information about everything else. If everything is information about everything, then the term is nearly useless, and should be replaced, in philosophical debates, with the more honest term “stuff.” And “information processing” could reasonably be replaced with the expression “stuff doing stuff under the influence of other stuff.”

Descriptive vs. Prescriptive Information

A great deal is made of the fact that information represents, but this descriptive, representative sense is only half of the informational story. There is a whole other aspect of information that plays a huge role in our lives and in our theories. Information comes in two flavors: (1) prescriptive (“pick that up”) and (2) descriptive (“the museum is open today”). Algorithms are prescriptive, data is descriptive. The algorithm operates on data. The opcodes (short for “operation codes”) that comprise a computer program at the lowest level are prescriptive information (they tell the CPU what to do during a given tick of the computer’s internal clock), whereas the data upon which the program operates (whether that data comes from inside the computer’s memory or outside it, through an input device) constitutes descriptive information. Descriptive information represents (or misrepresents) something, while prescriptive information tells you to do something. If a fragment of a computer program says, “If x is greater than 43, open the pod bay doors,” the fragment itself is prescriptive, while the number being examined, the x, is descriptive data. Those opcodes are purely causal, and themselves comprise absolutely everything a computer ever does. Their prescriptive nature is as blunt as that of a baseball hitting an antique vase. They just do.
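The pod bay door fragment can be made concrete with a toy interpreter (entirely my own illustration; the opcode names are invented). The command tuples are prescriptive, telling the machine what to do, while the value x is descriptive data being inspected:

```python
def run(program, x):
    """A toy interpreter: each tuple is a prescriptive command (an 'opcode');
    the value x is descriptive data the program merely examines."""
    flag = False
    doors_open = False
    for op, arg in program:
        if op == "CMP_GT":      # compare the data x against a constant
            flag = x > arg
        elif op == "OPEN_IF":   # act on the result of the comparison
            doors_open = doors_open or (flag and arg == "pod bay doors")
    return doors_open

# "If x is greater than 43, open the pod bay doors."
program = [("CMP_GT", 43), ("OPEN_IF", "pod bay doors")]
print(run(program, 44))  # True
print(run(program, 10))  # False
```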

In everyday conversation, we tend to think of information as primarily descriptive: it sits there, and you hold it before you and regard it: “Oh, so Bismarck is the capital of North Dakota. How interesting.” But algorithms are information too (“Go three blocks, turn left at the light, pull into the Krispy Kreme drive-through, and order a dozen hot glazed doughnuts.”). As far as information theory is concerned, Shannon’s laws, etc., don’t care at all whether the information is taken as descriptive or prescriptive by the eventual receiver of the information. Any string of 0s and 1s has the same bandwidth requirements on the wire and is quantified exactly the same way whether regarded as descriptive or prescriptive, as data or algorithm.

If you find a computer file full of binary data, and you have no way of telling what the data was used for, you cannot tell whether the file constitutes descriptive or prescriptive information. There is no fact of the matter, either, if you just consider the computer’s disk itself as a physical or even an informational artifact. It’s just a bunch of 1s and 0s. For you to make the prescriptive/descriptive distinction, you must know what the file was intended for, and in particular, you must know a lot about the system that was supposed to read it and make use of it. Only by taking the receiver of the information into account, and looking closely at how it processes the information, can we determine whether the file constitutes data or algorithm. Does the receiving system open the file and treat it as salary records, or does it load up the file and run it as a program? Indeed, one system could treat it as a program, and another could treat it as data, compressing it perhaps, and sending it as an attachment in an email message. The choice of whether a given piece of information is prescriptive or descriptive depends on how you look at it.
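A minimal sketch of the point (my own illustration): the very same bytes are algorithm to one receiver and data to another, and nothing in the bytes themselves decides which.

```python
# The same bytes, handed to two different receivers.
blob = b"print('hello')"

# Receiver 1 treats the blob as a program, and runs it.
exec(blob.decode())  # prints: hello

# Receiver 2 treats the blob as data: it just measures and stores it.
record = {"size": len(blob), "first_byte": blob[0]}
print(record)  # {'size': 14, 'first_byte': 112}
```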

Example Using Boolean AND

Consider the AND gate. An AND gate is a very simple piece of circuitry in a computer, one of a computer’s most basic logic components. It is a device that takes two bits in and produces one bit as output. In particular, it produces a 0 if either (or both) of its input bits is 0, and produces a 1 if and only if both input bits are 1. That is to say, it produces a 1 as output if and only if input 1 AND input 2 are 1. Note that the operation of the AND gate is symmetrical: it does not treat one input bit as different from the other: 1 AND 0 gives the same result (0) as 0 AND 1. Another way of saying this is that the AND operation obeys the commutative law. The operation of the AND gate is summarized in the following truth table:

input 1    input 2    input 1 AND input 2
   0          0                0
   0          1                0
   1          0                0
   1          1                1

But now let’s arbitrarily designate input 1 as the “control” bit and input 2 as the “data” input. Note that when we “enable” the control input (i.e. we make it 1), the output of the whole AND gate is whatever the data input is. That is, as long as the control input is 1, the data input gets passed through the gate unchanged, and the AND gate is effectively transparent. If the data input is 0, then the AND gate produces a 0. If the data input is a 1, then the AND gate produces a 1.

When we “disable” the control input, however (i.e. we make it 0), the output of the whole AND gate is always 0, no matter what the data input is. By holding the control input at 0, we turn off the transmission of the data bit. So the control input gets to decide whether to block the data input or let it through untouched. It is the gatekeeper. But (and here is the punchline) because of the symmetry of the AND gate, our choice of which input (input 1 or input 2) is the “control” and which is the “data” was completely arbitrary! The decision of which input is the prescriptive input telling the gate what to do with the descriptive input is purely a matter of perspective.
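The symmetry is easy to verify in code (a minimal sketch, my own illustration):

```python
def and_gate(a, b):
    """A two-input AND gate: output is 1 only when both inputs are 1."""
    return a & b

# The full truth table.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, and_gate(a, b))

# Treat input a as the "control": when control = 1, the data bit b passes
# through unchanged; when control = 0, the output is forced to 0.
assert and_gate(1, 0) == 0 and and_gate(1, 1) == 1   # transparent
assert and_gate(0, 0) == 0 and and_gate(0, 1) == 0   # blocked

# But the gate is commutative, so the choice of which input is the
# "control" and which is the "data" is purely a matter of perspective.
assert all(and_gate(a, b) == and_gate(b, a) for a in (0, 1) for b in (0, 1))
```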

Information Pokes, Pushes, or Nudges

If we are speaking in terms of information theory, even loosely, we are in the realm not of conscious humans, but of systems. These systems can have an internal state, and they communicate using information, or signaling of some kind, traversing a communications channel. The communicating systems may exhibit behavior and/or change their own internal state based on information they receive or access. In this realm, the comfort zone of the computer scientist, the information theorist, and the physicalist, strictly speaking, there is no such thing as representative, descriptive information—all information is ultimately prescriptive. It pushes, pulls, pokes, or nudges, or it is nothing at all. Insofar as information has any effect on a receiving system or information processor at all (that is, insofar as it is informative), it makes the processor do something. The data in an MP3 is an algorithm that commands a machine to construct sound waves that make up the music.

Think of a given piece of information as a physical thing—say, a tiny area on the surface of a computer disk that is magnetized one way or another way, indicating a 0 or a 1. If this area is to constitute information at all, it must be causally efficacious. That is, something else must do something, or not do something, or do something differently, because of the particular way that area is magnetized. For the magnetized area on the surface of the disk to be informative at all, it must make something else do something, just as a rock I throw makes a beer can fall off a fence post.

This sounds pretty prescriptive. Nothing happens by virtue of information simply being itself. At some physical level, it always comes down to the information (or more precisely, the information’s physical carrier or substrate) pushing something else around, forcing a change on some other physical thing. Moreover, any physical system that forced the same kind of state change on the part of the receiver would thereby constitute the exact same information as far as that receiver was concerned.

A computer does what it does because of an algorithm, or a program in its memory. This algorithm is prescriptive information. It consists of a series of commands (opcodes), and the computer does whatever the currently loaded command tells it to do. The computer itself (or its CPU) comprises the context in which the individual commands have meaning, or rather the background dispositions which determine what each command will make the computer do. The data that the algorithm processes may be considered descriptive information, but to the extent that the computer’s internal state changes on the basis of the data it is processing, hasn’t the data dictated the machine’s state, and thus its behavior? “If x is greater than 43, open the pod bay doors”: isn’t x here an opcode, whose value tells the computer to open the pod bay doors or not? The “data” is either not there for you at all, or it makes you do something. It is the cue ball: it knocks into other balls and sets them on an inevitable course of motion. All data are opcodes.
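As an illustration of the point (the function name and the doors are hypothetical, borrowed from the example above), here is the pod bay door test as a tiny Python routine; the value of the “data” x selects the machine’s behavior exactly as an opcode would:

```python
def process(x):
    # The "data" x dictates what the machine does, just as an
    # opcode would: its value selects the behavior.
    if x > 43:
        return "open the pod bay doors"
    return "keep the pod bay doors closed"

# Different data, different behavior -- the data is prescriptive.
assert process(44) == "open the pod bay doors"
assert process(23) == "keep the pod bay doors closed"
```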

The prescriptive aspect of the supposedly descriptive data in a computer is obscured by the fact that the data lacks a clear, stable context in which its effects are felt, whereas the same CPU tends to do the same thing each time when given the same opcode. The effects of different data are highly dependent on the current state of the machine. Nevertheless, after the data is read, the machine’s state is different because of the specific value of the data, and the machine will behave differently as a result. The machine acts differently because of this data, just as it acts differently on the basis of different opcodes in its algorithm. There is no principled natural distinction between the information that comprises the algorithm and that which comprises the “data” on which the “algorithm” operates.


If all information is, at heart, prescriptive, then what becomes of reference, or self-reference in particular? Lots of thinkers have been very interested in self-reference for the last century or so, but what is so special about it? Is it really so mind-blowing that I can look up “dictionary” in the dictionary, or that I can write “This sentence is false” on a post-it note? If information is prescriptive or algorithmic, then all supposed cases of referential loops turn out to be causal loops like the Earth revolving around the Sun, or the short computer program “start: do some stuff; go back to start.”

A computer routine that is recursive is one that calls itself, like the factorial calculator. Recall that, for instance, 5 factorial (written 5!) is 5 × 4 × 3 × 2 × 1, or 120. The computer program to calculate that looks something like this:

factorial(input)   # Assume 'input' is a natural number!
    if (input is 1) then return 1
    else return (input * factorial(input - 1))

(The asterisk in the above pseudocode means “times,” like the × symbol in the paragraph above.) When called and handed a particular number as an input parameter, this routine calls itself with the next lower number, which also calls itself with the next lower number; then finally, when the number reaches 1, it returns a 1, and the whole thing unwinds. This routine, then, is self-referential. But as far as the computer running it is concerned, there is nothing special or mind-bending about it. It neither knows nor cares that it is calling itself rather than a long series of separate routines. At each call, it just adjusts its Program Counter register to go wherever it is told to go, pushing some stuff on the stack. One hundred different routines, or one hundred calls of the same routine—it makes no difference to the computer. In this, the computer is right.
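To underscore that the self-reference makes no functional difference, here is the same calculation in runnable Python, once recursively and once as a plain loop (an illustrative sketch; the names are my own):

```python
def factorial_recursive(n):
    # Calls itself: "self-referential" at the source level.
    if n == 1:
        return 1
    return n * factorial_recursive(n - 1)

def factorial_iterative(n):
    # No self-reference at all -- just a loop.
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# To the machine it makes no difference: same inputs, same outputs.
assert factorial_recursive(5) == factorial_iterative(5) == 120
```

The recursion is visible only in the source text; at the level of what the machine actually does, it is just another transfer of control.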

An algorithmically implemented submodule is a deterministic, causal device. If it pushes a ping pong ball into its output tube, and the ball disappears, it’s gone. If, a moment later, a ping pong ball emerges from its input tube, it doesn’t make a bit of difference to the submodule whether that is the same ping pong ball or a different one sent from a distant submodule.

When we see a recursive computer routine, the Bertrand Russell in us kicks in, and we go: self-reference! Whoa… but the routine simply transferred control to another routine. The fact that the next routine is itself is not interesting and makes no functional difference. We have an intuition that self-reference is weird and special, but it is a mistake to suppose that a machine “acting on itself” must therefore be weird and special. We need to dig and figure out what self-reference means to us, and why it is weird and special in our case.

All Models Are Algorithms

There are theories of consciousness that regard consciousness as a product of the interaction of a system with an internal model within itself, kind of a homunculus without the infinite regress (I have already briefly discussed Metzinger (2003)). When we think we are examining our own experiences, and we run into some unanalyzable aspect of experience itself, and this strikes us as deeply mysterious, all it really means is that we are looking at our own internal avatar of ourselves and querying our own queries of it. No wonder we get tied up in knots! But let’s not get carried away and go off the metaphysical deep end. What sort of additional information does an internal model provide the larger system that it could not have derived on its own (given the external stimuli), and how does this additional information confer consciousness?

It seems that if we have a system that contains an internal model, we could optimize it a bit, and integrate the model a little more tightly into the rest of the system. Then maybe we could optimize a little more, and integrate a little more, all the while without losing any functionality. How would you know, looking at such a system, if it just didn’t have an internal model anymore, or it did but its model was distributed throughout in such a way that it was impossible to disentangle it from the rest of the system? In the latter case, what power did the notion of the internal model ever have? The problems with thinking that there is something special about self-models are similar to those that plague higher-order thought theories: once you separate out some aspect or module as special to the system as a whole (whether you call that thing a self-model or a higher-order thought), the specialness really comes from the communications channel between that module and the rest of the system, and we are right back where we started.

Internal Models as Black Boxes

Let us assume a conscious system that has a distinct model (either a model of itself, or a model of the world, or a model of the world including itself—whatever kind of model deemed necessary to confer consciousness). In good functionalist fashion, let us denote this in our schematic diagram of the whole system with a black box labeled “model.” You ask it questions, and it gives you answers. Between the “model” box and the rest of the system is a bidirectional communications channel or interface of some kind. This kind of thing is often denoted in schematic diagrams as a fat double-ended arrow (like this: ⇔) connecting the “model” box and the box or boxes representing the rest of the system. Think of it as a cable, perhaps a very fat cable, capable of carrying as much information as you like. Let us call this interface, the cable itself and the conventions we adopt for communicating over it, the API (for Application Programming Interface, a term borrowed from computers). This API may be quite complex, perhaps astronomically so, but in principle all communication between the rest of the system and the “model” box can be characterized and specified: the kinds of queries the rest of the system asks the model and the kinds of responses the model gives, and the updates from external stimuli that get fed into the model.

People who believe in these sorts of theories generally claim that the rest of the system is conscious, not the model itself. Because, by hypothesis, all communication between the (purportedly conscious) rest of the system and the model takes place over the API, the consciousness of the rest of the system comes about by virtue of the particular sequence of signals that travel over the API. As long as the model faithfully keeps up its end of the conversation that takes place over the API, the (conscious) rest of the system does not know, cannot know, and does not care how the model is implemented. It is irrelevant to the rest of the system as a whole what language the model is written in, what kinds of data structures it uses, whether it is purely algorithmic with no data structures at all except for a single state variable, or even purely table-driven in a manner similar to Ned Block’s Turing Test beater. It could well be completely canned, the computational equivalent of a prerecorded conversation played back. As far as the rest of the system is concerned, the model is a black box with an interface. Let us just think of it, then, as an algorithm, a running program.

Once you separate the model from the rest of the system conceptually, you necessarily render it possible (in principle) to specify the interface (API) between the rest of the system and the model. And once you do that, there is nothing, absolutely nothing, that can happen in the rest of the system by virtue of anything happening in the model that does not manifest itself in the form of an explicit signal sent over the API. Anything that properly implements the model’s side of the conversation over the API is exactly as good as anything else that does so as far as any property or process in the rest of the system is concerned. All that makes the model a model is the adherence to the specification of the API. The model is free, then, to deviate quite a bit from anything we might intuitively regard as a “model” of anything as long as it keeps up its side of the conversation, with absolutely no possible effect on the state of the rest of the system.
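A toy sketch may make the point vivid. The class and query names below are hypothetical, but the structure is the one just described: two “models,” one with at least some internal structure and one that is a bare canned lookup table, are indistinguishable to the rest of the system, which sees only the API:

```python
# Two implementations of the model's side of a (toy) API.
# The query is "what color is X?"; all names are hypothetical.

class ComputedModel:
    # Something that at least looks like a model: stored structure
    # that answers are derived from.
    def __init__(self):
        self._facts = {"apple": "red", "sky": "blue"}

    def query(self, thing):
        return self._facts.get(thing, "unknown")

class CannedModel:
    # The computational equivalent of a prerecorded conversation:
    # a bare lookup, no structure, no "modeling" at all.
    def query(self, thing):
        return {"apple": "red", "sky": "blue"}.get(thing, "unknown")

def rest_of_system(model):
    # The rest of the system sees only the API: query in, answer out.
    return [model.query(t) for t in ("apple", "sky", "cloud")]

# Identical behavior over the API means the rest of the system
# cannot, even in principle, tell the implementations apart.
assert rest_of_system(ComputedModel()) == rest_of_system(CannedModel())
```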

As any model-based system can be fairly characterized in this way, I have a hard time seeing what intuitive pull this class of theories has for its fans. Remember, what we are looking for is something along the lines of “blah blah blah, the model gets updated, blah blah blah, and therefore red looks red to us in exactly the way that it does.” What magic signal or sequence of signals travels over that API to make the system as a whole conscious?

In information systems as traditionally conceived, there are no models, no representations, no data. It is all algorithm. As engineers, we may find it useful to draw a line with a purple crayon and call the stuff on the left side “data” and the stuff on the right side “algorithm” or “processor,” but this is not a principled distinction. It is ad hoc, a may-be-seen-as distinction. Any theories of mind that depend on certain kinds of “models” or “representations” being operative then degenerate back into strict functionalism, since the models they speak of turn out to be just more algorithm, as if they were utility subroutines.

The Algorithmic Intuition

Where does the intuitive appeal of philosophies like representationalism come from? Part of it, I think, is the idea that the system, the processor or algorithm, can respond dynamically to the representation, the data. We have a sense that the algorithm has a certain identity, and that to the extent that it opens the door and invites data in to manipulate its own internal state, it does so under its own control. This intuition loses some of its strength when you fold the “data” into the algorithm (hardcoding the data), however. If you take the data upon which the algorithm is presumed to operate dynamically and declare it to be just part of the whole algorithm, the algorithm doesn’t seem quite so dynamic anymore.
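Here is the folding trick in miniature (Python, with hypothetical names, reusing the pod bay door threshold): a data-driven routine and one with the “data” hardcoded are behaviorally identical.

```python
# Data-driven version: the algorithm "consults" a piece of data.
THRESHOLD = 43  # the "data"

def check_data_driven(x):
    return "open" if x > THRESHOLD else "closed"

# Hardcoded version: the same "data" folded into the algorithm itself.
def check_hardcoded(x):
    return "open" if x > 43 else "closed"

# Behaviorally identical -- the line between data and algorithm
# was a matter of bookkeeping, not of principle.
for x in range(100):
    assert check_data_driven(x) == check_hardcoded(x)
```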

Algorithms are deterministic. Or rather, their physical manifestations are exhaustively described by the laws of classical physics. They barrel along on steel rails of causality. If you look closely enough at them, there are no options open to them, no choices whatsoever. If I knock a beer can off a fence post with a rock, it falls to the ground. There is no way even of saying that an algorithm runs correctly or incorrectly. There is no sense in saying that an algorithm is true or false. It neither represents nor does it misrepresent. It just does. (Or rather, and importantly, whoever or whatever faithfully executes the algorithm, plus the algorithm itself, just does. The algorithm itself just sits there.)

The intuition that there is a certain plasticity inherent in algorithms, that they could do other things than what they do, is a mirage. If I don’t throw the rock, the beer can will stay on the fence post. While it may seem that an algorithm could behave differently given different data to operate on (if x equals 23, the pod bay doors stay closed), it would also behave differently if some of its subroutines were rewritten (if x equals 86, activate the espresso maker).

When people speak of algorithms manipulating representations, and look to them for the special sauce of consciousness, or anything philosophically big and fundamental, they are projecting intuitions about the mind outward into other stuff. Outside of certain limited technical contexts, the whole idea of the algorithm is a sneaky modern form of animism, an attempt to breathe life into cold, dead Shannon information, made of Newtonian physics, to make it jump up and run around, to give it some inherent motive power, while denying motive power to the “data.”

The Data Intuition

And what of our intuitions about data, as opposed to the algorithms that process it? Let’s introspect for a moment and ask what is going on in our minds when we are aware of “raw” information, stuff that we imagine just sits there for us to know or perceive. (Without going down the rabbit hole of distinguishing between (mere) belief and knowledge, I am going to use “knowledge” here as a generally appropriate term that captures what I mean by the somewhat clunky “descriptive information” in our minds.) What is that like, and what makes us think we possess “descriptive” information or knowledge?

Starting with a qualitatively loaded example, I know that fire is hot, even when I am not near a fire. What does this particular piece of information consist of in my mind? How would I describe it? I would probably say something like, if I reached my hand out into a fire, it would burn me; if I put a piece of metal in a fire, it would get hot, and it would burn me.

I also have a piece of information in my mind that Paris is the capital of France. If asked to describe this knowledge, I would say that if I got on an airplane and went to Paris, I would end up in the capital of France, and I would have a whole lot of experiences that validated that. I know the cast resin garden Buddha is hard, and I know this with certainty—this is a piece of information I possess. What does it mean that I know this? I have an immediate, palpable sense that if I were to touch it, if I were to drum my fingernails on it, if I were to rap it with my knuckles, it would feel hard.

There is a common theme here, and that theme is hypotheticals. There is a whole lot of “if, if, if” in my descriptions of my own knowledge. Some of our expectations regarding these hypotheticals are immediate and sensual, while others are complicated and a little more abstract. I know that I have a certain balance in my checking account: if I tried to buy a roller coaster for my back yard, the debit would be declined.

If/then clauses have a decidedly algorithmic, prescriptive ring. One associates them with computer programs. To resolve them, you run through the cases. You compute. Could it be, in fact, that we do not actually know in the direct sense that we think we do, for instance, that the garden Buddha is hard? Could it be that we only cognitively judge ourselves to know, and have a very good system for coming up with justifications on demand? If this were true, our “knowledge” of something would really just be a warm, fuzzy confidence that we know rather than what we normally think of as true, immediate, internalized descriptive information. As I pointed out above, in the antiseptic realm of information processors communicating over a channel or accessing internal memory, there is no such thing as truly descriptive information, if “descriptive information” only ever actually informs by being activated and poking, pushing, and nudging. In the case of our minds, could it be that this activation involves running the “descriptive information” through some imaginary cases and getting results?

When does complete, just-in-time predictive power and confidence in your mastery of the hypotheticals become essence? How do you know that you know, really? Even something as seemingly definitional as 2 + 2 = 4? You feel certain that you grasp the meaning, and its inherent truth, all at once, but this is an appeal to introspective intuition. As a qualophile, I’m all for appeals to introspective intuition in such cases, but qualophobes often engage in intuition-shaming, so we should, in fairness, subject this one to the same sort of scrutiny.

This take on descriptive information is analogous to what Daniel Dennett thinks about qualia. He claims that we don’t actually directly experience in the way we think we do, but we (merely) judge ourselves to experience. We actually have a really good mechanism for answering any questions immediately about our field of “experience”, and we tell ourselves cognitively that we experience “directly”. Could our own knowledge of things be that way?

No, and for the same reasons that Dennett is wrong about qualia.

What Is It Like to Know?

I can know that the Buddha is hard, and really sense my knowledge of its hypothetical hardness without actually taking the time to run through any of the imaginary scenarios of touching, drumming, rapping. I’ve already talked about how odd it is that we can have a single thought that has temporal extension or flow built right into it, and how a smeared-out process becomes a unitary thing, grasped all-at-once. In our minds, the prescriptive becomes descriptive. Process becomes thing. We see the algorithm from above, without running through the if…then… cases like mice in a maze. For us, if…then… is not a matter of execution paths, but a more holistic, from-above ifthenishness. The counterfactuals are not just our way of expressing or explaining our knowledge, but are right there, baked into the knowledge itself, and into our sense of having that knowledge. There is a what-it-is-like to know the Buddha statue is hard. I know the Buddha statue is hard with the same sort of certainty that I know that it is hard when I am actually stubbing my toe on it. I am directly acquainted with my knowledge of its hardness as a piece of descriptive information.

Moreover, I know that I know it is hard. The troublesome second-orderliness of knowledge mirrors that of qualia: seeing red seems inseparable from knowing that you see red, just as knowing that Paris is the capital of France seems inseparable from knowing that you know that Paris is the capital of France.

In fact (and if you have been following along you probably saw this coming), I’ll take it to its next logical step: knowledge is a quale. Like a lot of qualia, it is a complex all-at-once kind of quale. Interestingly, it is also a Lego-stackable quale, in that it constrains or modifies, or calls into being, other qualia. Knowledge applies itself on the fly as the situation calls for it, or seems to present an opening for such application, and incorporates all those implied hypothetical scenarios instantaneously in some way, so that they don’t actually have to play out through time in your mind. The ways in which a piece of knowledge can construct or constrain other thoughts you might (or might not) come up with is an inherent part of the knowledge itself. Pieces of knowledge seem to insert themselves and stack and self-organize as appropriate. They are active, and interactive.

Descriptive Information Is a Projection, and Weird

The intuition that an algorithm stands aloof, regarding inert data and making choices based on it but not dictated by it, and that data and algorithm are somehow different, is an anthropomorphism. We project our own subjective, introspective experience outward. It’s not wrong or silly! It only becomes silly when we try to bleach out any trace of the source material. We do create and use descriptive representations in our minds. We experience this every moment. We, as conscious minds, have a strong sense of having a separate identity from the simulations of reality we create and tinker with in our heads. We feel that we stand back from our models, our data, our pieces of descriptive information, and regard them, and make decisions based on them.

This sense, however, isn’t quite as trustworthy as it seems. As William James said, the thoughts are the thinkers. As I have said, the apparent distinction between the self thinking and perceiving, and the stuff thought or perceived, must be something of an illusion, under pain of infinite regress of the homunculus in the Cartesian Theater. Whatever the self is, it must incorporate any “descriptive” information it is aware of into itself as part of itself, even as it thinks of itself as standing back and regarding thoughts, percepts, and bits of knowledge. The players are the audience, after all. It is this illusion, this fantasy image, that we project onto algorithms and data.

If physicalists want to deny qualia as fundamental, they should examine information too. They must give up the algorithmic intuition (doing vs. representing): that certain information does stuff and has any choice about what it does, and that certain other information doesn’t do anything but is done to. To a physicalist, all information is purely prescriptive, deterministically so. Which is fine, but “information” then becomes either weak or (philosophically) boring, and it becomes pretty hard to say that consciousness all comes down to information (or the processing thereof).

There are descriptions in the universe. They just aren’t information, in the strict, Claude Shannon, information theory sense. That is to say, information takes on its descriptive, representative aspect only when we create it and take it in all-at-once; when in our minds, it is something other than a cluster of dispositions manifested in a series of hypothesized scenarios to be played out algorithmically, and is, rather, a single thing, a partless whole. Importantly, a single partless whole that in some funny, as yet poorly understood way, incorporates those dispositions and scenarios in a qualitative, all-at-once comprehension. This ability of ours, as I have argued, is a unique, spooky, mysterious thing minds and only minds do, like seeing red. As with the redness of red, it is hard even to talk about it in precise terms, which is all the more reason to try to talk about it, being honest with ourselves about the limitations of our usual ways of talking.


Reference: Picking Out

Usage is right
Usage wins
All language is folk language
All language is slang

Philosophy of language has been quite an active field for the past century or so, and understandably there is considerable overlap between it and philosophy of mind. It is hard to talk about words, sentences, and their meanings without running up against questions about concepts, and how they are created and manipulated in the mind. Likewise, it is hard to ask how the mind works without running into questions about how it manipulates symbols and how the symbols it manipulates may affect its working in turn. We may not think entirely in words, but there seems to be a strong connection between the way we think and the way we articulate.

As we think about how, say, reference works, our approach ought to be that of an investigator reverse-engineering any complicated phenomenon. We should convince ourselves that we have some decent examples of the phenomenon in question, and then speculate about how that phenomenon comes about. Our conclusions should take the form of descriptions of the underlying mechanisms as they occur in the wild, without a lot of preconceptions about how they should work. There are no right answers about reference and meaning, except the plain truth about how they work in actual human minds. In particular, as I hope to show, we should steer clear of a beguiling scientistic Platonism as we think about such things, or at least bracket it, note it, and move on.

Extension, Intension, and Possible Worlds

Terms are about things. “Water” refers to, is about, water. “Cat” is about a cat, or cats in general. So far, so good. The stuff out there in the world that a term “picks out,” the actual cat(s) or the actual water, is called the extension of the term.

There are aspects of meaning that are not done justice by simply pointing out the extension of a term, however. Often there is, implicit in a term, not just what the term actually refers to, but how it refers to it as well. One of the most well-known examples is that of renates and cordates. Renates are creatures that have kidneys, and cordates are those with hearts. As it turns out, everything that has a kidney has a heart, and vice versa. So “renate” and “cordate” both have the same extension; they both refer to exactly the same set of actual animals. Nevertheless, it should be intuitively clear that the terms do not have exactly the same meaning. One can imagine a creature that is a renate but not a cordate, or a cordate without being a renate. The terms “renate” and “cordate” have perfectly distinct meanings, and it seems like an accident of nature that they happen to coextend.

If extension is the actual stuff that a term picks out, intension is how the term picks it out. Intension is the questions a term asks the world before it decides that some aspect or part of the world is denoted by that term or not. (If all this anthropomorphizing of terms themselves seems a little suspect to you, rest assured, I couldn’t agree more. Bear with me.)

To capture and formalize the idea of intension, philosophers have come up with possible worlds scenarios. Renates and cordates are the same creatures in our world, but there are possible worlds in which some renates are not cordates. To put it in mathematical terms, intension is a function from possible worlds to extensions. That is, to nail down a term’s intension, you let your imagination range over all possible worlds, and for each possible world, you determine what the extension of the term would be in that world. When you are done, you have the original (infinite) set of all possible worlds and, for each one, the extension of the term in that world. The resulting (infinite) set of pairings completely captures the term’s intension, which comes much closer to the term’s meaning than simply specifying its extension in our world. Got that?
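For what it is worth, the mapping can be caricatured in a few lines of Python (the worlds and creatures are invented purely for illustration): an intension is just a function that, handed a world, returns an extension.

```python
# A toy, wildly simplified model of intension as a function from
# possible worlds to extensions. Worlds and creatures are hypothetical.
worlds = {
    "actual": {"dog": {"kidneys", "heart"}, "cat": {"kidneys", "heart"}},
    "w1":     {"dog": {"kidneys"},          "cat": {"kidneys", "heart"}},
}

def extension(world, predicate):
    # The set of creatures in 'world' that satisfy 'predicate'.
    return {c for c, organs in worlds[world].items() if predicate(organs)}

renate = lambda organs: "kidneys" in organs
cordate = lambda organs: "heart" in organs

# In the actual world the terms coextend...
assert extension("actual", renate) == extension("actual", cordate)
# ...but their intensions differ: some possible world tells them apart.
assert extension("w1", renate) != extension("w1", cordate)
```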

This talk of possible worlds has always struck me as a clunky and extravagant way to talk about why we use the terms we use the way we do. Surely when ordinary language users use a term like “renate,” infinite sets of possible worlds do not actually play any role in their mental processes. I detect a whiff of Platonism—the faith that reference is something real (albeit non-physically, or metaphysically, real), something we could have theories of, theories that could be objectively right or wrong independent of our mental processes. Be that as it may, if infinite sets of possible worlds seem a bit unwieldy, as we will see shortly when I discuss two-dimensional semantics, it gets worse.

Moreover, talk of possible worlds often seems to assume that picking out the extension of a given term on a particular possible world is unambiguous. There may well be worlds in which a given term simply might not have an extension at all, but on the ones in which it does, there are generally seen to be no real problems picking it out, and there are no real problems telling which are the worlds in which the term has an extension in the first place. All that matters is the final answer: that crisp, neat mapping of possible worlds to extensions that defines the intension.

If possible worlds are interesting fodder for speculation at all, it is because of the ambiguous cases. Are terms defined absolutely, because of some inherent essence of the thing described? Or are terms (and concepts, for that matter) defined relationally, in terms of their functional interactions with other things? Was John Muir right: “When we try to pick out anything by itself, we find it hitched to everything else in the Universe.”? To the extent that we admit that our idea of what a thing is depends on its relations to other things (perhaps even, transitively, all other things), any difference a possible world exhibits from our own puts the burden of proof on someone who claims that a term is directly transferable from our world to that possible world. Could there really be Pepsi worthy of the name in a world with no Coke? Most people would say “probably,” but it gets tricky depending on context.

Who Is “Albert Einstein”?

In how many possible worlds is there an extension of the term “Albert Einstein”? What if there were a world just like our own, but the man we credit with discovering special and general relativity, and who adorns countless dorm room walls, was named Albrecht Eisenstein? What if there were a man named Albert Einstein who was raised exactly as our Einstein, in exactly the same family, with exactly the same genetics, but who made his living as a piano tuner, never entering the world of science at all? What if Albert Einstein discovered relativity, but was a blond Englishman? What if, in addition, his name was Edwin Chillingsworth? In how many of these worlds (and any of the others that we could come up with for hours and hours) can we definitely pick out the extension of the term “Albert Einstein”?

It depends on the kind of conversation we are having. Sometimes, even with proper names (the paradigmatic examples of what are called “rigid designators”), we are speaking more abstractly, sometimes less. Moreover, when we speak abstractly, or figuratively, we do not always carry out our abstraction along the same axes, abstracting away the same kinds of details as we might at other times, in different conversations.

Let us imagine, for example, that there is an alliance of advanced civilizations that calls itself the United Federation of Planets. This Federation never makes overt contact with a newly developing civilization until that civilization is on the verge of inventing warp drive, which would allow the civilization to explore the cosmos. In the midst of clandestinely monitoring an emerging civilization, a Federation captain might have a conversation with his First Mate in which he asks, “Have they had their Albert Einstein?” This might be a slightly awkward way to phrase the question, but nevertheless it would be reasonably unambiguous, and the First Mate could answer “yes” or “no,” perhaps following up with some detail as to the exact state of the civilization’s scientific development. Obviously, the captain was speaking somewhat abstractly. He did not mean to ask if the civilization had produced a wild-haired, slightly comical man born in 1879 in Ulm, Germany. If the planet being watched was populated with gelatinous green blobs that communicated through their highly developed sense of smell, and had no ears or eyes, the First Mate could still perfectly truthfully answer “yes” to the captain’s question. The captain is interested in certain of Einstein’s characteristics, but not others.

On the other hand, what if Mileva Einstein (Einstein’s first wife) found herself sucked out of our universe through a wormhole and ended up on the bridge of our Federation starship? Once it was clear that she had no hope of ever returning to her own world, she might ask, “Does this world have an Albert Einstein?” She would not take yes for an answer if the Albert Einstein being referred to was a gelatinous green blob that had discovered relativity. She might very well, however, take yes for an answer if the Albert Einstein were our piano tuner. She is also speaking abstractly, but she is abstracting along different lines than the Federation captain is. Both abstractions are perfectly valid, in their respective contexts.

So it is not enough to think that we may speak of something either abstractly or specifically. It is not even enough to see that we may speak more or less abstractly, along a continuum. In different contexts a term may be abstracted along different lines, holding different properties as essential. That is, we cannot even talk about speaking abstractly, even allowing it to be a matter of degree rather than of discrete states, unless we know who is doing the abstracting: what their interests are, what they consider the essential properties of whatever they are talking about, and what they assume their audience will consider essential.

Ours is the only world we are forced to deal with, and it quickly becomes clear if someone is flat-out using a term to refer to something that other people would not use the term to refer to. But as soon as we enter the realm of possible worlds, we open the door to legitimate disagreements, for a given world, as to what constitutes the extension of a given term. Once we start hypothesizing in this way, it is often by no means obvious whether the extension of a term exists, or exactly what its extension is in a given world. There may be no way, even in principle, of answering these questions absolutely, depending on the context of the usage, and depending on who the speakers and listeners are, and what their interests in communicating are. It is these sorts of inherent ambiguities that possible worlds scenarios should get us talking about, but which most possible worlds thought experiments ignore. One of the best known of these thought experiments is Hilary Putnam’s Twin Earth.

Putnam’s Twin Earth

In his widely cited paper “The Meaning of ‘Meaning’” (1975), Hilary Putnam argues against the sort of internalist characterization of meaning that I argue for. Putnam’s most memorable example is a possible worlds scenario involving a hypothetical Twin Earth. Twin Earth is just like our Earth, perhaps even including a twin me and a twin you, with one exception: on Twin Earth, the substance that they call “water,” while drinkable, odorless, transparent, and in all other “superficial” ways identical to our water, is not made of H2O. It is instead made of some other chemical compound, which Putnam abbreviates as XYZ. The question that presents itself immediately, of course, is whether or not XYZ is really water.

Putnam flatly asserts that it is not. If water is H2O, then the extension of the term water is the set of all quantities of H2O, anywhere in the universe that they occur, and nothing else. Anyone who uses the term water in such a way that it has a different extension is simply wrong. Putnam’s main point is that, as he put it, “meaning ain’t all in the head.” My twin and I may be in identical mental states as we use the term water, but we mean different things by virtue of the fact that our respective uses of the term water have different extensions. For Putnam, the meaning of a term depends crucially on its extension.

Putnam also says that before about 1750, no one knew that water was H2O, even though it really was. If it turned out that some, but not all, “water” on Earth was really XYZ, it would thus turn out that people who had referred to quantities of XYZ as water (the pre-1750 people) were wrong all along. Putnam claims that the usage by pre-1750 speakers of the term “water” to denote XYZ would be retroactively invalidated by future scientific discoveries, even though they lived and died in a community of speakers, listeners, and readers who used the term with unanimous and unambiguous (to them) agreement as to its meaning. I find this claim downright bizarre.

Water is most likely a cluster concept—a collage of properties, memories, associations, nuances, connotations, descriptions, expectations, and “scripts” or algorithms for dealing with particular types of watery situations. All the elements of this collage tend to be correlated in our world, so we draw a line around them with a purple crayon and slap a label on them, water, and go about our lives. We don’t have to consider the relative importance of the different elements of the collage (in terms of being defining characteristics of the collage) until some clever philosopher contrives a fanciful thought experiment, and asks us to consider the collage if one of its elements were removed or changed.

In Putnam’s thought experiment, the element that is swapped out is the fact of water’s microphysical constitution, a fact that most of us learned in high school but which has little impact on our day-to-day lives. I suspect that many of our concepts are loose aggregates in this way, and that, because their separate components or properties tend to be correlated in our experience, we assume that the entire cluster is much more tightly integrated than it necessarily is. How many things could turn out to be different about water before you’d really feel that you could no longer call it “water”? Do you know how much water weighs? What if it were a hair heavier than you thought or a hair lighter? What if it had some weird magnetic properties you had somehow managed to avoid hearing about until right now? What if you just read that in certain fields, generated in high-energy physics laboratories, water turned orange and viscous like maple syrup? These things might surprise you, but they would hang like Christmas tree ornaments on the core concept “water.”

Other, more abstract concepts are more tightly integrated in our minds. For instance, there are no superficial properties of the concept “three.” There is not a thing you know about the mathematical concept of three that you could change without inarguably wrecking the whole thing. If you change a whisker on three, it just can’t possibly be three anymore. Water might glow in the dark (but only in the southern hemisphere during a lunar eclipse) and possibly still be water, but a number that is exactly like three but not prime just isn’t three.

Because we on Earth have only ever been exposed to water as H2O, we have not had to consider the possibility, but perhaps we have a big-tent concept of water. Maybe water is multiply realizable, like the term building. Buildings, after all, get to be buildings by virtue of their use, their functional characteristics, but can actually be constructed out of a great many things. We think of water as being H2O, because that is the only kind we have run up against, but maybe water made out of XYZ would not faze us.

On the other hand, we have strong intuitions that what something is made of, even if we can’t see it and have no direct evidence of it without sophisticated equipment, has a lot of authority in deciding what it really is. So maybe XYZ isn’t water after all, and the microphysical constitution element of the collage trumps all the others. I don’t know, and neither does Hilary Putnam. The question is a sociological one, not a philosophical one. We could send a colony to Twin Earth, give them full knowledge of the chemical difference between Earth water (H2O) and Twin Earth water (XYZ), and let them go for a generation or two, and check back to see if they call both substances water or if they have come up with another term for the XYZ kind of water. Maybe they all use the term water for both kinds of stuff, but every now and then an annoying pedant among them corrects people, the way some people tend to compulsively point out split infinitives. Maybe both H2O and XYZ get to be called water in everyday conversation, but the scientific journals use some long Latin names for the chemical formulas on those rare occasions when they need to differentiate between the two. However it goes, there’s your answer.

We cleave our concepts along lines that are important to us. Microphysical constitution is important to us, so it gets a relatively high ranking. We have found it useful or satisfying in some way to let this criterion determine the extension of water. We have been told a very plausible physical story about the world around us, one involving atoms and molecules, and we believe it (for good reason). So when we make distinctions among the things in our world, we tend to give credence to distinctions rooted in this story.

The point is that any authority or importance microphysical constitution has in determining whether something is water or not derives from our goals, rules, and conveniences, and not from any immutable natural laws or any Platonic Meaning Of “Water.” It is a vanity of a particular scientistic Platonism that holds that if we only had a correct philosophy of reference coupled with correct physics, we would be in a position to determine what any given term really means. There is no “really means.” It makes no sense to speak of the meaning of a term unless you know who is doing the meaning and why. As Oliver Wendell Holmes put it, “A word is not a crystal, transparent and unchanged; it is the skin of a living thought and may vary greatly in color and content according to the circumstances and the time in which it is used.”

In terms of how I use the concept of water in my everyday life, and how I use the term “water,” the fact that it is made of H2O may well be a rather obscure piece of trivia. To assume that the reductive taxonomies of the hard sciences map precisely to our cognitive structures is scientism, pure and simple: “Now that we’ve figured out the science, we can finally refer correctly!” Meaning, as we create it in our minds, might not work like that.

Saul Kripke

Saul Kripke, in a series of lectures collected in Naming and Necessity (1972), notes that at some point scientists figured out that whales are not fish, and that that was really the right way to talk about it. They did not change the standard usage of the words “whale” and “fish”; they corrected the standard usage. Moreover, most reasonable people at the time would have quickly acknowledged this, upon being told of the biological details involved. This is because, as Kripke says, an interest in natural kinds was built into the original enterprise of classification. When people coin and use terms, they like to think that they are thereby distinguishing fundamental types. Distinctions made in terms of our current best story about what it means to be a fundamental type are ones we like to formalize in our language. Right now, for most of us, that story is the one about microphysics.

Kripke’s main target is what he called the Frege/Russell understanding of meaning, which he characterizes as identifying a term with a bundle of descriptive properties. I said above that water is a cluster concept. Kripke says that Frege and Russell (that’s Gottlob and Bertrand, respectively) would agree, and they would identify “water” with the cluster. That is, to Frege and Russell, the term “water” is just a shorthand for that cluster of properties. A consequence of this, according to Kripke, is that if some of the properties in the cluster turn out to be invalid, the whole term must be thrown out. Kripke’s take on Frege/Russell semantics is that the cluster does not have one of those clauses that lawyers stick into contracts saying, “Even if some clause herein is found to be invalid, the rest of the contract is still in full effect.”

One of Kripke’s examples involves gold. One of the properties of gold is that it is a yellow metal. According to Frege/Russell semantics (as characterized by Kripke), this is a definitional property of gold: it is one of the things that makes gold gold. What if, due to some highly implausible optical illusion, it turned out that gold was blue, and had been blue all along, but we had only thought it was yellow? Kripke rightly points out that we almost certainly would not say that, since gold had been defined (among other things) to be a yellow metal, this new discovery meant that gold did not exist, and we had some new blue metal in its place. Rather, we would just say that it turns out we were wrong, and gold is blue, not yellow.

Kripke says that when we link a term to a cluster of properties, we are not identifying the term with the cluster. Rather, we are fixing a reference with the cluster. When we coined the term “gold,” we referred right through the superficial properties by which we identified gold, to the actual thing or stuff that (as it were) lay behind those superficial properties. Any of the superficial properties could thus turn out not to be actual properties of the stuff at all, and that would not affect our reference. Stretching the point a bit (but not too much—he produces some pretty compelling examples), Kripke suggests that all the properties in the cluster could be not real properties of the referent, and the reference would still hold. We may use the cluster of properties to identify the thing referred to, but it is implicitly understood by all users of the term that the properties themselves are somewhat provisional, that the important thing is whatever it is that we (for the moment, anyway) believe possesses the properties. The properties are not the thing itself, but just a way of pointing out the thing.

This is a good example of the Platonism I spoke of earlier. The properties are the shadows on the cave wall, pointing in the direction of the reality that lies behind, or beyond the (mere) superficial cluster of properties. Kripke confronts head-on my claim that the coiners and users of a term ought to have the final say in deciding what counts as being picked out by that term. He illustrates his point using the common example of Hesperus and Phosphorus.

Hesperus and Phosphorus

“Hesperus” and “Phosphorus” are the terms the ancient Greeks used to denote the evening star and the morning star, respectively. Although the ancient Greeks (before Pythagoras, anyway) did not know it, both were actually the single object we now call the planet Venus. Kripke says that Hesperus and Phosphorus just are Venus, and always were from the moment the terms were coined. There may be worlds in which Venus does not exist, but there is no possible world in which Hesperus and Phosphorus are different objects from each other, or anything but the planet Venus.

Now I can imagine a possible world in which there are two distinct objects in the sky. Let us call them (with apologies to Dr. Seuss) Thing 1 and Thing 2. I bet I could arrange this world in such a way that if we were to teleport the ancient Greeks to that world, they would accept that Thing 1 is Hesperus and Thing 2 is Phosphorus. We should think long and hard before we say that the Greeks are simply wrong to call them that. They coined the terms, after all, to make distinctions that were important to them in their lives. They lived and died happily in their use of those terms. They used them with perfect (as far as their purposes were concerned) unanimity and specificity as to their meaning. I think that this gives them a fair amount of authority in deciding what the terms mean, and if they decide that Thing 1 is Hesperus and Thing 2 is Phosphorus, you had better make a very good case that they are wrong.

It is not enough to point out that the ancient Greeks’ scientific knowledge was wrong or incomplete. That is not what is at issue here. They would probably have changed their terminology if they had figured out that Hesperus and Phosphorus were both Venus. But for now I am interested in the Greeks that never did know that, and their use of their terms that they invented to make sense of their world as they experienced it and thought about it. They used the terms, and the terms had meaning for them. How did this meaning work?

Kripke says that the Greeks had ways of identifying Hesperus in the sky, and ways of identifying Phosphorus. But these clusters of properties, these ways of identifying them, are not what Hesperus and Phosphorus were, even to them. By coining the terms, the Greeks were fixing a reference to Venus, even though they did not know it at the time. In effect, they referred right through the properties by which they identified Hesperus and Phosphorus, to the actual thing behind them, namely the planet Venus.

Kripke’s arguments have some intuitive appeal. But rather than argue about whether the Greeks were really using “Hesperus” as shorthand for a bundle of observed regularities in the sensory input they received from their environment, or whether they were really fixing a reference to Venus, I’d like to take a step back and ask: on what basis could either claim be right or wrong? By virtue of what, exactly, can Kripke say that the Greeks were fixing a reference rather than identifying a cluster of properties?

When the Greeks used the term “Hesperus,” did they thereby instantly pick out something several light-minutes away, and if so, does this process of picking out violate relativity theory by traveling faster than light? Could we verify or disprove Kripke’s claims by building a device to detect the invisible meaning rays that connect a user of the term “Hesperus” to Venus? Of course not. Reference is not an actual physical process that happens in the real world. So if reference is not a process of physical causation, what is it? It is nothing. Nothing, that is, except some (admittedly mysterious) stuff happening in the mind. If you hear me use the term “water” (more physical causation, involving vocal cords vibrating, waves of pressure moving through countless air molecules, pushing on an ear drum, etc.), then I induce some stuff to happen in your mind. Some of this mental stuff may include certain “raw feels,” expectations, equivalence relations, tests, and who knows what all else. But it is mental stuff, in the mind only.

The only real questions about semantics concern what minds do under the influence of terms, both internally and externally generated. Put another way, once God created all the physical facts of the universe, as well as the facts about consciousness (or, depending on your outlook, including the facts about consciousness), there was no more work for Him to do to create all the facts about reference. Except insofar as it reflects something about how minds work, reference is an explanatorily useless concept. Moreover, I see no reason to think that it constitutes any kind of phenomenon in need of explanation beyond straightforward physical causation (except, again, insofar as it is a product of conscious minds, in which case it is very much in need of explanation, as are all conscious phenomena). So if reference is not a physical phenomenon, and does not even supervene on physical phenomena (reference travels faster than light, after all), and reference is explanatorily useless and does not itself constitute an explanandum worthy of the name, how is it that anyone could have a theory of reference that they claimed was “right,” and that other theories were “wrong”?

What does Kripke himself cite as the final authority to back up his claims about fixing references? He produces some good examples (like the blue gold described above) that incline us to think that his claim about “fixing a reference” accords with our intuitions about the way reference ought to work. Is this enough to convince us that reference really does work that way, though? When we do philosophy of reference, are we just being descriptivist dictionary compilers, coming up with concise articulations of everyday usage?

Ultimately, Kripke seems to think that his particular Platonic notion of reference goes through because we want it to. Perhaps it isn’t so much the case that Kripke thinks that this Platonism is objectively true of the universe, but rather that it holds true because all language users are Platonists at heart. As Kripke puts it, a desire to classify things into categories of natural kinds was built into the original enterprise of language use. We all go about our lives knowing that whatever clusters of properties we use to identify things are somewhat ad hoc, and subject to revision if we come across evidence that the underlying reality is different than what we thought it was.

When phrased this way, assuming I haven’t misunderstood and/or misrepresented Kripke, his arguments are not so different from mine. This reference-fixing, the Platonism, is not an actual feature of the universe: it is a fact about how our minds work, and our needs and desires with regard to language construction. We want to classify the world in certain ways, so we build that imperative into our notions of reference. The final authority for deciding that water is really H2O, then, is our goals and intentions in using language in the first place, and that’s why the Greeks were really referring to Venus even though they didn’t know it.

Unfortunately, I think this charitable reading does misrepresent Kripke. While he does talk about our desire to classify things in a certain way, it is pretty clear from the absolute way in which he phrases his claims that he thinks of reference as a really-there, actual fact of the universe sort of thing, in a robustly externalist way. It is necessary that Hesperus is Venus, and it is necessary that water is H2O, and the Greeks would be wrong to call my Thing 1 Hesperus, not because of caveats and codicils they had written into their original charter establishing the goals and rules of their particular linguistic enterprise, but simply because they would be absolutely, objectively wrong, and that’s that.

Modes of Presentation

Sometimes the notion of modes of presentation is invoked to solve problems like the Hesperus/Phosphorus situation. The idea is that while Lois Lane knows that Superman can fly, it would surprise her to discover that Clark Kent can fly. But Clark Kent and Superman are one and the same person (that is, the term “Superman” and the term “Clark Kent” have the same extension), so in some sense the claims that Superman can fly and Clark Kent can fly should convey exactly the same information. They both make the same claim about the same individual. To resolve the apparent conflict, it is argued that any given claim must be understood under the proper mode of presentation. Superman and Clark Kent may in fact be the same collection of molecules, but facts about them are subject to their mode of presentation, just as Hesperus and Phosphorus are both the same chunk of rock in space, but had different modes of presentation to the ancient Greeks.

As far as I can tell, “modes of presentation” is just a way of covering for incomplete or incorrect information. Lois Lane knows that Superman can fly but would be surprised to find that Clark Kent can fly because she walks around with an erroneous model of reality in her head in which Superman and Clark Kent are two distinct individuals. She has drawn incorrect inferences about the world. She has, in fact, been deliberately and systematically deceived by the individual who is both Superman and Clark Kent. Hundreds of issues of the comic over the years have been devoted to the elaborate machinations he employs in order to lie to Lois.

In the same way, sometimes you might read about Pierre, who has read that London is a beautiful city, one he would like to visit one day, but who once had to take a business trip to an awful, drab, and smoggy place called Londres. We are told that Pierre has been exposed to the same city in two different modes of presentation. I prefer to say that Pierre’s model of the world is simply wrong. He thinks there are two cities, when in fact there is only one. Once he has this one incorrect “fact” in his reality model, he fleshes out his placeholder templates for these two cities with a whole lot of provisional details, or knowledge of the fact that the details are missing, and he bases his expectations, desires, beliefs, etc., on this incorrect model of the world. Maybe someday he will correct the mismatch between his internal model and external reality, or maybe not. Either way, there is nothing deeply mysterious about any of this.

Any problems in thinking about these situations stem directly from the intuition of the invisible magic meaning rays that connect our thoughts and references with the outside world—the idea that reference is exclusively or even primarily some kind of instantaneous connection between something in our thoughts (or Lois Lane’s thoughts or Pierre’s thoughts) and the outside world. I do not know exactly what reference is or how it works, but if it is to have a precise meaning at all in the sense of being philosophically interesting or useful, it must be defined as a relationship of some kind between thoughts. Lois Lane’s term “Superman” refers to a Superman data structure (or, if you prefer, “concept”) in Lois’s mind. There is nothing problematic in saying that, for Lois, the claim that Superman can fly and the claim that Clark Kent can fly convey very different information because, for Lois, the “Superman” data structure is simply a different one than the “Clark Kent” data structure. She formed both by drawing inferences from lots of perceptual experiences she had. The data structures then contribute to her expectations of the kinds of perceptual experiences she is going to have in the future.
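The picture above can be put in deliberately crude computational terms. Here is a toy sketch in Python—my illustration, not anything from the philosophical literature—in which Lois’s reality model contains two distinct records, even though a single individual answers to both names. The names, properties, and the notion of a claim being “news” are all invented for the example:

```python
# Toy model (purely illustrative): Lois's internal model contains two
# distinct data structures, one per term, even though both terms in fact
# denote one individual in the world.
lois_model = {
    "Superman":   {"can_fly": True,  "wears_glasses": False},
    "Clark Kent": {"can_fly": False, "wears_glasses": True},
}

def is_news_to_lois(name, claim, value):
    """A claim is informative for Lois iff it conflicts with, or is
    missing from, the record her term picks out internally."""
    return lois_model[name].get(claim) != value

# "Superman can fly" tells Lois nothing new; "Clark Kent can fly" does,
# because the two terms index different internal structures.
print(is_news_to_lois("Superman", "can_fly", True))
print(is_news_to_lois("Clark Kent", "can_fly", True))
```

Nothing here appeals to a relation between Lois and the external world at all: the difference in informativeness falls straight out of the fact that the two terms key into two different internal records.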

The Contents of Our Thoughts

An idea closely related to invisible meaning rays and Platonic Meaning is that of the content of our thoughts. Contents are often said to be carried around in vehicles. That is, a proposition, abstractly construed, is the content, and its particular articulation in language and/or thought is its vehicle. The content of a thought is a lot like the extension of a word. It is whatever the thought is “about.” I find the term at best to be a strong pretheoretical nudge in a particular direction, and at worst grossly misleading.

I may have a Honda Civic, i.e. a vehicle. If I put a cake in the Civic, then the cake constitutes the contents of the vehicle. I could have put the cake in a different vehicle, in which case that other vehicle would have had the same contents that the Civic now has. Or I could have put some old newspapers in the Civic, in which case the same vehicle would have different contents. The vehicle is blank, empty, until I put some contents into it. These are the sorts of images and relationships we drag into play as soon as we invoke the highly loaded terms “content” and “vehicle.” I have thoughts, that is all. As far as I can tell, I have no separate “contents” of those thoughts.

Picking Out, Functionally

Extension is the stuff in the universe that a term “picks out.” Of course, terms do no such thing. With apologies to the National Rifle Association, terms don’t pick things out, people do. Extension seems like a reassuringly concrete idea: the extension of the concept of water is a set of actual molecules out there in the actual world. But extension is not so clear cut. Putnam allows that determining extension requires an equivalence relation. We cannot specify all the occurrences of water on Earth without having a way of saying “all the stuff that is equivalent to this stuff here in this glass.” This equivalence relation, the criteria we use to decide if something is water or not in various real or imaginary scenarios, is the intension. Extension is supposedly concrete, while intension is rather more abstract (remember that intension is a function that maps possible worlds to extensions in those worlds).

But you can’t get to the extension without going through the intension. Thus extension is itself something of an abstraction: we can never, in practice, enumerate all the molecules of water in the universe, so we can never actually pick out the extension of the concept of water. We are always at a certain remove from anything’s extension; all we really have at our immediate disposal is intension. All we can really do is talk about the general kinds of things we would consider water. What we really are talking about when we use the phrase “the extension of water” is a bunch of tests we can apply to different situations, ways of applying some equivalence relation. Importantly, we apply those tests, we pick out the water. By itself, a term just sits there.
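The textbook definition of intension as a function from possible worlds to extensions can be made concrete with a toy sketch. The following Python fragment is purely illustrative—the “worlds,” substance names, and property lists are all invented—but it shows the structure of the idea: an intension is an equivalence test, and an extension is what you get only by applying that test to a world’s inventory:

```python
# Toy model (invented for illustration): an intension as a function that,
# applied to a possible world, yields that world's extension of a term.

def extension(intension, world):
    """Apply an intension (an equivalence test) to a world's inventory."""
    return {name for name, props in world.items() if intension(props)}

# Two hypothetical worlds, each an inventory of stuff and its properties.
earth = {
    "lake_stuff": {"clear": True, "drinkable": True, "formula": "H2O"},
    "pond_stuff": {"clear": True, "drinkable": True, "formula": "H2O"},
}
twin_earth = {
    "lake_stuff": {"clear": True, "drinkable": True, "formula": "XYZ"},
}

# Two rival intensions for "water": one keyed to superficial properties,
# one keyed to microphysical constitution.
superficial = lambda p: p["clear"] and p["drinkable"]
microphysical = lambda p: p["formula"] == "H2O"

print(extension(superficial, twin_earth))    # Twin Earth's XYZ counts
print(extension(microphysical, twin_earth))  # nothing counts
```

Note that in this sketch the extension is never given directly; it is computed, world by world, from the test. Which test is “the” intension of water is exactly the question Putnam and I disagree about.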

How do we know about water’s microphysical constitution, anyway? Most of us simply read it in a book or were told it in school and accept it. Some of us ran tests with instruments. Originally, sometime after 1750, someone ran such tests, and inferred the microphysical constitution of water from the results of those tests. But the results themselves, the raw data, are functional properties of water, facts about how water behaves in different circumstances. These sorts of properties are no different in kind than the results of the “tests” I run when I smell water, dip my hand in it, taste it, etc. The fact that in one case the instrumentation involved was built by people, and in the other case the instrumentation consists of devices I was born with (tongue, fingers, eyes, etc.) does not make any difference in terms of the type of property of water we are talking about. For Putnam’s Twin Earth thought experiment to go through, there must be at least some “superficial properties” of H2O and XYZ that differ. Otherwise, how would any scientist ever have told the difference? At some point, if you feed H2O into a mass spectrometer, you get one result, and if you feed XYZ in, you get a different result. Different raw data equals different “superficial properties,” just as much as if H2O and XYZ tasted different.

I suppose someone could still insist, for the sake of the argument, on hypothesizing a substance that behaved exactly like H2O as far as current science was able to determine, but which really was not H2O. I could take the standard cop-out that people sometimes take with thought experiments and demand details. I guarantee that no one could possibly specify such a situation at any satisfying level of granularity. But taking the standard cop-out would miss a more important point. It is, in principle, literally nonsensical to speak of something that behaves exactly like H2O but isn’t really H2O. As I and lots of others have pointed out, science doesn’t really claim, at heart, to tell us what is really going on out there in the world. It only specifies a bare schema, a circularly defined pattern of functional dynamics, but it is silent about what is doing all that functioning. To act exactly like an electron is to be an electron. There is no such thing, by definition, in principle, as something that acts exactly like an electron but really isn’t an electron. By the same token, there is no way something could behave exactly like H2O but somehow not be H2O.

When it comes right down to it, our relationship to the outside world is entirely functional. That is, we know everything we know about the world because of the world’s dispositional properties, its behavior. Water is as water does. There simply is no essence of water that does not manifest itself functionally, at least none we could ever know scientifically, even in principle. Any time we speak of reference with regard to something out there, we are talking about reference to a bundle of functional dispositions. This is functionalism turned on its head: it is not the mind that must be understood in functionalist terms, but the world. It is incoherent to speculate that XYZ and H2O do not differ in at least some “superficial” properties. The microphysical constitution that Putnam regards as the sole determinant of true wateriness is a story (a story that exists—yes, Hilary—in our heads) that we inferred from various superficial properties.

Being Scientifically Correct vs. Being Linguistically Correct

Now I happen to like that story. It is remarkably powerful and parsimonious in its ability to link all kinds of phenomena in the world, confer cognitive power upon us, organize our mental economy efficiently, and, ultimately, help us invent microwave ovens and rocket ships and all sorts of other things. But it is not the only imaginable story.

We should steer clear of the assumption that the pre-1750 people used their rough and ready conception of water only provisionally, and that they were waiting for science to tell them about water’s microstructure so they could be more precise. Pre-1750 people, whether or not they had ever heard of Aristotle, were basically Aristotelians. They already knew the elemental constituents of water—namely water. Water was simply one of the basic kinds of stuff their world was made of, and most people didn’t question whether or not water might be made of anything still more basic. Their understanding of science was wrong, but their ability to refer was working just fine.

What if there were a prescientific tribe of people somewhere that had two words for water? Water referred to the water from the river, which brought life and was good and blessed by the gods, but shwater referred to the evil water from the spring that was cursed. No amount of explaining that water was chemically identical to shwater would make them change their usage. Microphysical constitution is just an unimportant property to them compared to the essential goodness or evil of the water/shwater. The goodness or evil determines what the substance “really is.” What if we change the story a little, and suppose that our tribe isn’t prescientific after all? Let’s say they understand about chemistry and H2O, but still hold their religious beliefs, with full acceptance that there is no empirical basis for them. They have chosen a different property, a different element of the collage, to define the essential nature of water/shwater.

Lore has it that the Eskimos have 100 words for snow (the actual number seems to vary a lot depending on where you read this old chestnut). Let us imagine that one of their 100 words is spelled and pronounced exactly like our word “snow.” This is like the situation with the pre-1750 people calling both XYZ and H2O “water,” only with us playing the part of the pre-1750 people, riding roughshod over what to others (the Eskimos in this case) are important distinctions. It is not that we are right and the Eskimos wrong, or vice versa. We all just make the distinctions that are important for us to make, and we don’t waste time coining a lot of extra terms to allow us to split hairs we don’t have to split. A term is only as precise—can be only as precise—as is necessary to make the discriminations of interest to the community of users of the term. I (or my culture) define terms in the interests of setting up my linguistic palette to get the maximum cognitive or communicative bang for the buck. There is no right or wrong answer as to the narrowness or breadth of my definition of the term “water.”

If you ask me as an English speaker if XYZ counts as water, I may think for a moment or two and then give you my opinion, which I made up just then. I may then give you arguments for my opinion, which you may or may not accept. My opinion may or may not be in accord with that of the majority of the rest of my linguistic community. It may or may not even be in accord with the dictionary definition of the term “water.” But my answer is still just something I made up. Of course, that is what all language ever is—at some point, someone just makes stuff up, and other people adopt that convention in their speech. If, on the other hand, you asked me as a philosopher if XYZ really counts as water, I’m afraid I would have to ask you to rephrase the question, because, as stated, it is too loaded with presuppositions to admit a yes/no answer.

There is some stuff out there in the world (water), and our interactions with it have led us to attribute some “superficial” properties to it. We also have a story in our minds, an explanatory framework that we have found to be very useful (our current physical theories about atoms and molecules and such). Some of this stuff’s superficial properties have led us to infer that it fits comfortably into a particular place within this explanatory framework. The success of one particular scientific theory or another does not absolutely (and retroactively!) determine meaning. Whenever we have a collage of data (superficial properties), we infer a story to bind it all together. The story is the purple crayon we use to demarcate the collage. It is this story that we cling to as the determinant of meaning, the crucial defining characteristic of each of our concepts. It determines the equivalence relation, the intension, that in turn determines our tests for inclusion in or exclusion from the extension. This story, and thus meaning itself, is in the head.

The main point here is that the story about molecules and such, the explanatory framework, is entirely in our heads (although there is a strong likelihood that there are things out there whose dynamics map nicely to this framework). We cannot say what anything “really is” beyond where it fits into our explanatory frameworks based on its observed “superficial” properties. In certain contexts, I can be as smug about my superior scientific worldview as anyone. I can say, without batting an eye, that my way of parsing the world is flat-out better than that of the superstitious medieval peasant. It is more accurate, more cognitively economical, and has vastly more explanatory and predictive power than the peasant’s.

But that (possibly justified) hubris does not spill over into the realm of meaning. My saying that the peasant should have cared about the distinction between gills and lungs may be true, but irrelevant to what the peasant meant when he used the term “fish.” The fact that we like our reality model better has no bearing on the way our words relate to our (possibly superior) reality model as opposed to how the ignorant peasant’s words related to his reality model, and how those relations matched up with the corresponding relations in the minds of the other peasants. Given a particular framework or reality model, one that works well enough to get by for a community’s interests, terms mean relative to that shared framework or model, both in the way multiple people communicate with each other, and in the way terms enable or perhaps even constitute an individual person’s thoughts.

Putting Meaning Back in the Head

So what is going on in our minds when we use the term “water,” either saying it, hearing it, or thinking it? That is the million-dollar question. A very interesting question, yes, but a question about what is going on in here, in the mind, and not a question about any notion of “meaning” beyond that. I have characterized the concept of water as a cluster, a collage, but I have said that it involves equivalence relations or tests we apply to situations, and that it is delimited by a story that we infer from experience. Obviously, this all needs a lot of clarification. Do I even have one single thing in my mind that I can call my concept of water? Does it, strictly speaking, have a fixed identity that persists over time? If so, how much of it can you change before you feel compelled to call it a different concept altogether? Do concepts subsume other concepts? What part do qualia, the what-it’s-likeness of water’s wetness, its (lack of) taste, etc., play in all of this? How much relative weight does Kripke’s project of language use (that of dividing things into categories of natural kinds) have? These are the truly interesting questions about the limits of the meaning of the term “water,” but these are all straightforwardly questions about minds. There is a lot of stuff going on in our heads and it will take considerable work to sort it all out.

One thing we can speak of with confidence, however, is the relationship between all this mysterious stuff happening in our heads and the outside world. We do not directly perceive matter. There is a long, twisty causal chain that links certain events that happen in the physical world with percepts and concepts in the mind. Or, perhaps more suggestively, our concepts and percepts are constrained or influenced by these events. Until we understand the concepts in our heads better, the details of the influence of the external events upon them will remain murky, but the input channel itself is pure good old-fashioned physical causation.

Of central importance to any discussion of language and meaning is the notion of intentionality. Intentionality is the property of being about something else; it is sometimes informally defined as “aboutness.” Beliefs, desires, and propositions all have intentionality, while rocks and teacups do not. Intentionality is real, it exists as a feature of the universe. There are some things that really are, inherently, about other things. All such things, however, are exclusively in minds. In a purely objective, extrinsic, materialistic world, everything that happens does so strictly according to the laws of physical causation, like so many beer cans perched on fence posts hit with rocks. No matter how many beer cans you have, and no matter how they may be connected (with dental floss, perhaps), there is no inherent sense in which some set of them “come together” to be about another set of them. They just do what they do because they must, each of them blind to all the others, with no subset of them “representing” other subsets (or anything else for that matter) except insofar as we choose to see them that way with our conscious minds.

Sometimes it is convenient for us to speak and think as if things out there were really about other things (road signs about gas stations, for example), but this is a may-be-seen-as kind of thing, a way of talking about what is, at heart, lots of complex physical interaction. Left on their own, the mechanics of the physical road sign, and its interactions with photons of light, up to the point at which those photons interact with your nervous system, are well understood without recourse to any notions of “reference.”

So we have (1) molecules of stuff somewhere out there in the world in our rivers and streams. These molecules cause physical events to occur, which cause still other events, etc., until some event(s) in this chain ultimately impinge in some way upon (2) some mysterious things happening in our heads; and finally we have (3) our observable linguistic behavior, which presumably is caused or influenced by (2). We have a long way to go before we understand (2) and the exact relationship between it and (1) and (3), but once we do understand these things, there will be nothing left to explain about language and meaning.

It is sometimes said that meaning is merely mediated by causal connections between the outside world and our minds. I, however, would say that meaning just is those causal connections, plus some mysterious stuff happening entirely within the mind. Any talk of meaning beyond this has no explanatory or predictive power. There are no facts about the universe—either extrinsic, third-person “scientific” facts, or subjective phenomenal what-it’s-like-to-see-red-type facts—that are explained by assuming invisible magic meaning rays connecting our thoughts to trees, cars, and the Milky Way galaxy. The causal chain between physical events that happen in the world and the concepts we form in our minds may get very complex, but it is still just billiard balls knocking together. There is no other kind of connection between the stuff out there and our concepts in here. The problem with the term “extension” is that it strongly inclines us to believe that there is. It presumes a sort of spooky mystical connection between the collection of molecules of H2O in the universe and our internal concept of water. There is no such connection.

When someone points, they are telling you to do something—look over there. A reference is a pointer, and as such, it is prescriptive, not descriptive. It commands. Even this, though, gives it too much credit. It doesn’t actually do anything—it just sits there. It is a lot like an algorithm in this sense, and in fact is a degenerate case of an algorithm. As such, unto itself, it is neither true nor false; it neither represents nor misrepresents, it just does its physical clacking and bonking as do all physical things.

I have argued in this chapter that we should construe meaning and reference internalistically—that is, as an intra-mental phenomenon—and not externalistically—that is, as some kind of connection between minds and the world. Doing so allows us to zero in on whatever it is that makes it special or unique, as distinct from regular old physical causal bonking. To a physicalist, this could all be well and good, but it could still be an “easy problem” kind of cognitive trick, something evolution programmed into our brains, and thus a specialized sort of circuit, optimized for a certain task. In this case, it would be a special case of the aforementioned causal bonking. We all have an intuition of aboutness, or intentionality. I happen to think that there is something really there, something worth exploring and characterizing more precisely, that manifests itself in my qualitative sense of referring. And it is very much a qualitative sense. How do you know you refer at all? There is a what-it-is-like to mean. If intentionality is to be a really-there thing at all, it is a spooky, mysterious, in-the-mind-only kind of thing, like the redness of red. Like redness, it really exists, but in order to account for it properly we will have to overcome our unease at its spooky mysteriousness.


Reference: Turning Out

Two-Dimensional Semantics

Two-dimensional semantics is getting some attention these days. David Chalmers has been writing about it (2006), as have other people. The idea behind 2D semantics is that intension alone, characterized in the possible-worlds sense, does not quite capture meaning. Specifically, there are terms whose intension is the same (i.e. the terms pick out the same extension in all possible worlds), but that seem as though they have different meanings anyway. I’ll hand the mic over to Chalmers here:

According to Kripke, there are many statements that are knowable only empirically, but which are true in all possible worlds. For example, it is an empirical discovery that Hesperus is Phosphorus, but there is no possible world in which Hesperus is not Phosphorus (or vice versa), as both Hesperus and Phosphorus are identical to the planet Venus in all possible worlds. If so, then “Hesperus” and “Phosphorus” have the same intension (one that picks out the planet Venus in all possible worlds), even though the two terms are cognitively distinct. The same goes for pairs of terms such as “water” and “H2O”: it is an empirical discovery that water is H2O, but according to Kripke, both “water” and “H2O” have the same intension (picking out H2O in all possible worlds).

So Kripke’s claim (as paraphrased by Chalmers) is that because we now know that they are both just Venus, Hesperus and Phosphorus both must pick out Venus in all possible worlds, and so have the same intension (same extension in all possible worlds = same intension). Yet most people would agree that “Hesperus” does not quite mean exactly the same thing as “Phosphorus.” To accommodate this in our theory of semantics, the following reasoning is invoked. Because of the way our actual world turned out, Hesperus is Phosphorus is Venus, and this must hold true across all possible hypothetical worlds. But if we imagine for a moment that our actual world had turned out differently, and in our actual world Hesperus was a different object than Phosphorus, and then we let our imagination range across all possible worlds, we might come up with a different intension for each world so considered.

So essentially we set up a grid: first, along one axis (say, the vertical axis), we lay out all possible worlds, and imagine that, for each of them, that is the way our actual, real world might have turned out. Then for each of those (i.e. for each horizontal row on the grid), we do the old-school possible worlds exercise, considering each possible world as hypothetical (along the second axis, the horizontal one), given that the possible world on the first axis is being considered as actual.

2D semantics is motivated by the Platonic impulse: the certainty that what something “turned out” to be in our actual world somehow fixes its meaning absolutely for all time and in all contexts. Thus, in order even to toy with the idea that things might have “turned out” differently in our world, we have to add a whole new dimension to our already infinite array of possible worlds. So instead of simply (!) considering infinite possible worlds, you must consider infinite possible worlds for each possible world, with the possible world on the vertical axis imagined as the way the actual world “turned out.” If possible worlds scenarios are clunky, then 2D semantics is clunkiness squared.

Does anybody imagine that when a little kid learns a new term—say, “Mommy”—that kid constructs a two-dimensional array in her head and fills in all the spaces in that array with the appropriate intensions and extensions of “Mommy” in all possible worlds as demanded by 2D semantics? Of course not—no one thinks this. So if 2D semantics is not a theory of what actual language users do when they acquire and use terms in the real world, what is it a theory of, exactly? If 2D semantics is the answer, what was the question?

Turning Out

The whole point of needing a second axis (i.e. the second dimension) in 2D semantics is that in our world, renates all turned out to be cordates. Hesperus and Phosphorus both turned out to be Venus, and water turned out to be H2O. We may imagine possible worlds in which things could have “turned out” differently. This phrasing is misleading in that it draws a sharp distinction between a “superficial” acquaintance with the concept of water on the one hand, and what water “turned out to be” on the other. Water has not turned out to be anything. We could still find out all kinds of things about water that would surprise us. I could be in the Matrix with a cable jacked into the back of my neck, or in a “real” world in which physics is completely different, and in which there is nothing remotely resembling water. Perhaps in prescientific times, people’s conception of water underwent revisions along the way, before people figured out about atoms and molecules.

While sometimes we discover big important things about stuff we thought we already understood pretty well, the process of turning out is unfolding all the time, and is never finished. We never resolve symbols “all the way down.” A possible exception to this might be things that are defined as part of a self-contained system in which everything is circularly defined explicitly in terms of other things within the system, as in mathematics. But even then, we may still discover new truths and untruths within the system that reflect back on our original basic terms. In real life, concepts do not float free, then one day “turn out”. They are always turning out; they never stop turning out.

We have a set of empirically derived properties of water on the one hand (odorlessness, transparency, etc.) and another set of empirically derived properties on the other (inferred microphysical constitution), and these two sets of properties have always seemed to coextend in our world. When we let them float free of each other in our imagination, we have to decide for the first time which set gets to keep the tag “water,” like a judge deciding which of a divorcing couple gets to keep the house. Because there are two sets of properties, we need two axes in our n-dimensional grid, hence 2D semantics. There could be any number of sets of empirically derived properties of water, however, so the number two is arbitrary. We actually would need as many axes in our infinite grid of possible worlds as we can come up with logically independent sets of empirically derived properties.

Imagine a stone-age people who had a word, “poog,” that meant, to them, “tool or weapon.” As time went on, and the civilization advanced, the same term, “poog,” might come to mean more specifically “pointed stick used as a weapon.” Later still, it might mean “spear made of ash.” Would it be right, then, to characterize the situation by saying that “poog” turned out to mean a spear made of ash, and that it really had meant a spear made of ash all along? That the stone-agers who called a rock a poog turned out to be wrong? Would anything interesting be revealed about what meaning is or how it works by hypothesizing a Twin Earth in which the inhabitants used the word “poog” to refer to spears made of birch?

A pre-1750 person—say, Isaac Newton—had a significantly different model of reality in his head, but he had experiences and memories similar to mine, and he fit his experiences and memories into his model. In both our cases, “water” is defined, at least in part, relationally—in terms of where it fits in the reality model relative to lots of the model’s other elements. But my concept of water has certain associations within my reality model that Newton’s did not have, associations that further constrain the concept. There are fewer possible universes that contain stuff I would agree was water than there are for Newton (assuming that I buy into the idea that water is and must be only H2O).

I prefer my model of reality to Newton’s. I like the neatness, the power, the integrity, etc., of my scientific picture of the world. But in terms of what is going on when we refer, water has not “turned out to be” anything. Newton and I have different reality models, with different constraints upon how we categorize the stuff we find in the universe. Based on our different models, our concepts of water have different satisfaction criteria.

This is not oops-my-brains-just-fell-out relativism. I like science. I believe in science. Atoms are real. Newton was ignorant. But it is a strange form of scientific hubris to build Newton’s ignorance of our science into a theory of reference, or to reify the distinction between “prescientific” notions of water, Hesperus, or anything else on the one hand and the way things “turned out to be,” or the way they “really are,” on the other, and to imagine that this alleged distinction tells us anything interesting about meaning. Just because a cathedral is made of stones, it does not follow that my concept of a cathedral is made of my concept of stones, and just because water is made of H2O, it does not follow that my concept of water is made of my concept of H2O.

Symbol Resolution

The mathematical notion of symbol evaluation is partially to blame for the bias philosophers have for this idea of “turning out.” In algebra, you can have a variable, x, that everyone can see is a variable. It can be manipulated as a variable, but at some point you may resolve it, by substituting a number, like 43, for it. There is an unambiguous, explicit delineation between the variable before it was resolved, and the value it has afterward. There is a universally understood sense in which x is unresolved, and universal agreement about exactly which aspects of it obey certain mathematical rules anyway, and which aspects of it are left unspecified.

As we generate and parse natural language, things are almost never that neat. Symbol evaluation in natural language is not an either/or kind of thing, as it can be in mathematics. For most of the terms we use in daily life, there are various degrees of specificity of resolution, and we resolve terms or inhibit their resolution to the appropriate degree, and in the appropriate order, according to all kinds of rules of context as we string terms together in our thoughts or utterances. Modern semantic theory posits a very sharp distinction between a term’s intension and its extension. The trouble is, rigidity of designation, to use the philosophical term, is a sliding scale. Parsing and generating language is less like symbol resolution as traditionally conceived than it is like tuning a complicated musical instrument.

Early vs. Late Binding

In certain contexts in computer science, the term “binding” is used to describe symbol resolution: a variable expression is “bound” to a particular value, and thus ceases to be a variable. Furthermore, there is an idea of “early binding” and “late binding” of variable expressions. The idea is that you can have a variable, and you can resolve it right away (early binding), and then feed it into other calculations, or you can let it exist as a variable in those calculations, and then resolve it to a specific value at the end (late binding). Sometimes you can get very different results depending on when you do your variable bindings.
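
Python happens to make the difference concrete: its closures bind variables late by default, while a default argument forces early binding. The sketch below is my own illustration (the function names are invented for the example); it shows the same expression yielding different results depending only on when its variable is resolved:

```python
# Late vs. early binding, illustrated with Python closures.

def make_late_bound():
    # Each lambda closes over the variable i itself; i is not resolved
    # until the lambda is called, by which time the loop has finished
    # and i holds its final value (late binding).
    return [lambda: i for i in range(3)]

def make_early_bound():
    # A default argument is evaluated when the lambda is defined, so
    # each lambda freezes the value i had at that moment (early binding).
    return [lambda i=i: i for i in range(3)]

print([f() for f in make_late_bound()])   # [2, 2, 2]
print([f() for f in make_early_bound()])  # [0, 1, 2]
```

The expressions are identical; only the moment of binding differs, and so do the results.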

Some of the sense of this can be illustrated with the slightly awkward sentence, “By the year 2050, the president of the USA will be a woman.” The likely intent here corresponds to late binding of the term “the president of the USA.” We let that term float in the abstract as we evaluate the sentence, knowing that it will not be resolved until 2050. Or we could bind it early: as I write this, the president of the USA is Joe Biden, so the term “the president of the USA” resolves immediately to “Joe Biden,” and the sentence then states that, by the year 2050, Joe Biden will be a woman, a considerably less likely claim. Different terms seem to call for earlier or later binding, more or less specific resolution depending on context (which, of course, is made of other terms, which need to be resolved as well).

A great deal of the jargon associated with philosophy of semantics can be recast in terms of early vs. late binding. To me, this is often clearer. When Kripke speaks of fixing a reference as opposed to identifying a term with a cluster of properties, he is talking about early binding as opposed to late binding. When the Greeks coined the term “Hesperus,” they bound it early (if unknowingly) to the actual thing, Venus (at least, that’s what Kripke thinks). Kripke attributes to Frege and Russell the counterclaim that it is OK to bind terms late, and that the Greeks let the properties float free of any binding, so there could be a possible world in which Hesperus is something other than Venus. If the “superficial properties” are the x, and Venus is the 43, Kripke says that as soon as the Greeks said x, they immediately meant 43, even if they didn’t know it. Frege and Russell, on the other hand, say that it is fine to let x stand in its own right, and we could perfectly meaningfully find out later that x is 43, or 23, or 101.

Gareth Evans’s example about Julius also boils down to early vs. late binding. The idea here is that we allow the term “Julius” to refer to whoever invented the zipper (if anyone did) in whichever particular possible world we are considering. Semantic hijinks ensue from considering how, and to what extent, “Julius” refers to an actual person in any given world. Here we see that by hypothesis, “Julius” floats free of any binding (i.e. it is late-bound). “Julius” is defined by a descriptive criterion only, and is not bound to a particular individual until we touch down in a particular world, at which point the variable gets bound to the actual person who invented the zipper in that world. Once again, though, the example is somewhat contrived. It is set up to mimic mathematics rather than real life. “Julius” is a bistate term: either unbound or bound. In its unbound state, it is strangely specific about how to bind it, and there is a clear, unambiguous distinction between its bound and unbound states. It seems designed to be as close to an algebraic x as English prose can get.

Another example is one that William Lycan cites in his introductory book Philosophy of Language (2008): “I wish that her husband weren’t her husband.” In the first instance of the term “her husband,” it is early-bound, and picks out an actual guy, but in the latter instance of “her husband,” it is late-bound (or, rather, not bound at all within the sentence, but still waiting to be bound by the time the sentence ends). In its late-bound state, the term is allowed to persist as an abstract specification, as criteria for some future binding to an actual person.

There is something powerful about our ability to defer binding for a bit, and manipulate a placeholder according to rules of syntax. It helps us build structured thoughts of greater intricacy than we might if we were playing directly with more fully fleshed-out concepts. I imagine that one of the limitations of being an animal is that all binding occurs early—just about all stimuli get funneled immediately into Dennett’s four F’s: fight, flight, feed, or mate. You just can’t do much with that, cognitively speaking.

This distinction between early and late binding is really what motivated 2D semantics. In ordinary 1D semantics, with only a single infinite array of possible worlds to consider, you bind your terms early, according to what they mean in our actual world. This early binding corresponds to what is sometimes called a term’s secondary intension. So water’s secondary intension is H2O, for example. Then, once that meaning is fixed, you let your imagination range over all possible worlds, picking out the extension on those worlds (i.e. the H2O on each world).

This, at least, is how Kripke characterized it in his objection to 1D semantics that Chalmers paraphrased above. But in 2D semantics you allow for some late binding as you consider possible worlds. In the first part of the 2D semantics exercise, when you are considering each possible world as actual, you let some more abstract version of the term float over all possible worlds, and do your binding in each imaginary possible world, and then, with the meaning thus fixed, let your imagination range over all possible worlds. This is sometimes called the primary intension of a term. While water’s secondary intension is H2O in all possible worlds in the 1D semantics case (we bound it early, in our actual world), water’s primary intension is H2O in our world, but XYZ in Putnam’s Twin Earth (we bind the abstract specification—the watery stuff—to the actual extension late: after we’ve switched our attention to the hypothetical XYZ world, i.e. considered it as “actual”).
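
The two intensions can be pictured as reading off different parts of the grid. The following toy model is entirely my own illustration (the world labels and function names are invented, and it assumes just two worlds): the secondary intension is one horizontal row of the grid, bound once in our actual world, while the primary intension is the grid’s diagonal, rebound in each world considered as actual.

```python
# A toy 2D-semantics grid for "water" (all names here are illustrative).
worlds = ["H2O-world", "XYZ-world"]

def watery_stuff(world):
    # What the abstract "watery stuff" description picks out in a world.
    return "H2O" if world == "H2O-world" else "XYZ"

def extension(considered_as_actual, considered_as_counterfactual):
    # One cell of the grid: bind "water" in the world considered as
    # actual. Once bound, the term rigidly picks out that stuff, so the
    # counterfactual world no longer affects the answer.
    return watery_stuff(considered_as_actual)

# Secondary intension: bind early in our actual world, then hold the
# binding fixed across all counterfactual worlds (one row of the grid).
secondary = [extension("H2O-world", w) for w in worlds]

# Primary intension: rebind in each world considered as actual
# (the grid's diagonal).
primary = [extension(w, w) for w in worlds]

print(secondary)  # ['H2O', 'H2O']
print(primary)    # ['H2O', 'XYZ']
```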

It is assumed that there is no ambiguity in deciding what aspects of a given term should be allowed to float free across possible worlds to be bound by the contingencies of each one, and what aspects are constant across all worlds, both considered as actual and considered as counterfactual. That is, which aspects of water are to be considered part of the abstract characterization (e.g. its odorlessness), and which aspects are the actual essence that the “superficial” properties “turn out” to be (e.g. water’s microphysical constitution). It is also assumed that there is the abstract characterization (unbound) of a term, and the actual extension (bound), and none but those two completely discrete states. That is, you have the variable, the x (the watery stuff in the environment), and the value it resolves to (H2O or XYZ). There is some serious fetishization of mathematics going on here among philosophers that causes them to shoehorn reference into the binary symbol resolution model (mathism?). The collection of “superficial” properties of water (clear, odorless, liquid, etc.) is the x, and it was an unresolved variable for eons, as we humans ignorantly used the term “water” not knowing what it really was. Then our scientists figured it out, and now we know that water “turned out” to be H2O! We found the answer: x is 43!

But early and late are relative terms. Moreover, the whole notion of binding, no matter how early or late, is really the same thing as symbol resolution, and subject to the same problems. How narrowly do we construe or intend terms? How figuratively are we speaking or interpreting a term at a given moment? What aspects of a concept do we consider fair game to abstract away and what aspects do we hold constant as we do our figurative construing? In the Twin Earth thought experiment, it was taken as a given that water’s “superficial properties” were to be held constant, and its microphysical constitution could be abstracted away as we considered different scenarios. But in real life, the narrowness or broadness of construal of a term, and the aspects of a concept we choose to hold constant and the aspects we feel free to abstract away, and exactly when we bind our terms to specific extensions (“resolve” a more abstract characterization of a term to a more specific extension) can vary wildly, often along a continuum, and are highly context-dependent, even within a single sentence.
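The early-versus-late binding metaphor that these paragraphs lean on can be made concrete in programming terms. Here is a toy sketch of my own (the names watery_stuff, actual_world, and twin_earth are invented for illustration, not drawn from the semantics literature): the primary intension binds the abstract description anew in each world considered as actual, while the secondary intension binds once, in our world, and holds that value fixed.

```python
# Toy sketch of the binding metaphor (hypothetical names, my own illustration).
# A term's abstract characterization is modeled as a lookup; each "world"
# supplies whatever extension happens to satisfy that characterization.

def watery_stuff(world):
    """Late binding: resolve 'the watery stuff' in whichever world is given."""
    return world["clear, odorless liquid"]

actual_world = {"clear, odorless liquid": "H2O"}
twin_earth = {"clear, odorless liquid": "XYZ"}

# Primary intension: bind late, separately in each world considered as actual.
primary = [watery_stuff(w) for w in (actual_world, twin_earth)]

# Secondary intension: bind early, once, in the actual world, then hold
# that value fixed across all worlds considered as counterfactual.
early_bound = watery_stuff(actual_world)
secondary = [early_bound for _ in (actual_world, twin_earth)]

print("primary:", primary)      # differs between worlds
print("secondary:", secondary)  # H2O everywhere
```

The point of the sketch is only that "binding," early or late, is just symbol resolution, which is why the problems with the one infect the other.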

Haters Gonna Hate: Some Tautologies

To illustrate this point, I’d like to close out with a few tautologies. A tautology is an expression of the form x = x. Since x is always equal to x, regardless of what x actually is, tautologies (in theory) convey no information about x or anything else. A fancy way philosophers have of saying this is that tautologies have no “semantic content,” and thus (in theory) have no meaning. But as with so many aspects of language, theory and reality do not always line up. Let me indulge here in a bit of fiction.

Jimmy and Frankie grew up together in the same working-class neighborhood. In their pre-teens they stole hubcaps together, then later whole cars. Soon enough they hooked up with the mob and worked together. Some years go by, and their bosses become aware that Jimmy is skimming a little off the top each month. As a test of loyalty, they send Frankie after him. Frankie has no trouble cornering his old friend, and in the ensuing confrontation, Jimmy pleads, "Frankie, it's me, Jimmy. I've always been there for you, Frankie, more times than I can count. This can't be the end, Frankie. Not like this. I know I screwed up, I screwed up bad. And you know I'll make it up, Frankie, you know I will. Come on, Frankie, please!" Frankie says nothing for a moment, just looks at Jimmy with unblinking eyes. Then he quietly says, "Business is business, Jimmy."

Or how about this conversation:

“Every time I think about the Holocaust, it shocks me all over again. You’d think that after hearing and reading about it all these years, I’d be jaded, or numbed, but no. I still can’t get my head around the enormity of it, the reality of it.”

“Hey, what happened, happened.”

“What do you mean? It wasn’t just something that happened. Real people did it! A government staffed by human beings coolly presided over the deaths of millions!”

“People are people.”

“How can you say that? Killing six million Jews is not normal human behavior!”

“Well, you know, Jews are Jews after all.”

“You jerk! What kind of a Nazi are you, anyway?!”

Then there is always the trendy “It is what it is.” Along the same lines, there is the saying that by the time you are thirty, you must accept that no one is your mother, not even your mother; or the oft-ignored advice to a king, “If you want things to stay the same, sire, things are going to have to change”; or Alfred North Whitehead’s famous slogan of process philosophy, “Things aren’t things” (Whitehead never said this. I just made it up. Sorry.)

For poor Jimmy, the supposedly information-free tautology is literally a matter of life and death. The point here is that these are not particularly special cases. People talk like this all the time. They convey lots of information in ways that a logician would say are impossible. The uses of the terms in these tautologies are perfectly valid, and must be accounted for by any theory of meaning. In these tautologies, the same term is interpreted narrowly or broadly, bound earlier or later, considered abstractly or specifically in different ways and to different degrees depending on its use in different places within the same sentence. The meanings of the terms in question are determined on a case-by-case basis, on sliding scales. Dictionaries seduce us into thinking that there is a discrete set of meanings any term can take on. To be sure, there are some stakes in the ground, but between these stakes there is often a continuum of meaning, and people slide up and down that continuum so effortlessly that they almost do not notice it.

In modern usage, the word “quick” means fast. When Shakespeare referred to the quick and the dead, he meant “alive.” It may well be that in Elizabethan times, that was a common sense of the word “quick,” one that has fallen out of favor. But to our ears, it is a poetic turn of phrase, a case of Shakespeare speaking figuratively. This figurative sense of the word “quick” plays off of its more restricted sense, and makes sense to us. It is just a broadening of the term. How broadly or narrowly we use terms is in constant flux, and highly context-dependent. There is no distinct line we cross when we use a term to mean one thing but take liberties with its breadth, and when we use a different sense of the term.

The other day on the highway I saw a flatbed truck carrying an enormous underground water tank. Obviously, the tank was not underground, yet you probably never thought of the term "underground" as referring to a type before. You probably always thought that it must mean literally under the ground. We very often, perhaps almost all the time, do not speak literally. Am I speaking figuratively, metaphorically, then, when I mention an underground water tank that wasn't underground at all? Well, kind of, I guess, but no, not exactly.

Most people are perfectly comfortable using a term figuratively in one breath, and literally in the next, to varying degrees depending on all kinds of variables. Ambiguity lurks everywhere. Determined and ingenious people can tie themselves into knots, finding ambiguity just about anywhere, if they look hard enough. No one seems to have a problem with this except philosophers, a fact that does not speak well of philosophers.

How Should We Think about Reference?

In order to understand how we think, we need to understand how we see red. We also need to understand how we refer. These are both, in my opinion, aspects of a common problem. In order to study reference, we need to look inward, at our own minds, and ask ourselves how we do it. We must have better descriptions of our inner workings, in all their qualitatively cognitive glory. But the answers will have very little to do with whether Hesperus and Phosphorus are Venus or not.


Reference Internalized

We are a linguistic species. Our creation and use of language is one of the more amazing things about us. It obviously enables us to communicate with each other, but it also enables us to think. We can imagine even pretty smart animals as qualitative reaction machines, while our ability to think abstractly is plausibly tied closely to our mastery of language. We create symbols and rules by which we manipulate them, and we are off to the races. We can hardly imagine what our internal lives would be like without words.

Daniel Dennett (1991) cites the example of a character in a Nabokov novel (The Defense) to illustrate the effect language has on our minds. The character in question was a chess grandmaster, and at certain points in his life his mind was locked in a chess groove: “He sat leaning on his cane and thinking that with a Knight’s move of this lime tree standing on a sunlit slope one could take that telegraph pole over there…”. Dennett points out that most of us have experienced this sort of thing, in which some new skill or interest seems to saturate our entire field of experience. Dennett claims to have had the experience during a bridge binge in his youth. The rules of the game become the rules by which you filter and frame everything else.

Dennett’s point is that we are all like this all the time, in the thrall of the rules of the big master game, language. Language is the ur-game, the big rut our minds get into in toddlerhood and never get out of. Its rules become our rules, forever. If we want to understand thought, we had better understand language, and central to language is the idea of reference: of some symbol, term, or word, standing for, or linked in some way, to some actual thing. In addition to the existence of symbols themselves, the symbols have to have their own rules: hard rules of grammar and syntax, as well as soft rules of convention and idiom.

I have spent the last couple of chapters talking about reference, and making the case that if we want to speak clearly about it, we should think of it as some kind of phenomenon inside our heads, and not some kind of relation between our minds, thoughts, or utterances and the outside world. I think it is worthwhile to limit our discussion of reference to an intra-mind thing even though in colloquial usage we talk all the time about books referring to stars, signs referring to highway exits, and pointers mounted on bimetallic strips referring to temperature. As I have argued at length in this book, I think that in order to make sense of our minds, we have to pay special attention to some phenomena that most Western philosophers and scientists have not paid a lot of attention to in the last century or two. In order to make sense of reference, even at the expense of going against common usage, we should bracket off the outside world with its chains of physical causation and talk about what goes on between our ears.

The Reality Between Our Ears

To that end, leaving aside questions about distinguishing between self and percept, as well as questions about qualia, I’d like to step back now and say some folky things about how minds work. I hope this will not be terribly controversial (except at the very end) but will emphasize certain aspects of how the mind deals with the world from a strictly cognitive (“easy problem”) point of view. My goal here is not to say anything revolutionary, but to frame what we already know and (I hope) agree on in a certain way that will help us speak more clearly about it, and maybe help us speak more clearly about things like reference, meaning, and language as well.

It is safe to say that as the evolution of our species progressed, our control system (our brain) became more and more sophisticated, eventually developing the ability to construct what we might call, however loosely, an internal model of reality. From infancy onward, I have invested a huge amount of effort building up my own personal model of reality. As a baby, confronted with William James’s blooming, buzzing confusion of input from my senses, I began to notice regularities. I pattern-matched, latching onto these regularities, looking for them everywhere. I learned to have expectations based on the past, and I made educated guesses about the future. I hypothesized a reality out there, and some rules by which it operates, and together this whole collection of hypotheses allows me a tremendous amount of predictive power over my environment. This process is likely bootstrapped by a basic instinct we all have to look for patterns aggressively, to create such a reality model, the way a spider has a basic instinct to spin a web. It takes a lot of work to build and maintain this model, and I add to it and modify it every day.

I want to emphasize here just how broadly I am using the term "model," and to be extremely agnostic about how this model is implemented. Specifically, I do not want to create the impression that I think that it is some neat crystalline edifice made of linked data structures or something, all indexed and self-consistent. Like a lot of things in nature, I suspect that under the hood it is rather messy. It has gaps, and it may contain contradictions. However chaotic the model may be, it does, in some sense, work.

When I mention the Titanic (the actual ship, not the movie) in a conversation with you, perhaps my sense of what that is has all kinds of tendrils of connotation and association, and trivia that you do not know (and vice versa). Nevertheless, the coarse-grained relational dynamics of my model of the Titanic, vis-à-vis the rest of my reality model, correspond closely and specifically enough to their counterparts in your reality model that we can speak about the Titanic with no confusion between us. We depend on this correspondence so completely and so constantly as to not think about it.

All the thinking I do about the Titanic is done in my head. Colloquially, when I talk to you about the Titanic, we all agree that I’m talking about the ship rusting on the floor of the Atlantic Ocean. Everything I think I know, believe, or feel about the Titanic, however, is really in that reality model between my ears. This seems pedantic to point out (like the molecules-arranged-in-a-tablewise-manner vs. a table), but it is dangerously easy to forget. In particular, there are a couple of aspects of our reality model that bear emphasis.

It’s Often Wrong

The first is that our reality model is very often wrong. Or rather, some aspects of the dynamics of the reality model (the way pieces of it relate to other pieces) do not correspond to dynamics in the external world, and this might lead me to make predictions that would not come true. Every day we do our best, extrapolate, interpolate, generalize, but we always jump to conclusions, and make the best inference we can from available evidence and past experience.

Even at a pretty low level, our immediate perceptions are a best guess, based on input from the senses. This input is notoriously crappy and gappy (for more detail on just how crappy and gappy it is, see The User Illusion by Tor Nørretranders (1998)). Whatever it is we think we directly perceive, almost all of it is (re)created in our minds (Anil Seth (2021) evocatively calls this a controlled hallucination), and only a sliver of it is actually dictated by raw data from our senses. This works out for us more often than not, but very, very often our guesses and inferences turn out to be wrong.

It’s Mostly Holes

The second aspect of our reality model we should emphasize is that however much we know, there is an incomparably vaster ocean of things we don’t know. As we strive to know things, and incorporate certain types of knowledge into our reality model, we also strive to know what we don’t know. I know, for instance, that I do not know the color of the house two down from mine (without looking). I know that I do not know Richard Nixon’s birthday. I do know, however, the form such knowledge would take. Its shape is constrained, if not defined, by the outline of its absence. My lack of knowledge, and my knowledge of my lack of knowledge, depends on a ton of what we might call background knowledge, which serves to give it a shape, and hard edges. As such, even this kind of missing knowledge can play a role in my cognition, and contribute to the workings of my reality model.

Jerry Fodor made a similar point in The Elm and the Expert (1994). He said that he can’t tell a beech tree from an elm. He could easily learn, but he has never bothered to. He knows the distinction is there to be made, however, and he knows other people can tell at a glance, so he is satisfied. He can still talk about elms and beeches, without anyone legitimately accusing him of somehow failing to refer adequately.

In mathematics, there is a structure called a Menger Sponge. Take a cube, and divide it, like a Rubik's cube, into 27 smaller cubes, then remove the small cube at the center of each face, along with the one at the very center: seven in all. Now you have a sort of cube with holes, and you have cut your original cube's total volume down to 20/27, roughly three-quarters. Now do the same thing to each of the twenty remaining smaller cubes. Keep going, again and again, each time shaving off about a quarter of the volume of the remaining structure. As this process goes on indefinitely, you are left with something that is still clearly cubelike in its structure, but has vanishingly little actual volume. It is almost entirely void, with almost zero stuff, like cotton candy.
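If it helps, the vanishing volume can be checked with a few lines of arithmetic. This is a small sketch of my own, assuming the standard construction in which 20 of the 27 sub-cubes survive each step; the exact fraction does not matter, since any fixed fraction less than one drives the volume toward zero while the cube-like scaffold remains:

```python
# Volume left in a Menger-sponge-like construction after n iterations,
# where each iteration keeps a fixed fraction of the remaining volume.
# (My own illustration; the standard Menger Sponge keeps 20/27 per step.)

def remaining_volume(keep_fraction, iterations):
    """Fraction of the original volume left after `iterations` steps."""
    return keep_fraction ** iterations

for n in (1, 10, 100):
    print(n, remaining_volume(20 / 27, n))
# After 100 iterations, well under a trillionth of the volume remains.
```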

In a sense, our model of reality is something like the Menger Sponge, a fractal Swiss cheese. The gaps in our knowledge are vastly greater than the knowledge itself, and those gaps are both big and small, but the gaps can still lend structure to the model. That said, I suspect that in our cognitive architecture the boundary between void and stuff is not as sharp. There are inferences we feel very confident about, and others that we know are as provisional as can be, and a large range in between.

Saul Kripke and others have used the example of the Roman orator who went by the names Cicero and Tully. The fact that there are two names associated with this one man, especially one about whom most people don’t know very much, can lead to confusion. To what extent can we refer when we talk about Cicero/Tully? Can we make sense of “What if Cicero and Tully had been different men?” when we only have the vaguest sense that, in our actual world, he was some famous Roman guy?

I think we can. I do not need to know the details about Cicero to know that he existed. He was a man, and so he had friends and enemies, favorite foods and a favorite color, things he was proud and ashamed of, etc. Perhaps no one on earth today knows about all these things, but we all implicitly accept that they once were there to be known. In certain contexts, those things were the defining characteristics of Cicero, perhaps even more than his famous oratory. This stuff is all included in our blank outline of “a person,” including “a person who has been dead for a couple of thousand years.” Like the Menger Sponge, the person in our head is mostly gaps. If we hear of Tully, and do not know that he is the same man as Cicero, we have a similar blank outline for him. We do not have to personally have access to a bundle of properties or descriptions of him, or even a single defining characteristic, to have this boilerplate template in our heads, or to talk about him.

So if we have two different blank outlines, or almost blank outlines, or perhaps patchy, partially filled-in outlines, of Cicero and Tully, then someone informs us that they are the same person, what do we do? We just have to deploy our well-practiced skill of correcting our reality model, and merge these two ghosts into one. Sometimes this reconciliation entails throwing out some of the inferences we had made to flesh out one or the other of the outlines.

Epistemology deals a lot in variations on this theme: we mistakenly think we have two things, we create empty (or nearly empty) placeholders for each of them in our reality model, tentatively fill them in as best we can, and then discover we should merge the two. What are we picking out, really, when we refer to one or the other of the placeholder outlines, and how does that all fit together with whatever we are referring to once it turns out that the two are really one? I have already talked a lot about both picking out and turning out, so for now I'd just like to point out that this two-turns-out-to-be-one phenomenon is just a special case of the more general problem of wrongness in the model.

We Take Our Model with a Grain of Salt

And this brings me to my third point of emphasis about our reality model, after the first two: that it is often flat-out wrong, and that it is defined more by its holes than its substance. The third aspect is that, as proprietors of our reality models since infancy, we are completely used to dealing with the first two. On a daily basis, we cope with vagueness, incompleteness, misremembered "facts," and outright lies. We know, in our bones, that our model is provisional, and we have an ingrained sense of operational humility about this. Tinkering with it and correcting it continually is just part of the cost of doing business, improving the model where we can, when it seems worth the effort.

When we communicate with language, we know that we are doing so on the basis of these fallible reality models. I know that mine is gappy and wrong, and I know that yours is gappy and wrong, and that our gaps and wrongnesses don’t match up. But I do more or less assume that the broad strokes will line up, and allow us to communicate successfully. Most of our communication has these caveats built in as background assumptions, and these caveats, and this agnosticism about the fine details, lend our language a certain vagueness. There is no problem of vagueness in reference, because vagueness is our stock in trade.

Internalism Isn’t Right, Exactly, Except It Kind of Is

This all seems to be nudging us in an internalist direction when it comes to reference (whenever we talk or think, we are doing so about our own internal data structures, when it comes right down to it). As I've said before, it isn't so much that internalism is right and externalism is wrong as it is a question of: why would you want to talk that way? Is there some advantage in terms of truth, economy, or insight in characterizing meaning internalistically or externalistically? The externalist about reference and meaning says that my thoughts about the Titanic are, in some fundamental sense, really directly about the Titanic itself. Here is a thought about the Titanic, here in my head, and there's the ship out there in the world, and the thought refers to the ship. Reference is obviously some kind of connection between the thought and the outside world.

That is certainly how we speak in everyday conversation. We are all externalists in practice. Nevertheless, the internalist says, if you want to get picky about it, my thoughts are really about each other, but some of them have the actual Titanic on the bottom of the ocean as a causal antecedent, with the chain of causation involving my sense organs. As with the person who talks about molecules arranged in a tablewise manner, the internalist may be, strictly speaking, correct, but what a pedantic and cumbersome way of talking about the situation! What is to be gained by talking this way?

As philosophers, we get to define our terms any way we want. This is one of those times when we can either respect our pretheoretical intuitions or carve nature at the joints, as the slightly grisly cliche says. The internalist claims that our colloquial ways of talking and our everyday intuitions are just shorthand for what’s really going on, which is a little more indirect and complicated than our intuitions and usage give it credit for. The internalist thinks that we are missing something important by being naive realists about meaning and reference, that there are (or at least might be) important distinctions that we would do well to remember as we explore. The basis of the claim that we should frame things in this clunky way is that there is something unique and/or mysterious about the way our thoughts interact that gives rise to, or constitutes, this phenomenon of aboutness. Whatever this is, it is analogous to, but importantly not the same as, stuff we already understand pretty well, like computation, information processing, and the causal dynamics of physics.

In order to zero in on the stuff we need to figure out, we should not muddle it together with this analogous but different stuff. If reference is of interest to a philosopher, it has to do with the way some thoughts relate to other thoughts. “Reference” and “intentionality” and “meaning” entail some unique and interesting mental happenings, above and beyond the redness of red. Like seeing red, these mental phenomena are actual, fundamental facts of the universe, and are worth exploring. This is an important part of the puzzle of the mind, the part that will allow us to put what it’s like to see red together with what it means to think in the same big picture.

In contrast, the externalist is implicitly making the positive claim that whatever goes on in our heads is in no important way different than whatever goes on between the external world and our senses, and we can (and should!) lump it all together and call the whole mess “meaning” or “reference.” This claim is at best premature, and a stretch, and, I believe, simply wrong (because of qualia and all the other stuff I’ve been saying). Even if you don’t follow me all the way with that line of argument, we can be more precise, if a bit at odds with colloquial usage, if we construe meaning and reference in terms of the reality models between our ears, and not in terms of invisible magic meaning rays zapping throughout the universe.

If we regard reference as a “high-level” concept, or a [merely] psychological one, or a rough and ready engineering term, then fine, be an externalist. Stick to everyday usage, and you will be able to communicate with everyone else with little or no confusion. The compass needle refers to the (magnetic) north pole. In this case, though, it’s a little hard to see why philosophers get so worked up about it. I am going out on a limb here, and making a positive claim myself, albeit one that I can’t flesh out here. My claim, or speculation, is that there darn well is something unique and fundamental going on in our minds when we refer. Whatever this is, it is just as inexplicable in terms of third-person science (as currently conceived) as the redness of red. In order to lick the problem of reference and meaning, we are going to have to get awfully comfortable with qualia, incorporate it into our understanding of minds, and then see how that extends to those aspects of our mental lives that we had previously sequestered as cognitive. I am sorry to go no further now, but whatever qualia is, however deep it goes, it is undeniable, and a big piece of the puzzle. To ignore qualia as we try to nail down the basis for our strong intuitions about reference is almost certainly self-defeating.



Conclusion

So what bullets have we bitten, exactly? There are a few key take-aways that I would like to lay out here in blunt specificity. These are points that either don't generally get the emphasis they deserve, or that most people don't think about at all, or think about differently than I do. I think they all have a part to play in the final picture.

Basic Metaphysics

Taking Qualia Seriously

As a qualophile in the mold of David Chalmers, I take qualia seriously in exactly the way Daniel Dennett says we should not. I am not, however, a dualist. There is only one kind of stuff in the universe, but physics, as currently practiced, is incapable—even in principle—of describing that stuff completely. There are reasons for thinking this that have nothing to do with consciousness or qualia. All of this makes me a panpsychist (or, if you prefer, a neutral monist), something like Bertrand Russell. There is something qualitative that stands as part of the fundamental furniture of the universe, along with mass, charge, and spin. This qualitative essence is what instantiates or manifests the extrinsic, functional behaviors that our laws of physics describe so well.

I do not think that positing a causally efficacious conscious basis of physical reality means we have to violate known physical laws. Quantum mechanics already tells us that at the lowest levels, we can’t know how things behave. We can only characterize their behavior in aggregate over time. Equivalently, for a single experiment we can only give a probability. We already live in a non-deterministic universe. This, I think, gives us wiggle room to allow the basic stuff of the universe to do what it wants, within constraints. Stuart Hameroff has speculated that some kind of quantum superposition is maintained inside the tubulin microtubules in the neurons in our brains. This may or may not be true, but I am committed to the speculation that at some point, brain scientists will find some crucial mechanism that depends on some kind of “indeterminate” quantum effect (“wonder tissue,” in Dennett’s derisive terminology, or “pixie dust in the synapses” according to Patricia Churchland).

This, then, is the main bullet I want to bite, the one that even most of my fellow panpsychists are afraid to openly gnaw on. I want to make it clear that if you believe in causally efficacious qualia (as I do), then you must face up either to violating the apparent causal closure of the physical universe, or to claiming that qualia can push physical stuff around without violating known laws. That said, we might not have to bite a big bullet. The brain could be a pretty chaotic system, in which a tiny nudge at the right place and time could have large-scale effects that play out according to classical rules.


If you are at all sympathetic to the qualia arguments, and you think that there is something deeply mysterious about the redness of red, it should be just as disturbing that we perceive anything as a unit. We see even a stick lying on the ground all at once, end to end, in its entirety, and that’s weird.

Moreover, placing consciousness down at the quark level does not help explain human-scale consciousness if the only way to scale up is with extrinsic causal dynamics. The billiard balls may be conscious, but if the only way they interact or scale is through the same bonking they would do if they weren't conscious, the bonking alone can't explain the redness of red. No, consciousness is at the bottom layers of reality, and it must scale up in its own right, so as to exist and do macroscopic stuff that the causal bonking would not do if left to its own devices. This does not mean that the Standard Model or Core Theory of physics is wrong, just that it leaves some details out. Brains are special systems that have evolved to exploit these details.

In the same way that I think that we should take qualia seriously, and believe that the fact of qualia has metaphysical implications, I believe that the unity of a percept is deeply strange to our usual way of thinking about how the universe is put together. My consciousness, as I experience it, must be a Fundamental Thing, and not just made of smaller Fundamental Things. Once again, quantum mechanics probably comes into play, since it allows for what William Seager calls large simples: things with potentially complicated behavior that are inherent wholes, and not merely aggregates of smaller things.

Each of our unitary, qualitative thoughts and percepts must be manifested physically as something objectively unitary itself, and that thing has causal latitude. Specifically, the behavior of large simples does not supervene on that of their parts, since they don’t have any. They may not get to violate existing physical laws, but they may have more elbow room to act than something that was a mere aggregate of smaller things. This solves panpsychism’s oft-cited combination problem by fiat, as it were. Each large simple is ontologically unique and primitive, which leaves us with a pretty extravagant picture of the universe, but c’est la vie. We prefer parsimony in our laws of nature, but nature does not owe us anything in this regard. The promise of this kind of large-scale unity is, perhaps, more important to me than the more often cited indeterminacy of quantum mechanics.


The holism of our thoughts and percepts is deeply mysterious in general, but it is especially weird that we perceive/conceive of things as unitary even when they are smeared out over time (the notion of motion). As William James said, the thought of succession is just a completely different kind of thing than a succession of thoughts. As with the redness of red, “Oh, but that’s just an illusion” is a weak, if not incoherent, response. We just couldn’t perceive time, as time, in no time. Very little can actually happen in 0.000… seconds.

Like qualia and the unity of our percepts, our direct perception of time also tells us something important. There is some funky way in which consciousness is smeared out over time to various extents for various subjects, like William James’s saddleback “specious present.” This does not mean that there is any such thing as backward causation (what could that term even mean, really?), or that somehow consciousness can see the future or reach into the past. Nevertheless, there is some way in which consciousness (or moments of consciousness) can span time, and I wonder what the limits of this span are. The notion that there is a durationless point called “the present” is an abstraction foisted upon us by calculus, among other disciplines. In real life, time does not come in points or infinitesimal slices.

Structure and Relation, Phenomenologized

People who pooh-pooh the mystery of qualia tend to sneak a lot of magic into words like “belief,” “reference,” and “meaning,” but these notions are all just as inexplicable as the redness of red. If such terms (about which the Anglophone philosophical community has obsessed for generations now) are worthy of philosophical study at all, we must take qualitative consciousness as a given, and explain things like reference on the basis of consciousness rather than the other way around.

Sensory qualia (the redness of red, the taste of salt) are just the tip of the iceberg. Everything we are aware of in our minds is qualitative, and just as mysterious. All the “cognition,” even the driest, most factual knowledge, is made of the same stuff in our minds as the redness of red. Our minds are not cognitive machines painted with a qualitative layer, nor are they cognitive machines bolted onto a qualitative base. They are qualitative through and through. We should not be led astray by the fact that we have invented machines that are “purely cognitive” that seem to emulate some of the functions of minds. It is a mystery that salt seems salty to us, but it is no less of a mystery that anything at all seems like anything to us. Reductive materialists fail to appreciate just how little comprehension you can build with those billiard balls, even when you have a lot of them. 2 + 2 = 4 is a quale. Structure and relation, in our minds, are themselves qualia as much as the redness of red. The easy problems are hard too.

Once the reductionist has broken the world down, he has a hard time putting it back together again, as the saying goes. In a universe made of almost unimaginably blind, stupid, amnesiac tiny billiard balls bonking this way and that, in which there are no efficacious levels but the very bottom-most one, things like “structure” and “relation” are only ideas in our heads. Similarly with notions like “algorithm,” or “if…then…”. Unlike a Universal Turing Machine, we can step outside an algorithm and see it from above, as it were. This follows naturally from the preceding points—that is, the holistic unity of our qualitative percepts—including unity over time. We see algorithms, processes, and sequences all-at-once, as a thing. We don’t have to execute the code to think about it and comprehend it. This intrigues me. Minds are strangely good at turning processes into things.

My Favorite Model: Pandemonium

I keep coming back to some variation of this basic idea. Our minds are hives, or Darwinian memescapes, populated by what Daniel Dennett calls demons. As William James said, the thoughts are the thinkers. There is no sharp line you can draw between CPU and memory. We don’t apply thoughts, they apply themselves. These demons are not just memories, although they can be that too. They do things, they are active, and whatever they do, whether they compete or cooperate, a lot of them are active at the same time. As Dennett says, what we take as our linear, computer-like mind is really something of a simulation, implemented on a massively parallel substrate.

Individual demons are punished for overactivation, most likely by simply getting tuned out by the other demons. There is a risk/reward trade-off as they decide if, when, how assertively, and how specifically they self-deploy. Unlike the Pandemonium model as Dennett describes it, however, I suspect that the demons are qualitative. There is a what-it-is-like for all of them, but whatever it is that we think of as ourselves is not necessarily patched into each of them. We each contain multitudes. The unified, continuous self, as we normally think of ourselves as being, is a useful fiction, a sort of virtual avatar, a me‑model at the center of my world‑model. Each demon may be considered a subject in terms of its being smeared out over a specious present, a moment of time.

As the demons do their work, they engage in lots of feedback loops, a lot of iteration, on the way to forming anything we might describe as a stable thought or percept. As percepts are built up, at the same time they are being broken down and then built up again with the pieces. Thoughts form in our heads, with this riot of demons trying this and then that, before settling on some kind of stable percept or concept. I suspect that quantum superposition is involved somehow, allowing the mind to explore a combinatorially explosive web of potential paths.
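To make the settling dynamics described above a little more concrete, here is a minimal toy sketch in Python. Everything in it is illustrative and hypothetical: the demons, their feature patterns, the inhibition and damping constants are my own inventions for the sketch, not anything from Dennett's (or Selfridge's original) Pandemonium model, and of course nothing qualitative is captured here. Demons shout in proportion to how well the input matches their pattern, rivals suppress one another, and activations are iterated until a stable winner emerges.

```python
class Demon:
    """A toy demon: a feature pattern plus an activation level."""

    def __init__(self, name, pattern):
        self.name = name
        self.pattern = set(pattern)
        self.activation = 0.0

    def shout(self, features):
        # A demon's raw shout is how well the input matches its pattern.
        return len(self.pattern & features) / len(self.pattern)


def settle(demons, features, rounds=20, damping=0.5):
    """Iterate with mutual inhibition until activations stabilize."""
    for _ in range(rounds):
        raw = {d.name: d.shout(features) for d in demons}
        total = sum(d.activation for d in demons) or 1.0
        for d in demons:
            # Feedback loop: bottom-up evidence minus suppression by rivals,
            # so chronically overactive demons get tuned out by the others.
            inhibition = (total - d.activation) / max(len(demons) - 1, 1)
            target = max(raw[d.name] - 0.3 * inhibition, 0.0)
            d.activation += damping * (target - d.activation)
    return max(demons, key=lambda d: d.activation)


demons = [
    Demon("cat", {"fur", "whiskers", "meow"}),
    Demon("dog", {"fur", "bark", "tail"}),
    Demon("bird", {"feathers", "beak", "song"}),
]
winner = settle(demons, {"fur", "whiskers", "meow"})
```

Here the “percept” is just the single loudest demon; a less cartoonish version would let coalitions of demons stabilize together, spawn new demons, and feed their output back in as input.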

This memescape/ecosystem Darwinian analogy has limits and leaves a bunch of questions unanswered. I don’t know how demons cooperate or coalesce. Do they form some kind of union, then stick together from then on, or do they separate, but maintain some tendril of connection? Or do they reproduce, giving rise to a whole new demon, who then may maintain connections with its parents? Do demons really persist over time, or do they constantly regenerate themselves? Do they at least partially define themselves as deltas from other demons, or coalitions of demons? In general, we need to nail down the individuation criteria for demons. We also need to explore more about the qualitative nature of the demons, how the qualia (not just the redness of red, but the perception of process, the perception of parts and wholes simultaneously, and all the rest of it) play into the more purely cognitive Pandemonium model. Somehow the demons, and what they do, are their phenomenology.

An accurate model of our cognitive architecture, one that properly accounts for the qualitative aspects of our mental lives, will end up clarifying or dissolving long-standing questions about reference, representation, and meaning. I suspect that in our minds, when A represents B, both A and B are demons, and they may engage in a sort of Lennon/McCartney cooperative/competitive interaction to resolve the reference to an appropriate extent, along appropriate lines, on terms that are acceptable to both A and B, in order to produce a thought or percept (which may itself be an entirely new demon, incorporating A and B, or spawned by them).

My Teeth Hurt

Can we stop biting bullets now? Maybe, but I am the first to admit that there are a lot of details to be worked out at the very least. How should we move forward, beyond the undeniable and encouraging progress of the natural sciences, and neuroscience in particular? As philosophers, we need to think more broadly in qualitative terms, to look for the ways in which the quantitative is qualitative.

This is where epistemology and ontology meet head-on. What is the actual stuff out there in the universe that constitutes what we know and how we know it? The facts we know, the beliefs we hold, and the cognition we instantiate when we think are qualitative, and as such we have no idea what they are made of and how that stuff works. We have to figure out how the qualitative aspect plays into the information-processing aspect of our cognition, and we have to figure out how we can get qualia to stop being amorphous blobs of seeing red and feeling pain, and start to stack like Lego blocks.

And we will have to entertain some wacky metaphysics. Some form of panpsychism must be true. I suspect that the nature of time is involved somehow. Hard science is a bit agnostic about what time is and how it works, and I believe we have first-person evidence that time and phenomenal consciousness play together pretty closely. As Horgan and Tienson (2002) put it, experience is not of instants; experience is temporally thick. There is also a ton of thought left to be done about will and perception and the relation between the two; the phenomenon of attention will be key here. And let’s not forget memory, a much more central mystery than it is generally given credit for.

Exciting times.


Albert, D. (1992). Quantum Mechanics And Experience, Harvard.
Becker, A. (2018). What Is Real? The Unfinished Quest For The Meaning Of Quantum Physics, Basic Books.
Block, N. (1995). “The Mind as the Software of the Brain”, in D. Osherson, L. Gleitman, S. Kosslyn, E. Smith, and S. Sternberg (eds.), An Invitation to Cognitive Science, MIT Press.
Block, N. (2002). “Concepts of Consciousness”, in D. Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, pp. 206–218, Oxford University Press.
Bollands, A. (2020). Life, the Universe and Consciousness, Bollands Publishing.
Carroll, S. (2019). Something Deeply Hidden: Quantum Worlds and the Emergence of Spacetime, Dutton.
Chalmers, D. (1996). The Conscious Mind, Oxford University Press.
Chalmers, D. (2006). “Two-Dimensional Semantics”, in E. Lepore and B. Smith (eds.), The Oxford Handbook of Philosophy of Language, Oxford University Press.
Cisek, P. (1999). “Beyond the Computer Metaphor: Behavior as Interaction”, Journal of Consciousness Studies, 6 (11–12): pp. 125–142.
Dainton, B. (2000). Stream of Consciousness, Routledge.
Dennett, D. (1991). Consciousness Explained, Little, Brown.
Edwards, J. (2006). How Many People are There in My Head? And in Hers?: An Exploration of Single Cell Consciousness, Imprint Academic.
Fodor, J. (1994). The Elm and the Expert, MIT Press.
Frankish, K. (ed.) (2017). Illusionism as a Theory of Consciousness, Imprint Academic.
Frankish, K. (2022). “What Is Illusionism?”, author’s preprint of an article forthcoming in a special issue of Klēsis Revue Philosophique, available at his web site.
Gleick, J. (1987). Chaos: Making A New Science, Viking Penguin.
Goff, P. (2017). Consciousness and Fundamental Reality, Oxford University Press.
Goff, P. (2019). Galileo’s Error: Foundations for a New Science of Consciousness, Vintage Books.
Hoel, E. (2023). The World Behind the World: Consciousness, Free Will, and the Limits of Science, Avid Reader Press.
Horgan, T., and Tienson, J. (2002). “The Intentionality of Phenomenology and the Phenomenology of Intentionality”, in D. Chalmers (ed.), Philosophy of Mind, Classical and Contemporary Readings, pp. 520–533, Oxford University Press.
Jackson, F. (1986). “What Mary Didn’t Know”, Journal of Philosophy, 83: pp. 291–295.
James, W. (1952). The Principles of Psychology, Encyclopædia Britannica, Inc.
Kelly, S. (2005). “The Puzzle of Temporal Experience”, in A. Brook and K. Akins (eds.), Cognition and the Brain: The Philosophy and Neuroscience Movement, Cambridge University Press.
Kripke, S. (1972). Naming and Necessity, Harvard University Press.
Lewis, C. S. (1955). Surprised By Joy, Harcourt.
Lycan, W. (2008). Philosophy of Language, Routledge.
Maudlin, T. (2011). Quantum Non-Locality and Relativity: Metaphysical Intimations of Modern Physics, Wiley-Blackwell.
Metzinger, T. (2003). Being No One: The Self-Model Theory of Subjectivity, MIT Press.
Minsky, M. (1985). The Society of Mind, Simon and Schuster.
Nagel, T. (1974). “What Is It Like to Be a Bat?”, Philosophical Review, 83: pp. 435–450.
Nørretranders, T. (1998). The User Illusion: Cutting Consciousness Down to Size, Viking Penguin.
O’Hara, K., and Scutt, T. (1996). “There Is No Hard Problem of Consciousness”, Journal of Consciousness Studies, 3 (4): pp. 290–302.
Penrose, R. (1989). The Emperor’s New Mind, Oxford University Press.
Price, H. (1997). Time’s Arrow and Archimedes’ Point: New Directions for the Physics of Time, Oxford University Press.
Putnam, H. (1975). “The Meaning of ‘Meaning’”, in K. Gunderson (ed.), Language, Mind, and Knowledge, University of Minnesota Press.
Ramachandran, V. S., and Rogers-Ramachandran, D. (2009). “Two Eyes, Two Views”, Scientific American Mind (Sept–Oct 2009): pp. 22–24.
Roelofs, L. (2019). Combining Minds: How to Think about Composite Subjectivity, Oxford University Press.
Rosenberg, G. (1998). On the Intrinsic Nature of the Physical (presented at Tucson III: Toward a Science of Consciousness, Tucson, Arizona, April 29, 1998).
Rosenberg, G. (2004). A Place for Consciousness: Probing the Deep Structure of the Natural World, Oxford University Press.
Russell B. (1954). The Analysis of Matter, Dover Publications.
Seager, W. (1995). “Consciousness, Information and Panpsychism”, Journal of Consciousness Studies, 2 (3): pp. 272–288.
Seth, A. (2021). Being You: A New Science of Consciousness, Faber & Faber.
Seth, A. (2022). “The Real Problem(s) with Panpsychism”, in P. Goff and A. Moran (eds.), Is Consciousness Everywhere? Essays on Panpsychism, pp. 52–64, Imprint Academic.
Shannon, C. (1948). “A Mathematical Theory of Communication”, The Bell System Technical Journal, 27: pp. 379–423, 623–656.
Siewert, C. (2011). “Phenomenal Thought”, in T. Bayne and M. Montague (eds.), Cognitive Phenomenology, pp. 236–267, Oxford University Press.
Silberstein, M. (2001). “Converging On Emergence”, Journal of Consciousness Studies, 8 (9–10): pp. 61–98.
Strawson, G. (1997). “The Self”, Journal of Consciousness Studies, 4 (4–5): pp. 405–428.
Strawson, G. (2009). Selves, Oxford University Press.
Thompson, D. (1990). “The Phenomenology of Internal Time-Consciousness”.
Williams, D. C. (1951). “The Myth of Passage” Journal of Philosophy, 48: pp. 457–472. Reprinted in R. Gale (ed.) (1968) The Philosophy of Time, pp. 98–116, Prometheus.

About the Author

I was born in 1965, grew up in New England, and now live in a suburb north of Boston, Massachusetts, in the USA. I worked for decades as a computer programmer, and once wrote a surprisingly accessible and entertaining book about Boolean algebra, the formalization of propositional logic used inside computer chips (Ones and Zeros: Understanding Boolean Algebra, Digital Circuits, and the Logic of Sets (1998)).

I have a bachelor’s degree in Computer Science, and no further degree. In college, I became interested in artificial intelligence. Like many student programmers, with the hubris often found in undergraduates, I thought that I should be able to program a computer to think. I no longer think that I will ever write such a program. Now I’d settle for a good essay about how the mind works or, barring that, an essay about exactly why we will never know. Some time ago I decided that the problem of consciousness is a deeper and more interesting problem than that of functionally realizing AI. Further, I now suspect that we will have to understand consciousness before we have a realistic shot at AI in the first place.

I have a web site that I hope to maintain forever, although it is not exclusively about my philosophical work (it’s more of a general personal site). Nevertheless, it is probably a good place to go for updates or further work along these lines.