Biting The Bullet Of Consciousness:
Easy Problems Made Hard

by John Gregg

Copyright © John Gregg, all rights reserved
















For Audrey and Gina















Structural Outline

Quick Hits: the short version in bite-sized morsels


Table Of Contents

The Hard Problem of Consciousness

Goals, Non-Goals, And Ground Rules

Physicalism: Are We Really Living In A Material World?

Epiphenomenalism: So Even If Consciousness Is Real, What Could It Possibly Do?

Ned Block's Turing Test Beater

Functionalism: Can't We Just Say That Consciousness Depends On The Higher-Level Organization Of The System?

Reductionism and Emergence: What Kinds Of Things Are There, Really?

The All-At-Onceness of Conscious Experience

Time Consciousness and The Specious Present

Free Will

Panpsychism's Combination Problem

The Self

Daniel Dennett

Beyond the Cartesian Theater: More Better Models and Metaphors

Cognitive Qualia

Knowledge

Doesn't It All Just Come Down To Information?

The Reality Between Our Ears

Reference: Picking Out

Reference: Turning Out

Future Directions

What Bullet Have We Bitten, Exactly?


References

About The Author



John Gregg
email: john <at> jrg3.net

The Hard Problem Of Consciousness

In his book The Conscious Mind (1996), David Chalmers popularized the distinction between the "easy problems" of cognition (the ability to reason, remember, evaluate, report on internal states, etc.), which might be understood in the next century or two, and the "hard problem" of subjective consciousness. The hard problem is hard because it just does not seem amenable to the sort of analysis that modern science knows how to do. The hard problem refers to the fact that you will never be able to tell me a story about information processing, computation, biochemistry, or about anything based on physics as currently construed, which will come close to explaining why red looks red to me, or why middle C sounds like middle C. These basic ineffable sensations are called qualia (singular quale) in the literature of philosophy of mind. Subjective consciousness itself is sometimes characterized as what it is like, at the most basic level, to be you or to have some sensation or another.

Descartes thought that there were two kinds of stuff in the universe: physical stuff and mind stuff. For this reason, he has forever been called a dualist. In modern times, people who wonder seriously about qualia are also called dualists, even though many of them explicitly reject the idea of there being two fundamental kinds of stuff. This misleading labeling is unfortunate. Philosophy is confusing enough without calling things by incorrect names. Moreover, in recent centuries pureblooded dualists have been spotted in the wild very rarely, and the term is usually used somewhat pejoratively: people accuse other people of harboring dualist sympathies more than anyone embraces the term for themselves. For these reasons, I will use the term qualophile to describe Chalmers and his ilk.

The Objectivity Of The Subjective

We are taught that the entire universe and everything in it is made up of atoms and molecules and photons and things like that, all interacting according to the laws of physics. The claim of the Hard Problem is that a) the redness of red as it appears to me is an absolute, objective fact of the universe, and b) that no account of atoms and molecules interacting, no matter the complexity of their interactions, will predict or explain the redness of red as it appears to me. A robot might claim to see red, and it might do so in very convincing terms. It might represent red in some sophisticated way to an internal self-model in a way that mimicked some neural or informational events in our own brains as we see red. Nevertheless, we have no principled reason to believe that it really is experiencing red the way we do.

People have come up with clever thought experiments to help sceptics arrive at the conclusion that the Hard Problem exists and that we should take it seriously. One of the most famous is the one invented by Frank Jackson (1986) in his essay, "What Mary Didn't Know".

Mary In Her Black And White Room

Imagine Mary, a supergenius particle physicist/neuroscientist, in a future world in which our understanding of physics and neurobiology is complete and perfect. She understands and has mapped out every single neural pathway, electro-chemical reaction and quantum wiggle in her own brain. Mary, however, has been raised in an entirely black and white environment. She has never seen anything red, for instance. She knows exactly what the physics of photons of red light is, and she can predict exactly how she would react behaviorally if she did see something red, but she has never actually experienced it directly. If you have ever debugged a C program, say, with a debugger that lets you single-step through your code line by line, you may get a sense of the way in which Mary understands her own predicted reaction to seeing a red apple. She can "walk through the code" perfectly, but she has never experienced red.
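The debugger analogy can be made a little more concrete. Below is a toy sketch (the function names and wavelength cutoffs are purely illustrative, not any real model of perception) of the kind of wholly functional "code" Mary could single-step through. Every branch and every variable is perfectly predictable in advance, yet nothing in the trace is the experience of red.

```python
# A toy, purely functional "seeing red" pipeline. Mary could step
# through every line and predict each value exactly, but the trace
# contains only state changes - no experience of red anywhere.

def classify_wavelength(nm):
    """Map a wavelength in nanometers to a color label."""
    if 620 <= nm <= 750:
        return "red"
    if 495 <= nm < 570:
        return "green"
    return "other"

def behavioral_response(nm):
    """Predict the verbal report the system emits for a stimulus."""
    label = classify_wavelength(nm)
    if label == "red":
        return "I see a red apple."
    return "I see something " + label + "."

# Mary can "walk through the code" for a 700 nm stimulus and derive
# the exact report she would make, without ever having seen red.
print(behavioral_response(700))
```

The point of the sketch is exactly what it leaves out: a complete functional walk-through, down to the last branch taken, says nothing about what the stimulus is like.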

Now imagine that Mary gets let out of her black and white room, and sees a red apple. For all her abstract knowledge, perfect and complete as it was, something entirely new happens in her head when she sees that apple. A lot of people argue about whether this new experience constitutes new knowledge or a new ability, but this is just an argument about words - how you define "knowledge" and "ability" - and is much less interesting than whatever is happening to Mary. (For a certain strain of analytic philosophy in the 20th century, all roads led to epistemology - all questions were phrased in terms of knowledge.)

The point here is that if you think of the brain as a big information processor, even being as generous as your wildest dreams will let you in terms of its sheer processing capacity, future physics, etc. you still leave something out. The information processor does not see red. It counts pixel values on its retinal grid, it accesses memory locations, it does data smoothing and runs comparisons, but it does not have subjective experience. Perhaps when thought of in a certain way, from the point of view of a certain level of abstraction (projected onto the system by the observer), the information processor may be seen as seeing red, but there is no reason to believe - none in the world - that it really is seeing red, objectively, the way I (and presumably you) do.

Nagel's Bat

Another illustrative example comes from Thomas Nagel's (1974) essay, "What Is It Like To Be A Bat?". Bats employ a sonar-like echolocation trick to find bugs in the air. The claim is that there is nothing you could possibly ever know about how a bat's brain, ears, and vocal system work that would let you know what it is like to sense a moth 20 feet away: kind of like hearing, but not really; kind of like touching with a long arm, but not really.

Similarly, I have read that bees see colors that we can not see. What do those colors look like? We could know everything about bee brains and bee eyes, how the bees react to those colors and why, how the ability to see those extra colors evolved, etc. and we would still never know personally what those colors look like. If all mental activity is information processing, how is it that we could have all the explicit, articulable information about bee perception but still not know something about it? Couldn't we, with our far superior brains, crunch through the bee color perception algorithm? Couldn't we "walk through the code"? Most people would agree that such an exercise would not deliver a sense of what bee colors actually looked like to the bee.

The arguments about the inability of information processing or physical theories to explain subjective consciousness apply to the human brain itself. Just as the silicon, flipping bits, will never see red, we have no principled reason to derive the fact of our seeing red from the bit flipping in our own neurons.

Zombies

This point is illustrated by another thought experiment, that of the notion of a zombie. A zombie, in this context, is basically a person who has no phenomenal consciousness, that is, who experiences no qualia, but whose brain and cognitive machinery otherwise works just fine. A zombie has the same neural connections that you do, acts and talks like a normal person, but is "blank inside". A zombie brain essentially is a human brain, but considered only as an information processor. Note that a zombie would claim to see red, and seem to fall in love, and would in fact do all the things with its brain that we do with ours, producing all the same reactions, except that it would not be like anything to be the zombie.

The zombie thought experiment is controversial. There are some people who think that the whole notion of zombies is incoherent. If something talks, thinks (if by "thinking" we mean only the sort of processing that could be modeled on a computer, the pure information processing manifested in us by our neural firings), and acts like a conscious person, then that entity is conscious, full stop. To speculate about the conceivability of something that talks, thinks (in the limited way mentioned above) and acts like a person but is not conscious is like speculating on the conceivability of married bachelors. There is nothing extra about consciousness besides the functional mechanisms of information processing, and any claims to the contrary are just spooky mumbo-jumbo, the products of sloppy thinking. To them, it is as if I hypothesized an atom-for-atom copy of a water fountain, one that behaved exactly like the original water fountain, but just wasn't, you know, a water fountain.

Zombies make sense to me, though. Given our current understanding of brains, there is nothing inconsistent about the idea of a brain that works exactly as mine does now, producing the same output responses to the same input stimuli, and employing the same neural mechanisms, but which skips the phenomenal conscious part. The idea is essentially the same as that of the black-and-white Mary scenario. We do not have any principled, theoretical way (other than brute correlation at a higher level than we generally like our brute correlations) to get from a complete description of how the parts of the brain function to the fact of subjective consciousness. A failure of prediction of this sort is a sign that your science is incomplete at best, and quite possibly seriously flawed. With regard to the Hard Problem, this failure of entailment from the facts about brain processing to the facts about consciousness has been called the explanatory gap.

While it is often hard to draw a distinct line between qualia and cognitive, functional information processing (a fact I believe is underexplored, more on this later), there is something going on when I see red that is in principle unexplainable by any theory of mentation that allows for minds being implemented by computers. It stands as an extra fact about the universe that demands explanation. To define consciousness as the functional information processing is to define away the real mystery of consciousness.

Frankly, I suspect that zombies, in the strictest sense, are impossible. My hunch is that if you could copy me, molecule for molecule, what you would wind up with would be conscious, but for reasons that aren't even approached by our present-day science. Thus it is an indictment of current science that zombies are consistent with everything we know, even though they may someday turn out to be impossible in practice.

Not Everybody Likes The Hard Problem

I think it is fair to say that qualophilia is still a minority position. The mainstream orthodoxy, such as it is in these circles, is…the other folks. There are a lot of people who think that all this qualia talk is nonsense, or at least misguided: even if whatever it is we call "qualia" is real, it can be explained with "normal" physics, information processing, etc. and has no broader implications for our picture of what the world is made of or how it is put together. What should we call these people? Since I've called their opponents qualophiles, perhaps they should be qualophobes? I'm going to bow to convention in this case, though, and just call them physicalists (although I will use the term materialist somewhat interchangeably). Even this is a little misleading, or at least vague. It does justice to the idea that "it's all just physics", but it leaves open what we mean by that. Lots of qualophiles might agree that the universe is just made of physics - it's just that physics is a lot bigger than you think.

Physicalists generally like to characterize the belief that consciousness can not be reductively explained within present-day science as mystical mushy-headed wishful thinking. Sometimes they sneer openly ("Away, into the dust-bin of History!"), other times they are more polite (and patronizing: "Come on in - the water's fine! Don't be afraid to give up your quaint superstitions and your foolish vanities"). But nearly all of them at some point or another in their writings betray a certainty that anyone who believes that there is something deeply mysterious about consciousness is McCoy to their Spock: irrational, scared and desperate to hold onto the transcendent specialness of human beings, logic and science be damned. The reductive physicalists want to be the Grinch, standing on the side of Mt. Crumpit, with an ear cocked toward Whoville, hearing the Whos cry, "Boo hoo hoo, he stole our souls!".

There are, of course, people who really do want to cling to the belief in their souls at any cost to reason. But to imagine that all people who accept the Hard Problem are motivated by this desire is to indulge in kicking a straw man around, and an invitation to complacency and dogmatism. As an undergraduate atheist computer programmer, I was a physicalist. I wanted nothing more than to prove once and for all that minds really were just computers, and let humanity put that in its collective pipe and smoke it. I wanted to be the Grinch. I actually wondered (with some satisfaction) what sort of spin the Catholic church would try to put on an example of true artificial intelligence. Would some people get depressed? Would some commit suicide? Or would people, by and large, be mature enough to take it in stride and think it was a fascinating advance? Ultimately I was dragged kicking and screaming to the view that the mind can not be reduced to mere information processing.2

I do not usually put a lot of stock in sociology of science, nor do I like to emphasize the cultural aspects of scientific endeavor, but what science is, its proper aims and methods, is a lot less monolithic than most people believe. We must be open minded as we consider the kinds of methods we might have to use to explore whatever facts about the world Nature sees fit to present us with. Each scientific revolution (or, as the cool kids say, "paradigm shift") leaves us perfectly equipped to ask those questions that have just been answered. The fact that we don't know how to properly frame certain questions now is not an argument that the questions themselves are wrong - quite the contrary. It is the questions that we aren't sure even how to ask that should interest us the most. We should watch out for the hubris of thinking that even if our particular scientific theories are incomplete, our ways of framing them, our criteria for what things are worthy of scientific consideration, and the form we like our answers to take, are complete and perfect. We should not fall into the trap of thinking that if someone can't quite pose their question in terms that our intellectual framework is designed to accommodate, their question is automatically silly.

Science does not progress by sweeping things under the rug which do not fit conveniently into the established order. In fact, in any scientific era, the science of the day seems complete and perfect, except for one or two minor anomalies. It is these little anomalies that end up bringing down the entire edifice. Further, every time there is a true scientific revolution, not only are the existing theories overturned in favor of new ones, but inevitably the old methods and criteria for what constitutes a good theory are revised as well, often radically. People who resist the Hard Problem because it has no meaning within the bounds of third-person, mathematicized, objective scientific exploration are making a dogma of their methodology.

Is Consciousness Like Elan Vital?

Sometimes reductive physicalists compare belief that the Hard Problem is hard to vitalism of centuries past. This was the belief that there was some mysterious elan vital, a life force that animated living things beyond the mere mechanisms of locomotion, eating, reproduction, etc. The more we found out about how life worked at a molecular level, however, the less anyone believed in an elan vital. Belief in vitalism was ultimately exposed as a failure to appreciate how beautifully complex and exquisitely specific the mechanisms of life were. Once one understood the mechanisms, however, there was nothing left to explain. Similarly, argue the reductive physicalists, once we understand enough of the cognitive mechanisms of the brain, the Hard Problem will melt away into the details.

The problem is that subjective consciousness (or qualia) is not something we drag into the picture to explain something or other that we observe, as elan vital was invoked to explain what we observe about life, or to use another example reductive physicalists like, as the luminiferous ether was invoked to explain light waves in space in the 19th century. Consciousness is the raw data, the observed thing that needs explaining. It is the light, not the luminiferous ether.

Is Consciousness An Illusion?

Some people argue that what I call subjective consciousness is some kind of illusion. But what is an illusion? It is something that seems one way but is really another. My claims rest on the observation that red really seems red to me. The counter claim that this is an illusion boils down to, "red doesn't really seem red, it only seems that it seems red." But seeming, like multiplying by 1, is idempotent - inserting more "seeming" clauses into my claim does not change it one bit. Whether red seems red, or seems that it seems that it seems that it seems … red, the Hard Problem stands before us. The Hard Problem consists of the fact that anything seems like anything at all. If subjective consciousness is an illusion, then who or what exactly is the victim of that illusion, and how can it be such a victim without the Hard Problem being a problem for it? There is a fundamental bootstrapping problem. There simply is no basis for anything to seem like anything to anything, or anything with which to build any seeming, in a world made of utterly blind, stupid, amnesiac particles.
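The idempotence point can be put in a few lines of toy code (the `seems` function here is a hypothetical illustration, nothing more): once a claim is wrapped in one layer of "seeming", wrapping it again changes nothing, just as multiplying by 1 a second time changes nothing.

```python
# "Seems" as an idempotent operator: once applied, applying it
# again leaves the claim unchanged, just as multiplying by 1
# again and again leaves a number unchanged.

def seems(claim):
    # Collapse any number of nested "it seems that" prefixes into one.
    if claim.startswith("it seems that "):
        return claim
    return "it seems that " + claim

once = seems("red is red")
many = seems(seems(seems("red is red")))
assert once == many  # extra layers of seeming add nothing
```

However many layers the illusionist inserts, the claim that anything seems like anything at all survives intact.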

Keith Frankish is a proponent of what he calls Illusionism, which basically says exactly this: consciousness, as I characterize it, at least insofar as it is mysterious and most interesting, is an illusion. His account of how the mind works this illusion on itself resembles a lot of higher order thought ideas. He claims that while a lot of fancy brain processing goes on under the hood (the lower order thoughts), the mind represents all this to itself as some kind of deeply mysterious, fundamental, ineffable, qualitative experience. It is this representation to itself that I think is analogous to the higher order thought, and which Frankish says is actually a misrepresentation, albeit a very convincing one, for perfectly good evolutionary reasons.

As with a lot of physicalist arguments, I think "representation" is doing a lot of work here. If you define your processing and processors and modules functionally, causally, there is no representation or misrepresentation. Things just clatter along, doing what they do. To say that some part of the mind is the victim of a misrepresentation is fanciful and poetic language. Qualia do not seem ineffable to such a system, because nothing seems like anything. If your account of consciousness rests on A (mis)representing B, you had better have an ironclad account of representation in the first place. Personally, I don't have such an account, or at least one that leaves "representation" with any explanatory power whatsoever. But more on this later.

Is Qualophilia A Failure Of Imagination?

It is sometimes said that taking the Hard Problem seriously is a simple failure of imagination: the fact that I could not imagine traditional science (neurobiology, information theory, physics) explaining what it is like to see red says a lot more about my powers of imagination than it does about the actual limitations of traditional science. In the same way, it is argued, a vitalist's inability to imagine life being nothing more than molecular processes simply proved to be a failure on the vitalist's part to appreciate just how complex and tiny the molecular processes are. The vitalist's scepticism, however, ultimately came down to a matter of scale and complexity - the vitalists did not properly appreciate that the components of life could be quite that small or that complex. Claiming that more scale and complexity will turn ones and zeros (or their effective equivalents) into red makes no sense.

The fundamental components of the world to a physicalist are completely blind to one another, and completely stupid, and have no memory whatsoever. They are basic particles, and they just careen in one direction, then another. Even when they attract, repel, or collide with each other, they don't really "see" or "know" about each other - they just careen (with an occasional bonk). They don't know why, or what it is that is influencing them to careen in this particular direction at this particular speed. It sounds funny even to say it this way, but I think some people do not really sense in their guts just how blind, just how stupid, just how little memory the fundamental particles must have to a committed physicalist. To get anything not blind and not stupid out of them, you must attribute a lot of power to the notion of "levels of organization". You can't get blood from that particular stone, however. The blind and stupid stay blind and stupid, and utterly oblivious to any "levels of organization" no matter how many you put in a room or how they are arranged.

My accusation to physicalists is that they do not follow through on their own commitments in a rigorous and thorough way. They claim to be strict vegans about woo - qualitative subjective intuitions - but they help themselves to generous portions when it suits them. They like to frame theories of consciousness in terms that they do not define, but that draw on a whole lot of pretheoretical intuitions that just aren't there in the bits, bytes, quarks, and photons. We hear, for example, a lot about systems representing stuff (perhaps including a self-model) to themselves; we see mentions of integrated systems, "the system as a whole", and the like.

It strikes me, in fact, that the physicalist claim is an extravagant and unsupported one, a point which is often overlooked simply because physicalism has been the reigning orthodoxy for several centuries now. The physicalists claim that if you get enough unconscious stuff together in a big pile, and arrange the pile in a certain special way (a complex enough way, perhaps, or a pile that conforms to a certain functional schematic, or maybe Druidic runes), then poof! subjective consciousness will appear. They claim that this must be the case, because centuries of scientific advances have shown us that the reductive physicalist approach is the perfect framework for understanding the universe, so it simply must be the case that it is adequate to explain consciousness too, although they can't give us the exact details just now. I find this alchemical hypothesis at least as bizarre, spooky and mystical as anything I've ever heard. It is a leap of faith on their part, and the onus is on them to show us the money. It is foul play to try to shift the burden of proof back on the qualophiles, claiming that scepticism of the reductive physicalist position betrays some kind of failure of imagination.

Moreover, it is not a failure of imagination that leads me to take the Hard Problem seriously. On the contrary, it is because I can imagine a day not too far off (fifty years? One hundred?) on which we solve Chalmers's easy problems. On that day, cognitive science and neurobiology complete their intended programs and actually map every single event in the human brain, every information flow at any level of organization you please, every secretion and uptake of every neurotransmitter. On this day, it will be possible for us (like Mary in her black and white room) to detail everything that happens between photons striking my retina and my uttering, "What a beautiful sunset!". The cognitive scientists and neurobiologists will collect their Nobel prizes and go home satisfied, and nothing in their description of the brain will give the slightest hint of what it is like to see red, or why anything seems like anything at all. Yes, it is true that I can not imagine that day in detail, in the sense that I do not have that final theory at my fingertips down to the last synapse (otherwise I would be the one collecting the Nobel prize right now), and there's the rub, the physicalists would say. If I could see that theory in detail, they argue, it would be clear why red seems like red.

For nearly a century, mentioning consciousness was a career killer in the field of academic philosophy. In the last generation or so, however, the question of consciousness has been coming up with greater and greater urgency, and it is attracting pretty level-headed, math/science type people, not mystics, not new-agers, not religious wishful thinkers. I think this is so precisely for the reasons that I mentioned above: as science progresses, and closes in on its stated goals regarding our brains, its limitations stand out in ever sharper relief. The physical sciences, as their boundaries of inquiry are currently construed, deal only in functional behavior, externally measurable effects. There are perfectly valid questions about Nature (what is it like to see red?) that are completely outside the bounds of natural science as currently practiced. That is, it is conceivable that we could have a complete and perfect understanding of physics and all the other "hard" sciences, and have never articulated quantitatively, let alone answered, those questions. My ability to imagine this state of affairs may be incorrect in some way, but it certainly does not represent a failure of imagination on my part.

Physics and physicalism are not so much wrong (except in their claims of exclusivity) as they are incomplete. This is just the way science works. Newton invented a formal basis for physics, and for a long time it seemed dead accurate. But along comes Einstein, and it turns out that while Newton's physics was perfectly consistent and accurate within its domain, it was incomplete - it is merely a special case of a more general set of laws. Then a decade later, Einstein comes out with General Relativity, and shows that his own earlier work, while perfectly applicable within its proper domain, is really just a special case of still more general laws (hence "general" vs. "special" relativity). Science works by adding more layers to the outside of the onion. Old theories are not so often disproved by new ones as they are generalized and subsumed by them.

My seeing of red is not a philosophy; it is not a way of thinking about or interpreting some theory or idea; it is not a bit of linguistic sophistry; it is not an abstraction; it is not an inference I have drawn or some metaphysical gloss I have put over reality. It is a brute fact about the universe, a fact of Nature. It is really, really there. It is explanandum, not explanation. As such, it is incumbent upon our natural science to explain it. If my seeing of red is not amenable to the currently accepted methods of natural science, then so much the worse for those currently accepted methods. Those who deny the existence of qualitative consciousness remind me of the church officials who refused to look through Galileo's telescope because they did not want their neat and tidy theological world upset by what they might see.

So where do we go from here? We want the least weird description of what the universe would have to be like for beings like us to be in it. If there must be weirdness at all, let us confront it head-on, bracket it, constrain it, characterize it somehow that allows us to keep all the wonderful stuff we've already figured out. Loopy as it sounds, consciousness, or something that scales up to consciousness in certain kinds of systems, must be built in at the ground floor, as part of the fundamental furniture of the universe. Someday, after we have pinned it down a bit, it will stand right up there with mass, charge, and spin. This view is traditionally called panpsychism, but some people prefer pan-protopsychism to emphasize that it is not consciousness as we know it that stands as a fundamental building block of the universe, but some tiny crumb or spark that, when scaled up, aggregates into full-blown human consciousness under certain conditions or in certain types of systems. Also, "panpsychism" has, to some people, medieval, vitalist connotations; most contemporary panpsychists want to dissociate themselves from the belief that "rocks think". No one knows (yet) the principles according to which proto-consciousness aggregates into full-blown human consciousness, or what is so special about brains that they support this aggregation. In the range of potential answers to these questions there is room for many different versions of panpsychism, some more conservative (for lack of a better term) than others.

2 It has been suggested that philosophy would benefit if the word "mere" and its synonyms were banished from discourse.


Goals, Non-Goals, And Ground Rules

In philosophical debates, people argue a lot about defining terms and where the burden of proof lies. I think it is a good idea to lay down here the sorts of theories and explanations I'm interested in, and the kinds I think we should be looking for.

Folk Usage vs. Real Definitions

You Can't Even Define Your Terms!

For starters, sometimes physicalists hold it against qualophiles that they don't even define consciousness or qualia. I plead guilty to that. We have a mysterious phenomenon. We can point to it, and try to approach it, and start to say some things about it, or we can deny that it exists. What we can't do is define it (yet). That's just how science (or inquiry more broadly) works. Defining what you are talking about is the capstone of the pyramid, the very last thing you do. Isaac Newton said some very intelligent, perceptive, and true things about light, but he was centuries away from defining it.

While we should allow for the fact that we can't precisely define the thing we are trying to explain or understand (at least not at first), we should be as clear as possible in the terms we use in the explanation of that thing. A lot of philosophers are pretty glib in their use of terms like information, computation, symbol, represent, and even physics. If you argue that consciousness can be explained by any of those things, you had better be ready to tell me exactly what you mean by them.

Moreover, as we learn and theorize more about something we are interested in, like light or consciousness, we may be able to characterize it more precisely than we were before, when all we could do was point to it. There's a catch, though. What if we find some underlying constitution or structure of the thing we are interested in that really seems to explain a lot about it, even comes close to defining it, but does not completely line up with what we were pointing at originally? That is, our new way of characterizing the thing, with a little more theoretical basis, includes some stuff that we didn't use to think of as examples of that thing, or maybe excludes other things that we used to think of as examples of that thing.

The classic example of this is the folk conception of fish. Eventually we redefined "fish" based on internal anatomy, which in turn is based on evolutionary history. We decided that "fish" includes some very unfishlike things that scuttle along the ocean floor, but excludes whales and dolphins. The creatures that count as fish to us constitute a different set than those that would count as fish to a medieval person. As we get our theoretical feet under us, we should expect this kind of thing. We will want to redefine terms in ways that vary somewhat from our pretheoretical "folk" understandings of what those terms used to mean. This entails judgment calls as we discover and theorize: when do you nudge the definition of a term over a bit, and when have your new categories caused you to diverge so much from prior usage that you should just coin a whole new term to talk about what you mean, and leave the old term to the folk to use in everyday life?

As you make these judgment calls, there are two things you definitely should not do, however. First, you should not decide that all the ignorant pretheoretical people were wrong in their use of the term. They were happy calling whales fish. You redefined it for your own purposes, and that's fine, but they were not making an incorrect claim about the world in their "misuse" of the term fish. They just had a different definition.

Second, you should not err in the other direction, letting folk usage dictate the kinds of theories you entertain. If folk intuitions about how the world works were counted as definitive evidence against an otherwise compelling theory, we would never have figured out that the earth goes around the sun instead of the other way around. By the same token, if a philosopher decides, for example, that knowledge is justified true belief, and they have a good theory about that, it should not count against that theory that some clever person comes up with a "counterexample" that shows that the theory violates folk intuition (Gettier cases, in case you are interested). Unless, of course, your aim is a precise and elaborate articulation of folk intuition, which, following the analogy, makes you Ptolemy, not Copernicus. You are providing an elaborate reflection of people's pretheoretical intuitions and handing them back to them rather than figuring out what is really going on.

I have no interest in the project of writing a perfect descriptivist dictionary. When philosophizing about X, I don't want to come up with a perfectly worded, concise listing of the 17 ways in which people in the street talk about X, or even a single perfect formulation that exactly captures common usage of "X" with no remainder. For the most part, common usage is interesting insofar as it points to some actual thing, process, or fact in Nature that we should be exploring.

Is A Hot Dog A Sandwich?

Philosophers love to define terms. It is said that a philosopher would rather use another philosopher's toothbrush than use their terminology. Many debates are not so much about what is true or not true, but about how we should define and use terms. For instance, philosophers worry a lot about meaning. One of the divisions in all that worry is between those who are internalists about meaning and those who are externalists about meaning (don't worry about what this means (har!). More later.) It bothers me when people phrase this as the question of whether externalism or internalism is "true". They aren't the sort of thing that is true or false. The argument comes down to how you define "meaning". Why would you define it one way or another? How much do you want to respect folk usage? How much do you want to respect some potentially counterintuitive underlying natural truth? What features of the colloquial understanding of "meaning" do you want to focus on and preserve in your final theory, and which do you consider less important?

It would be refreshing if a philosopher of meaning would start a paper by saying "This might not be the way you, dear reader, think of meaning, but for the purposes of what I would like to say, I'm going to construe "meaning" internalistically. Please bear with me as I follow this self-imposed convention. Now that I've got that bit of definitional housekeeping out of the way, here are some terrific insights, clearly phrased..." Instead we get hundreds of papers pounding the table, shouting "Internalism is true!" If you are into this kind of literature, you might know that there is an overlapping debate as to whether mental content is narrow or broad. Jeez, I don't know, it depends on how you define content, assuming we even try to do so.

These are judgment calls. My own inclination is to coin our terms in such a way as to respect the really-there things in nature, to use "element" to talk about things like hydrogen, helium, and lithium, and not air or water. Just as I would rather be Copernicus than Ptolemy when it comes to respecting pretheoretical intuitions, I'd rather be Mendeleev than Aristotle when it comes to what we call elements. We want to carve Nature at the joints conceptually, and then we want to speak as clearly as possible to convey those conceptual carvings.

Wittgenstein famously said that what we cannot speak about we must pass over in silence. My gloss on that is what we don't quite understand, we must be vague about. I play a bit fast and loose with my own terminology, and I like to think that this is not mere sloppiness on my part. Premature hair-splitting is actually harmful. It encourages us to think, wrongly, that we are at a more refined stage of our inquiry than we are. We must remind ourselves to think broadly, boldly, and openly. When it comes to consciousness, we are still painting with broad strokes, maybe with a palette knife, or even a roller. We should not be pretending to use the fine watercolor brush.

Caricatures Of Some Physicalist Arguments

Evolution

Sometimes people point out that consciousness has survival benefits: it is a way of integrating information about the world and formulating intentions and instigating actions that help us. These accounts tend to focus on Chalmers's "easy problems" and thus miss the thing about consciousness that makes it so tricky.

Imagine that Charles Darwin, on his voyage on the Beagle, came across an island in the Pacific Ocean that had a peculiar ecosystem. The island was inhabited by slow-moving, fluffy, fat rodent creatures that nibbled on the grass that grew in abundance there. The island also had sharp-toothed predators, who would lie in wait for the rodents to come by. Every time one of the toothy predators sprung, though, the rodent it was stalking would levitate up in the air and hover there, while the predator paced below. Eventually the predator would skulk away, and the rodent would gently float back down to earth.

What would Darwin say about this? He might say that the ability to levitate saved the rodent, and that the rodent species evolved this ability because it enables them to escape being eaten by the predatory species, thus making them fitter to survive in their environment. He would have to be fantastically incurious, however, if that were all that he said. However advantageous the ability to levitate might be, and however neatly this advantage fits into his theory of natural selection, Darwin, one hopes, would immediately be moved to ask questions in an entirely different realm of inquiry. How is levitation possible in the first place?

Big Dumptruck. Really Big Dumptruck.

What if someone told you they had figured out the mystery of consciousness, at least in part? According to their hypothesis, among other things, a big dumptruck would be conscious. The catch is that it would have to be a really big dumptruck, planetary-sized. At least as big as our moon, maybe bigger than the earth (they haven't worked out all the variables yet). Obviously we are in no position to build such a thing, but if we could, it would definitely be conscious. If someone excitedly explained this theory to you, you might be a bit skeptical. You might express doubt, or ask questions about the details, or ask why it should be that a huge dumptruck is conscious. In response, imagine that your friend confidently told you that your imagination was too limited, that you were holding on to some ascientific prejudice or vanity, and that if you could really, really conceive of the size of this dumptruck, even you would see that it just was conscious. It would have to be. You just aren't trying hard enough. You just aren't understanding how big a dumptruck we're talking about here. Maybe you aren't applying the right concepts to the dumptruck, or considering it under the correct mode of presentation.

Size isn't the problem, and dumptrucks have nothing to do with consciousness at all. I don't have to calculate the gravitational field generated by each enormous lug nut on each continental wheel to tell you that you are not going to make a dumptruck conscious by just making it big. There is no connection between the two properties. While I acknowledge that the picture is a little more muddled when it comes to the distinction between phenomenal consciousness and information processing, this is kind of how I feel whenever someone tells me that consciousness "just is" information processing, but really, really complex information processing, or data structures arranged in a certain way, or self modifying self-models, or something like that. You just can't get there from here, and fleshing out the details won't help you. It's just not the kind of thing that could ever build up to consciousness, no matter how much of it you pile on.

Correlation vs. Entailment

What if every time I turned on my kitchen light switch, the neighbor's dog barked? Let us say that I tried this a hundred times, and each time, the dog barked, even when I got up in the middle of the night, snuck downstairs, and silently turned on the light. Imagine further that I hired an electrician to follow the wiring, and they found nothing out of place. Let's say I went over to the neighbor's house and examined the spot where the dog was tied up, and even put the dog's leash and collar on myself and lay down in the dog's spot and had my sister-in-law turn on the light and felt no effect - except that the dog still barked, uncollared, standing next to me.

I might say, at this point, that the electrician missed something, and that I should pay them more money and look harder for an explanation that fit within our existing understanding of wiring and such. I might shrug and say "I tried" and turn my attention to other concerns, and put a piece of masking tape over the switch so no one ever used it again. I might even think that something profoundly spooky and mysterious is going on involving ghosts or aliens, and get out a Ouija board. Any of these responses is arguably valid.

A response that is definitely invalid, however, would be to claim that the dog barking just is the kitchen light switch being turned on. In response to this "explanation", I could propose a zombie version of the scenario: imagine a world in which I turned on the kitchen light switch, and the dog didn't bark, and only the kitchen light turned on. We could argue all night about whether that scenario is logically conceivable or metaphysically possible or vice versa, but unless you accept the claim that the dog barking just is the kitchen light switch being turned on at face value, you can't rule it out. The best you can do is to throw up your hands and say there is a brute correlation between the kitchen light being turned on and the dog barking, and we can go no further. What you cannot say is that it should have been obvious beforehand that the kitchen light switch being turned on entailed the dog barking, and we should have expected it if we thought about it in enough detail.

Just-Is

Centuries ago, scientists ("natural philosophers") thought that heat was some kind of invisible fluid. When you placed something cold near something hot, the fluid equalized by flowing from the hot thing to the cold thing, until they both were the same lukewarm temperature. This fluid hypothesis checks out from an intuitive, folk-physics point of view. It seems to explain a lot of what we observe in the real world. Later, of course, we figured out that heat is the mean kinetic energy of molecules - that is, the energy of their random motion. For molecules of a given mass, when they are slow, that's less kinetic energy, and less heat. When they are fast, that's more kinetic energy, and more heat. When you put a hot thing near a cold thing, the speedy hot molecules collide with the sluggish cold molecules, and transfer some of that energy, and eventually everything becomes lukewarm.

The important thing here is that the molecular motion does not give rise to heat, or produce heat, or serve as a necessary condition for heat. The molecular motion just is heat, and heat just is molecular motion. Every single empirical result from any experiment you could ever perform about heat is explained by this hypothesis. It is awkward and impractical to speak in terms of molecular kinetic energy ("You want a sweater, Grandma? The average kinetic energy of the gas molecules in this room has dropped below your usual comfort threshold."). Because there are interesting and surprising (to us) things that happen when this energy is transferred at large scales, we study convection, conduction, and radiation of heat as if they were forces in their own right, but no one doubts that it's all just molecules in motion. Once God nailed down the truths about molecular motion, there was no more work to do (nor any work He could do) to come up with the "higher-level" laws of thermodynamics.
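The identity being described here can be written down exactly. For an ideal monatomic gas, kinetic theory relates temperature to the average kinetic energy of a single molecule (where k_B is Boltzmann's constant):

```latex
\langle E_{\mathrm{kinetic}} \rangle = \tfrac{3}{2} \, k_B T
```

Note that there is no arrow of causation in this equation. Temperature does not cause, or arise from, molecular kinetic energy; it is average molecular kinetic energy, under a different name and in a different unit.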

Am I Asking Too Much?

This is what we are shooting for. I want a theory that says consciousness just is X, with no remainder. Is that fair? Is this kind of austere reductionism demanding too much of my opponents, the physicalists? I think it is fair. This is how science works. It is an inherently reductive enterprise, with nothing but efficient causation, no final causation. All the causing is done from behind, all the constituting is done from below. No new properties are allowed to slip in between the layers. This is what Rutherford was getting at when he said that all science is either physics or stamp collecting.

There is nothing wrong with higher level sciences, and as a matter of practice, they have a wide-open future ahead of them, but they are, in a certain principled sense, derivative. There will always be meteorologists, and knowing a lot about physics won't give you much of a leg up when you start studying the science of weather. No one thinks that we could or should plot the trajectory of a hurricane by calculating each molecule of water and air that makes it up. Meteorology has its own ontology, its own laws, constants, and the rest of it. It is a free standing science in its own right. Nevertheless, no one doubts that, in principle, a hurricane just is all those molecules, and there is nothing going on in the dynamics of the hurricane that isn't 100% entailed by the physics of those molecules. It may be astronomically complicated in practice, but in terms of how our universe is put together, it is really quite simple. So it is with all the sciences. The explanatory gap, the failure of entailment, with regard to consciousness, should embarrass us.

I demand of the physicalists that they not engage in mushy thinking, that they be thorough in their own reductive project, that they apply their precision and rigor all the way down. We can have intermediate levels, and objects, laws and all the rest of it, but only if we remember at all times that this is a convenience to us, due to our limitations, and really just shorthand for a much more complicated story going on at the lowest levels. There is a reason physicists call their eventual theory that will unite relativity and quantum mechanics the Theory Of Everything (TOE), and not the Theory Of All The Low Level Stuff (TOATLLS). They are not shy about what they think is entailed by getting the microphysics right.

If physicalists carry out their own project with integrity, either consciousness will stick out as an inexplicable problem for their picture of reality (in which case they won't be physicalists anymore), or they will be forced to fall back on eliminativism. This is ultimately the position that most of them take, however they sugar-coat it. It means they eliminate consciousness by basically denying that it exists in the Hard Problem sense. It comes down to something like "I report on red, I respond to red, but I simply don't know what you are talking about when you speak of the inherent redness of red." This, to me, is the equivalent of a little kid sticking her fingers in her ears and shouting, "La la la! I'm not listening!"

My instincts and temperament in all this are not woo woo or mystical. Quite the contrary. Ultimately, as a qualophile, I am the hardest of hardcore reductionists. I like reductionistic explanations. I want to see the reduction. Show me the money, no hand-waves allowed. Show me heat, and make it obvious that it just is average kinetic energy. Show me a hurricane, and explain that it just is water molecules, even if it is inconvenient to deal with them at that level. Show me consciousness, and make it clear, at least in principle, that it just is billiard balls banging around. If we are going to be good reductionists, when we can't do that for any given phenomenon, maybe we've hit bottom. Maybe we've got something that is already as low as we can go in our analysis, even if it seems surprisingly big, or complicated, for the kinds of things we like to think of as occupying the lowest levels in our reductive pictures of the universe.

Why I Am Optimistic

Lately, there has been a surge of interest in consciousness, and a growing acknowledgment that there is a deep, deep problem here. It is exciting to bear witness to a critical mass of smart people coalescing around a problem like this. It is a great thing to be here now. In general, I am impressed with the integrity of the inquiry so far. Almost without exception, the contemporary books and articles about consciousness I have read are written by honest people just trying to get to the heart of the problem. They use plain language, and are willing to admit what they don't know. This bodes well, I think.

People have tried to figure out consciousness for millennia. Why should we crack this nut now? Basically, we have better tools now. Maybe not good enough tools, but certainly better. Obviously neuroscience and physics have progressed since Descartes' day, but we also have some versatile conceptual tools. Along with the 20th century's explosion of information technology, there has been a great deal of rigorous thinking about computation and symbol manipulation. The closely related field of information theory has also helped us invent a language which allows us to begin to talk about ways in which the brain might work. A century or more ago, the operative conceptual model of mechanistic functioning was the steam engine. Now the operative conceptual model is the computer, which, while insidiously misleading in some ways (I think), is a step closer to the truth. At least it is more illuminating to think about why minds are not like computers than it is to think about why they are not like steam engines.

Besides the deep thought we've done about computation and information, we have also discovered quantum mechanics in the 20th century. Besides the implications of quantum physics itself (more on this later), quantum mechanics has forced us to think hard about what we are doing when we do physics, the limits of physical explanations of anything, and where physics ends and philosophy begins. So maybe we will make it over the hump this time, or maybe we will fall back, fall apart, and the problem will lie dormant for another 50 years. I can't tell. I just hope we break through in my lifetime.

Some physicalists say that the qualophiles are crazy to worry so much about consciousness. The ancients had the excuse that, with so much of the natural world mysterious, the mind was just another mystery. Now that we know so much about neuroscience and computation and stuff like that, we have no such excuse. It is a weird contrarian anachronism that now, of all times, some perverse collection of philosophers decides that consciousness does not fit into the natural world. They're almost like flat-earthers.

I think it goes the other way. It is precisely because we know so much that this problem is rearing its head now. While we don't have all the details yet, we can see the trajectory of science, information theory, etc. and can get some sense of the outer perimeter of what they could ever tell us. We can think more clearly than ever before about the kinds of questions we can ask them and the kinds of questions they are equipped to answer. Our blind faith, scientism, is giving way to a more mature and realistic sense of the quantitative sciences as tools, incredibly well suited to some tasks, but not so much for others. Laboratory results are great, indispensable even, but the current impasse will only be resolved by a conceptual breakthrough, a shift in our way of thinking. We may have to expand what we think of as science, its proper aims and methods, in a way that does not throw the baby out with the bathwater. We stand now at one of those rare moments in history in which philosophers may actually contribute something useful.

If we were to conduct a little office pool, I'd give it several decades. The state of the field of consciousness studies is somewhat analogous to the state of physics in the year 1900. Most physicists at the turn of the 20th century thought that they pretty much had the basic conceptual apparatus, and just needed to flesh out the details (Max Planck's physics teacher famously advised him to take up the piano, as there was nothing left to do in physics but fill out a few more decimal places). But by 1900, there were some experimental results which could not be explained within the theories of the day (the so-called black body radiation experiments). Some people were beginning to suspect that they were missing a big piece of the picture. This is essentially where we stand with consciousness. The year in which we finally had a complete, unified quantum theory is usually given as 1927, so I figure a few more decades of flailing, plus a margin of about 50% because we don't even have the same sort of firm Newtonian style framework for consciousness that physicists did in 1900.


Physicalism: Are We Really Living In A Material World?

The term "physicalism" may be interpreted in at least two different ways. First, it may be taken to mean the claim that the stuff that the laws of physics describe is all there is in the universe. There is no mysterious other stuff, no magic spray applied to reality above and beyond the photons and electrons, etc., all of which behave strictly in accordance with physical laws. This sounds like a simple enough claim, at least to the extent that one ought to be able to say whether or not one agrees with it, but (bear with me) even this is a little ambiguous.

There's No Such Thing As A Purely Physical World

The second interpretation of the term "physicalism" is the somewhat stronger claim that not only is the stuff that physics describes all there is, but that the laws of physics are a complete description of that stuff (or will be, as soon as we complete our laws of physics). I would argue that this second, stronger type of physicalism is definitely false, whether or not you buy any of the Hard-Problem-of-consciousness arguments.

A good physicist (which is to say a philosophically humble physicist) will tell you that physics provides a way of predicting the outcomes of certain experiments, and that is all. Strictly speaking, the famous Copenhagen interpretation of quantum mechanics applies across the board - "shut up and calculate." If you set up a ramp and roll a ball down it, and you measure all the angles, weights, and stuff like that, you can use physics to tell you things like how fast the ball will be moving at the bottom of the ramp, how long it will take, and how much momentum it will have. If you can ask your questions quantitatively, in a lot of cases physics can (at least in principle) give you quantitative answers and predictions. Physics is not metaphysics - it does not pretend to describe the ultimate nature of reality. As a matter of fact, it cannot, even in principle, describe reality "all the way down".

Supervenience

Each hard science rests, in a sense, on the science below it (biology rests on chemistry, chemistry rests on physics). This is to say that, for example, once all the facts about the physics of the universe are fixed (all the physical laws and all the positions and momenta of all physical particles), it is automatically true that the chemistry of the universe must be the way it is, and it could not be any other way. The physical laws and facts necessarily entail all the chemical laws and facts. Another way of saying this is that the facts about the chemistry of the universe are a logical consequence of the facts about the physics of the universe. There is simply no way you could have two universes that were physically identical, but chemically different. In the same way, the chemical facts, in turn, logically entail the biological facts, and so on up through the layers of science. As far as the hard sciences are concerned, once God invented physics in all its detail, He was done - He had no more work to do to invent chemistry or biology. The fancy philosophical word for this is supervenience. We say that chemistry supervenes on physics, because chemistry constitutively depends on physics. Chemistry just is physics, looked at (by us) a certain way, chunked up (by us) a certain way.

Each layer in this pile of science consists of a) extrinsic functional properties (which, taken together, support or implement the layer above), and b) intrinsic properties (which are supported, or implemented, by the extrinsic functional properties of the layer below). The field of biology studies biological entities which behave the way they do ultimately because of their chemistry. Chemistry studies compounds which behave the way they do ultimately because of physics, which these days means quantum mechanics. Quantum mechanics behaves the way it does because…?

At the lowest layer of physics we can, in principle, only know the extrinsic functional properties, those which give rise to the macroscopic physical world we see around us. All we have to describe the world at that level is the famous Schrödinger equation. We do not know, and we cannot know, the intrinsic nature of the matter and energy described functionally (with nearly 100% accuracy) by that equation. We can say quite accurately how matter and energy behave at the lowest levels, in terms of how they impinge on other matter and energy, but we can't say anything beyond that about what it is that is doing the behaving. Something's functional characteristics are perfectly described by the equations of physics, but we will never be able to know what that something is. Some people (including most practicing physicists) say that there is no "something else" besides a perfect functional description, and that once you have specified how something behaves at the lowest level of physical reality, there is nothing left to talk about. At the very least, it makes no sense to speculate about such things.

Unimplemented API

To use an analogy from computer science, it is as if each layer of natural science could be thought of as a program module. Each module is implemented a certain way, and each presents an API (application programming interface) to the level above. Each module makes use of, or calls down into, the API presented by the level below. Each module does not, and should not, know or care how the level below is implemented, as long as the lower level module faithfully presents the correct API. But suppose that out of curiosity, although we operate at a certain level, we wonder how the API we use at that level is implemented. So we read the source code of the module below and find that it, in turn, relies on an API presented to it by a module still further down. It seems a bit absurd to me to suppose that at some low level we get to the magic API that just is - that is, the API that exists only as an API, but which is not implemented at all!
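To make the analogy concrete, here is a toy sketch in Python (my own illustration; the class and method names are invented, and real science is of course not this tidy) of a stack of modules in which each layer is implemented entirely by calling down into the API of the layer below, and the bottom layer's API has nothing implementing it at all:

```python
class Physics:
    """Bottom of the stack: presents an API, but nothing implements it."""
    def interact(self, a, b):
        # The "magic API that just is": an interface with no
        # implementation underneath it.
        raise NotImplementedError("no deeper layer to call into")

class Chemistry:
    """Implemented entirely by calling down into the Physics API."""
    def __init__(self, physics):
        self.physics = physics
    def bond(self, a, b):
        return self.physics.interact(a, b)

class Biology:
    """Implemented entirely by calling down into the Chemistry API."""
    def __init__(self, chemistry):
        self.chemistry = chemistry
    def metabolize(self, a, b):
        return self.chemistry.bond(a, b)

# Every call chains down the stack and bottoms out in a method that
# exists only as an interface.
bio = Biology(Chemistry(Physics()))
try:
    bio.metabolize("glucose", "oxygen")
except NotImplementedError as e:
    print("bottomed out:", e)
```

Every working software stack eventually grounds out in something that is not itself an API: transistors, voltages, stuff. The absurdity the analogy points at is a stack that is interfaces all the way down.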

Rosenberg's Game Of Life Physics

Gregg Rosenberg once used an analogy with the game of life (Rosenberg 1998). The game of life consists of a (possibly infinite) two dimensional grid of bits, or pixels. That is, each square of the grid is either on or off, 1 or 0. There is also a clock of sorts, in that we speak of the state of the grid at time t, where t is an integer. We begin the game with some configuration of on and off squares on the grid, at time 0. For each subsequent tick of the clock, the state of each square on the grid depends on the state of its eight surrounding neighbors at the previous tick according to the following formula: a square that is on in tick t will stay on in tick t+1 if at tick t it had two or three on neighbors; a square that was on at tick t but had any other number of on neighbors will be turned off in tick t+1; any square that was off at tick t but had exactly three on neighbors will be turned on in tick t+1.
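The rules are compact enough to state in a few lines of code. Here is a minimal sketch in Python (my own illustration, not anything from Rosenberg), representing the grid as the set of coordinates of the squares that are on:

```python
from collections import Counter

def step(live):
    """Advance a game of life grid by one tick.

    `live` is the set of (x, y) coordinates of squares that are on.
    Returns the set of squares that are on at the next tick.
    """
    # Count how many on neighbors every square has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A square is on at t+1 if it had exactly three on neighbors at t,
    # or if it was on at t and had exactly two.
    return {sq for sq, n in counts.items() if n == 3 or (n == 2 and sq in live)}

# A "blinker": three on squares in a row oscillate between a horizontal
# and a vertical bar with period two.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)  # → True
```

Notice that nothing in the code says what an "on" square actually is. When this runs, "on" and "off" end up instantiated by voltages in a real computer, which is exactly the point Rosenberg goes on to make.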

Much has been written about the fascinating complexity that arises out of these simple rules. Rosenberg asks us to imagine the game of life as a toy physics, and consider a two dimensional universe in which the rules listed above were the only laws of physics. He then asks whether consciousness could exist in such a universe (it has been shown that one can implement a universal Turing machine - a computer - in the game of life).

Ha - trick question! The rules, as laid out in the game of life, can't serve as a complete specification of a universe, even a toy one. What does it mean to have a pure game of life universe? What does it mean for a square to be on or off? These are properties whose only specification within the game is that they be distinguishable from each other: what is on? It's not off. What is off? It's not on. The properties of on and off are circularly defined, and the rules, then, are defined in terms of these circularly defined properties. Whenever we implement the game of life, we represent on and off with checkers on a board, or more often, electronics in a computer. For us, these properties must be instantiated by some substrate.

You couldn't have a "pure" game of life universe, because the rules and properties as specified underdetermine the universe. There is no such thing as a "bare" property, characterized entirely in terms of its contrast to other properties, but this is exactly what a pure game of life universe asks us to imagine. Rosenberg used the game of life as a toy physics to make the point, but as he says, our own real physics is in no better shape. It is more complicated, so the circle is a bit larger, but "pure" physics in our world makes no more sense than it does in the game of life world.

What's At The Bottom? Information? Nothing?

Physics is a castle in the sky, an elaborate structure built on a foundation of nothing. Or rather, built on circularly defined terms, much like a mathematical system. Each of the lowest-level things that physics deals with (the fundamental particles and forces) is defined mathematically in terms of the other particles, forces, or some constants. Everything in physics, then, is defined relationally, in terms of the other things in physics. Physics gives us a schema, a description of causal dynamics, but it is inherently silent about the stuff doing the causing. Physics is a playwright who writes the dialog but leaves the casting to someone else.

Any system whose parts obeyed the same relations among themselves, interacting according to the same patterns that our physics describes, would automatically have physics identical to ours, no matter what its parts "really" were. We could, in principle, transpose our physics to another universe made of entirely different stuff, as long as the causal dynamics of that stuff matched perfectly the causal dynamics of the stuff that instantiates our physics.

Another way of saying this is that the physical universe is multiply realizable. Given a complete and perfect set of physical laws and physical facts, even though all the other hard sciences would be locked in place, God would have still more work to do before He had a complete recipe for a universe. He could create any number of different universes, made of different stuff, but which were physically identical (and thus chemically identical, and biologically identical, etc.), as long as the structures of the causal dynamics among whatever He chose to make each universe out of were identical. It would be impossible for an inhabitant of any of those universes, from within the science of physics, to get underneath the physics and see the intrinsic nature of the matter out of which his or her particular universe was made. This is true completely without regard to any questions of consciousness.

There is only one kind of stuff in the universe, but physics is inherently incapable of completely describing that stuff. This unknowability of the intrinsic properties of the lowest level of reality is going to be a problem for (or at least an aspect of) any physics (as the science of physics is currently construed), and is not particular to quantum mechanics. There is always going to be, in principle, a gap at the lowest level of our descriptions of the natural sciences. Bertrand Russell made the point quite nicely in a couple of quotes: "The only legitimate attitude about the physical world seems to be one of complete agnosticism as regards all but its mathematical properties." and "Physics is mathematical not because we know so much about the physical world, but because we know so little: it is only its mathematical properties that we can discover. For the rest, our knowledge is negative."

There is a world of difference between saying "because we can't know what is at the bottom rung of the ladder, we must remain humbly silent and agnostic", and saying "because we can't know or talk about it, it must not exist." It is this second, positive claim that I do not agree with. We are asked to believe in a world composed of pure causal disposition with literally nothing doing the disposing.

Some people, when confronted by the fact that physics all comes down to circularly defined equations and/or algorithms, draw exactly the wrong conclusion: that our universe is mathematical or algorithmic at its core. Since no matter how advanced our particle accelerators, no matter how true our theories, all of physics must rest on abstract equations, then abstract equations must lie at the bottom of the physical world. Electrons, by this way of thinking, are made of information, quarks are algorithms. This idea was championed by the theoretical physicist John Wheeler, who called it "It from bit".

The map is not the territory. Just because all of our ways of talking about physics must, in principle, bottom out in a cluster of equations, it does not follow that the stuff we are talking about is made of equations. There is still something down there doing the equating. We just can't know what it is, or anything about it other than its outwardly efficacious participation in the causal mesh, which is described so well by the equations. As our technology becomes more and more refined, we can represent more and more information with less and less physical stuff (vacuum tubes to transistors, to integrated circuits, with ever more transistors crammed on a chip). To imagine, however, that the universe itself has perfected its "technology" to the point where it can leave coarse physical matter behind entirely, and instantiate "pure" information, information in itself, is nutty. It is an old, familiar kind of nuttiness, however. It is the same late medieval Platonism that led thinkers to hypothesize concentric crystalline spheres of ever more rarity and fineness around the earth, with angelic ether filling the void between them, and producing music we are too base to hear.

So on one hand we have a hole at the lowest level of our best descriptions of reality, and on the other hand we have an inconvenient extra ingredient, consciousness, that doesn't seem to fit anywhere in our descriptions, but probably lives at a pretty low level. The idea that the extra ingredient might fit in the hole has been explored by Whitehead, Russell, and Rosenberg. It is essentially the idea behind panpsychism. Panpsychism, at least this form of it, is resolutely monist: there is only one fundamental kind of stuff in the world. This is why I don't like calling panpsychists "dualists" and why I prefer the clearer term "qualophiles" for the whole Hard-Problem-citing, zombie-conceiving, metaphysically speculating lot of us.

Panpsychism? But Doesn't That Have Huge Problems?

"All right," one might reasonably argue, "Maybe we can't know what a quark really is, we can only know exactly how it behaves. So what? My world and my understanding of it, including the laws of physics, remain exactly the same, no matter what the intrinsic nature of a quark really is." To base a theory of consciousness on this unknowability within science of the lowest levels of reality, we have to say not only that this hole at the bottom of physics is filled by some form of proto-consciousness, but that there is some way this stuff, as such, scales up to the level of human minds. Even if some spark of consciousness instantiates the extrinsic behavior of quarks and electrons, those sparks stay atomized at the quark level, and everything else plays out according to the normal laws of physics. In terms of "explaining" human consciousness we are thrown back upon conventional physicalism. The causal dynamics scale up to our level with the underlying qualitative implementation of quarks not having any role in my seeing the redness of red. So now you've made the situation even worse, since a) you haven't solved the problem of high-level consciousness and b) you have needlessly cluttered up our picture of how the universe is put together.

This is known as panpsychism's combination problem. For large, complicated things like ourselves to be conscious in some special way that outruns anything we might expect to emerge from the causal dynamics, we have to explain how this stuff scales up from the level of a quark to the level of a mind.

We also have to say how human-scale consciousness could be meaningfully efficacious. What could it possibly buy us in terms of its effect on the world beyond simply instantiating the lawful low-level regularities that science has already mapped out so accurately? Given the apparent causal closure of the physical world, how could it do so in a way that added anything to what we know from our physical laws and facts, but that did not also violate those laws and facts? It seems that at best, such consciousness would be, as the philosophers say, epiphenomenal: it can't do anything.

All of this blurs into more general problems qualophiles have, like the tricky relationship between qualia and "mere" cognition, perceiving vs. judging, the seeming second orderliness of perception (seeing red is very hard to separate from knowing that you are seeing red), and the whole infinite regress of the homunculus in the Cartesian Theater thing. I will get to these in good time.


Epiphenomenalism: Even If Consciousness Is Real, What Could It Possibly Do?

Epiphenomenalism is the claim that even if consciousness is real in the Hard Problem sense, there is no room for it to be causally efficacious. That is, we may really see red and feel pain in ways that are irreducible to the mindless unconscious interactions of our brains' neuroanatomical parts, but our consciousness is a helpless observer. The mindless unconscious parts still do their mindless unconscious work, including controlling our muscle movements and speech, while the consciousness stays trapped in the press box, experiencing it all, including the delusion that it itself is controlling anything.

The main argument for epiphenomenalism is that since we know physics pretty well, and we are getting better all the time at neuroscience, sometime in the not too distant future we should be able to solve all of Chalmers's "easy problems". That is, we will be able to characterize all of our behavior (even the "behavior" of our mental processing, stripped of any considerations of subjective qualitative consciousness) strictly in terms of nuts and bolts neuronal processing without recourse to notions of consciousness. The physical world is causally closed. That is, every physical thing that ever happens has an understood physical cause. Therefore there is no way that some hitherto undiscovered mysterious force of consciousness could have any physical effect, including the effect of making my neurons fire, my muscles move, etc. If a perfectly accurate physical account can be given of every neuronal event that happens as I type this, or comment on the beauty of a sunset, and this account is given strictly in terms of ordinary physics, then it puts people who believe that the Hard Problem exists in an awkward position. Subjective consciousness, if it exists in the Hard Problem sense, would appear to be redundant, an extra, a loose thread hanging off the natural world, or it would violate the laws of physics.

So does consciousness just watch the processing, without influencing it at all? I know I am conscious in some way that can not be reduced to a functional description of the causal interactions of my micro-parts, and my consciousness certainly thinks that it is in control of my fingers as I type this. It thinks (or experiences) that when I write about how subjective consciousness feels, each word I write is dictated (or at least strongly influenced) by my actual, immediate perception of how subjective consciousness feels.

The epiphenomenalist would have us believe that this is not true, that there is no real contact between the physical body and brain on one hand and consciousness on the other, or at least only one-way contact (which is problematic in its own right). So while my consciousness has the perception of writing a sentence about consciousness, and that of commanding fingers to press certain keys on my keyboard, the completely unconscious mechanistic brain is really ordering the very same fingers to type out the very same sentence. Essentially, as far as our actions are concerned, including the ones we most closely associate with qualia, we are zombies. We just happen to have a parasitic consciousness along for the ride, one which is deluded into thinking that it is calling the shots. For this to be the case, of course, the mechanistic processes would have to maintain absolutely perfect synchrony with my actual consciousness throughout my entire lifetime, or my consciousness would notice the discrepancy. It is as if, given a puppet dancing on a stage, we were told that the puppet is really doing the dancing by itself, but so well, in such perfect sync with the puppeteer pulling the strings, that the puppeteer never catches on.

There are some ideas, the old saying goes, that are so preposterous only a philosopher would take them seriously. No, there is no knock-down purely logical argument against epiphenomenalism, but as would-be scientists, we should feel comfortable discarding the more wildly implausible ideas, and epiphenomenalism is such an idea. Evolutionarily, why would nature have played such an elaborate trick on us? Why not just evolve us as zombies and have done with it? It's almost as though epiphenomenalism were cooked up as an idea guaranteed to make everyone unhappy. The physicalists hate it because it takes qualia seriously, and the qualophiles hate it because no one wants to admit qualia that exist but don't do anything.

In the epiphenomenalists' defense, there is nothing mysterious about the synchrony between puppeteer and puppet if some third party is actually controlling both of them. The fingers type a sentence about consciousness mechanistically, and the subjective consciousness says (and believes), "I meant to do that." Some volitional center could be controlling both our actions and our experience. In this case, we paint our thoughts with a much thinner coat of qualitative consciousness than we might otherwise think.

In our more generous moods, we might believe that the mechanistic zombie part of us is very complex, and it is worth its while to do some cognitive garbage collection and house cleaning, to investigate and thereby improve its internal mechanisms for absorbing, digesting, and applying information about the world and itself. Self-knowledge, even understood purely functionally, has definite behavioral advantages for a complex enough system. Perhaps our purely cognitive machinery has evolved to constantly self-evaluate, to second-guess all of its conclusions and perceptions. Might not such a system "notice" that at some low level of internal representation it could probe no further, that it could not get inside its seeing of red, for example? Might this impasse attract the system's attention? Could such a system's self-probing possibly end up being externally articulated, like Chalmers' book or this one? Would the system ever come up with an idea like epiphenomenalism? After all, it was the mindless mechanistic neural processing which typed this very paragraph, completely unaided by my consciousness, according to the epiphenomenalist. It is not immediately obvious that the answer to these questions is no. In effect, one can imagine that there are zombie, cognitive, "easy problem" analogs to all of our qualia.

It could be, then, that while there is a consciousness in the Hard Problem sense, it monitors unconscious cognitive processing, as if it had a lot of diagnostic probes alligator-clipped onto exposed wires, so to speak, at various stages of this processing. This almost makes epiphenomenalism respectable, but it is still pretty implausible. If the coupling of qualia to functional states and mechanisms is so very tight that every qualitative state is dictated by a functional state, to the extent that even my wondering about consciousness corresponds perfectly to some functional self-diagnostic probing, epiphenomenalism becomes a moot point. There are not, then, two distinct parts, a mindless functional part and a helpless (but deluded) conscious part; instead there is just one mechanism which has a qualitative aspect. We are aware of every decision we make, every action we perform as our own, because at a very fine-grained level our immediate conscious experience is of the very mechanism that is actually doing the driving. If my mind's functioning has two aspects, cognitive and experiential, can we even say that one aspect is doing all the willful work and not the other? If you couple the two aspects (functional and experiential) closely enough to make epiphenomenalism remotely plausible, then you couple them too closely to say that one is efficacious and the other is not.

Epiphenomenalism does raise a serious challenge, though. If it is false, and qualitative phenomenal consciousness is really guiding my fingers now, as it seems to be, then this spooky mysterious thing called consciousness has macroscopic, observable effects in the real physical world. Where, then, is the interface? Why haven't brain scientists noticed by now that certain neurons fire at certain times for no reason that they can explain with current physics? If you accept the Hard Problem, and you believe that epiphenomenalism is false, then you are committed to the belief that current physics is wrong, or at least substantially incomplete in some sense that allows for an as-yet undiscovered force to have a physical effect. Somehow, large-scale, high-level consciousness is able to exert an influence on, for example, motor neurons, and make them do things that they simply would not do if they were only subject to ordinary physical laws without the influence of consciousness. This is a tall order.

When discovered, I suspect that it won't be so much a case of some single event happening that we can't explain, as it will be a case of a lot of events, each of which should be random according to accepted physical laws, but which happen in sync with each other, or in some pattern, which, once recognized, will be undeniable. Any one of these events, when studied alone, will be seen to obey normal physical laws, but considered together, they will have a pattern and an organization that we can not account for with normal physical laws. I imagine that the influence exerted by consciousness on physical systems will ultimately be compatible with the laws of physics. This, of course, is pure speculation, but it points us in a certain direction. If we are to take the Hard Problem seriously, and if we reject epiphenomenalism, we are placing our bets on some high-level, large-scale process, structure, or field that has qualitative content and influences physical things through a loophole in physics.


Ned Block's Turing Test Beater

The Turing test is a test for machine intelligence devised by the British genius Alan Turing in the middle of the 20th century. The idea is this: A person conducts a typed conversation with a system. If after some period of time of chatting in this manner, say half an hour, the person conducting the test can not determine that the system they are talking to is not human, then the system is intelligent.

In my opinion, a system that passes the Turing test is precisely a system that passes the Turing test (and is therefore remarkable), but it is not necessarily intelligent (in a sense that does justice to our intuitions of what this term means, at any rate), and certainly not necessarily conscious. Turing himself deliberately sidestepped the question of consciousness when he formulated the test. Nevertheless, it is tempting (to some) to regard any system that exhibits intelligent behavior as automatically conscious as well as intelligent, while I do not necessarily regard such a system as either.

Consider a few scenarios. First, imagine that instead of a computer taking the Turing test, a committee of three people is being tested. The connection between the committee and the human conducting the test is slow enough that genuine collaboration among the committee on each answer is possible. According to the hypothesis underlying the Turing test, if the committee passes the test, it, taken as a single system, is conscious. How many intelligences or consciousnesses are there then? Three? Four? One?

Another interesting scenario is a slight variation on an idea first presented by Ned Block (1995). You know the old saying that infinite monkeys typing would eventually produce the complete works of Shakespeare? What if, instead of letting our monkeys pound away randomly, we got systematic with that approach and really exhausted the combinatorial possibilities?

Let us say that the test lasts half an hour. Let us also say that the communication line between the human conducting the test (let us call this person the judge) and the system under test (let us just call this the system's side of the conversation) is somewhat slow, but fast enough not to be frustrating to an average human typist, say 50 characters per second. Let us also say that both parties are capable of typing upper and lower case letters, the numerals, the common punctuation marks, say, 100 different characters in all. Given that both ends of the conversation can type at the same time for the entire duration of the test, each of them may type any of 100 characters (or no character at all) each 50th of a second during the entire half hour test. That means there are exactly 100 to the power of (2 (parties) X 50 (characters per second) X 60 (seconds per minute) X 30 (minutes in the test)), or 100^180,000, different entire conversations that could possibly take place during the half-hour test, from both parties holding down the 'a' key for the whole half hour, to both of them holding down the 'z' key for the whole half hour.
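The arithmetic can be checked mechanically. A few lines of Python (with the simplifying assumption, as above, that the "no character" option is folded into the 100) reproduce the count:

```python
alphabet = 100               # distinct characters either party can type
parties = 2
chars_per_second = 50
seconds = 60 * 30            # the half-hour test

slots = parties * chars_per_second * seconds  # total typing opportunities
assert slots == 180_000

# The number of possible complete two-sided transcripts:
conversations = alphabet ** slots
assert conversations == 10 ** 360_000  # 100^180,000 = 10^360,000
```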

Now, imagine that we write a simple computer program to generate each of these possible conversations, and that we submit the resulting (staggering) pile of transcripts to a vast committee and give them a huge amount of time to sort them into two piles: pile A of all of the conversations in which the system side of the conversation seemed non-human, and pile B, the (much smaller) pile in which the system side of the conversation seemed to conduct a conversation that would pass for rational human conversation to an average person.

Note that pile B contains the rational-seeming responses on the system side of the conversation, even if the judge's side is gibberish - pile B is selected only on the basis of the reasonableness of the system side of the conversation. In fact, it contains rational-seeming responses to all possible conversations from the judge's side (there are 100 to the power of 50 (characters per second) X 60 (seconds per minute) X 30 (minutes in the test), or 100^90,000, of them). Moreover, it contains, for each of the 100^90,000 possible judge's sides of the conversation, all possible rational-seeming system sides of the conversation. After all, given any particular judge's side of the conversation, how many ways are there of filling in the gaps so that the system seemed to respond as another human would? A lot.

The committee would then throw the pile A out. They would take pile B, the one with all the coherent, human-seeming conversations on the system side, and load this pile into a computer, along with a very, very simple program. Once the test started, the program would only choose randomly, each 50th of a second, from among the conversations in its memory that are consistent with everything that has already been typed by both sides of the conversation. Once it has chosen a conversation that meets this criterion, it simply types out the character that the conversation says the system should type out at that particular 50th of a second (or no character at all, if that's what the chosen conversation specifies).

This program could be written in about half an hour by any decent programmer, and it would be guaranteed to pass the Turing test, using this huge pile of canned responses, assuming the vast committee exercised proper judgment in deciding which conversations appeared human and which did not. The intelligence in such a system is in the data, programmed in by the human committee, and clearly not in the tiny, stupid execution engine that reads and acts on the data. Given that the Turing test supposedly tests for machine intelligence, not the intelligence of the human programmers of the machine, I think that most people would agree that to characterize such a system as conscious or even intelligent misses the point of consciousness and intelligence.
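The execution engine really is as tiny and stupid as I claim. Here is a sketch in Python (the transcript representation and toy scale are mine, invented for illustration): each tick it filters the table for transcripts consistent with everything typed so far, picks one at random, and types whatever that transcript dictates.

```python
import random

def block_machine(pile_b, get_judge_char, emit, ticks):
    """pile_b: the pre-vetted transcripts, each a list of
    (judge_char, system_char) pairs, one pair per tick.
    All the apparent intelligence is in pile_b; none of it is here."""
    history = []  # (judge_char, system_char) pairs typed so far
    for t in range(ticks):
        judge_char = get_judge_char()  # what the judge typed this tick
        # Keep only transcripts consistent with the conversation so far.
        live = [c for c in pile_b
                if c[:t] == history and c[t][0] == judge_char]
        chosen = random.choice(live)   # any consistent transcript will do
        emit(chosen[t][1])             # type what the transcript dictates
        history.append((judge_char, chosen[t][1]))

# A toy two-tick run over a tiny, hand-made "pile B":
pile_b = [[("a", "x"), ("b", "y")],
          [("a", "x"), ("a", "z")],
          [("b", "q"), ("b", "r")]]
out = []
judge = iter(["a", "b"])
block_machine(pile_b, lambda: next(judge), out.append, 2)
assert out == ["x", "y"]
```

The real pile B would of course be astronomically large, and would contain a consistent continuation for every possible judge's side; the engine itself never grows beyond these few lines.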

Assuming that you accept that Block's machine is not conscious (even if, by some characterizations of the term, it is intelligent), if you have a favorite computer architecture that you think is conscious, you really should specify where the difference is between your machine and Block's. Some people insist that a truly conscious computer must be a parallel processing machine, with many processors (inter)acting together. But it has been shown that any parallel processing computation can be emulated perfectly well on a single processor (for each timeslice, you make your single processor simulate each of the parallel processors in turn for that timeslice. Then you move on to the next timeslice. So the whole computation just takes n times as long as it would on an n-processor parallel machine).
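The emulation argument can itself be written down. In the sketch below (illustrative Python; each "processor" is modeled as a step function over a shared snapshot of state), the serial version visits each processor in turn per timeslice, and its trajectory is identical, state for state, to the parallel version's, just n times slower:

```python
def run_parallel(processors, states, timeslices):
    """Reference semantics: all n processors step 'at once', each reading
    the shared snapshot taken at the start of the timeslice."""
    for _ in range(timeslices):
        snapshot = list(states)
        states = [step(snapshot, i) for i, step in enumerate(processors)]
    return states

def run_serial(processors, states, timeslices):
    """A single processor simulating all n, one at a time per timeslice.
    The same snapshot discipline makes the result identical."""
    for _ in range(timeslices):
        snapshot = list(states)
        new_states = []
        for i, step in enumerate(processors):   # n sequential sub-steps
            new_states.append(step(snapshot, i))
        states = new_states
    return states

# Two toy "processors", each reading the other's state and incrementing it:
procs = [lambda snap, i: snap[(i + 1) % len(snap)] + 1] * 2
assert run_parallel(procs, [0, 10], 5) == run_serial(procs, [0, 10], 5)
```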

Block's machine is monstrously complex - as complex as any you could propose - the complexity is in the table. In essence, the table is the algorithm. Whatever your favorite conscious architecture, it should be clear that its outward behavior would be exactly matched by that of Block's machine. There is some mapping between your machine, with its models-of-self, or its Darwinian memosphere, or whatever, and Block's machine. Both machines are doing the same thing. The only difference between Block's table-driven Turing Test beater and any more "intelligent" algorithm is purely one of optimization.

The difference between the two algorithms is merely one of encoding, much like the difference between a program written in assembly language as opposed to C++, or the difference between an uncompressed file and one that has been shrunk with a data compression utility. Any "true AI" is nothing above and beyond Block's Turing Test Beater, just more efficient, with a lot of redundancies squeezed out. Just because it is easier for you to understand a machine by seeing its bits flipping at a "higher level", or as "representing" this or that, does not make it so.

We all have a comfortable intuition that the "true AI" is doing something special, but it is doing the exact same thing that Block's table-driven machine does, and it is doing it in exactly the same way, albeit more optimally from an implementation point of view. But this intuition that the true AI is somehow fundamentally different than the huge table plus tiny execution engine is anthropomorphism on our part. We have a hard time tracking the complex internal workings of the true AI, and it seems smart, so we assume that something pretty special must be going on in there. The onus is squarely on the defender of some purported conscious computer algorithm to explain exactly where (and why), in the mapping between their favorite algorithm and Block's, the fairy of consciousness waves her magic wand.


Functionalism: Can't We Just Say That Consciousness Depends On The Higher-Level Organization Of The System?

Functionalism, roughly, is the idea that consciousness is to be identified not with a particular physical implementation (like squishy gray brains or the particular neurons that the brains are made of), but rather with the functional organization of a system. The human brain, then, is seen by a functionalist as a particular physical implementation of a certain functional layout, but not necessarily the only possible implementation. The same functional organization could, presumably, be manifested or implemented by a computer (for example), which would then be conscious. It is not the actual physical substrate that matters to a functionalist, but the abstract "block diagram" that it implements. The doctrine of functionalism may fairly be said to be the underlying assumption of the entire field of cognitive science. Or, at least it would be, if cognitive science ever whispered the "c" word, which it does not.

Functionalism gained adherents within philosophy of mind as a response to what are known as identity theories. These are theories that say that the conscious mind just is the neurology that implements it, the gray squishy stuff. Identity theories, however, exclude the possibility that non-brain-based things could be minds, like computers or aliens. Functionalism is predicated on the notion of multiple realizability. This is the idea that there might be a variety of different realizations, or implementations, of a particular property, like consciousness. Another way of saying this is that there might be many micro states of affairs that all produce or constitute the same macro state of affairs, and it is this macro state of affairs that defines the thing we are interested in.

I have several problems with functionalism.

Black Boxes

In order to even have a block diagram of a given system, you have to draw blocks. It is tempting to be somewhat cavalier about how those blocks are drawn when imposing an abstract organization on a physical system. Functionalism tends to assume that Nature drew the lines: that there is an objective line between the system itself and the environment with which it interacts (or the data it processes) and that there is an objective proper level of granularity to use when characterizing the system. Depending on how fine the granularity you use to characterize a system, and the principles by which you carry out your abstraction of it, its functional characterization changes drastically. Functionalists tend to gloss over the arbitrariness of the way these lines are drawn.

The functionalist examines a system, chooses an appropriate level of granularity (with "appropriateness" determined pretty much solely according to the intuitions of the functionalist) and starts drawing boxes. Within those boxes, the functionalist does not go, as long as the boxes themselves operate functionally in the way that they are supposed to. It is central to the doctrine of functionalism that how the functionality exhibited by the boxes is implemented simply does not matter at all to the functional characterization of the system overall. For this reason, the boxes are sometimes called "black boxes". The boxes themselves are opaque, and as long as they faithfully execute their functional role with regard to each other and the system as a whole, we don't care what happens within them.

It is worth noting that, as Russell pointed out, physicalism itself can be seen as a kind of functionalism. At the lowest level, every single thing that physics talks about (electrons, quarks, etc.) is defined in terms of its behavior with regard to other things in physics. If it swims like an electron and quacks like an electron, it's an electron. It simply makes no sense in physics to say that something might behave exactly like an electron, but not actually be one. Because physics as a field of inquiry has no place for the idea of qualitative essences, the smallest elements of physics are characterized purely in functional terms, as black boxes in a block diagram. What a photon is, is defined exclusively in terms of what it does, and what it does is (circularly) defined exclusively in terms of the other things in physics (electrons, quarks, etc., various forces, a few constants). Physics is a closed, circularly defined system, whose most basic units are defined functionally. Physics as a science does not care - and in fact can not care - about the intrinsic nature of matter, whatever it is that actually implements the functional characteristics exhibited by the lowest-level elements.

It could be argued that consciousness is an ad hoc concept, one of those may-be-seen-as kind of things. However I choose to draw my lines, whatever grain I use, however I gerrymander my abstract characterization of a system, if I can manage to characterize it as adhering to a certain functional layout in a way that does not actually contradict its physical implementation, it is conscious by definition. Consciousness in a given system just is my ability to characterize it in that certain way. To take this approach, however, is to define away the problem of consciousness.

This may well be the crucial point of the debate. I believe that consciousness is not, can not possibly be, an ad hoc concept in the way it would have to be for functionalism to be true. I am conscious, and no reformulation of the terms in which someone analyzes the system that is me will make me not conscious. That I am conscious is an absolutely true fact of nature. Similarly, (assuming that rocks are in fact not conscious) it is an absolute fact of nature that rocks are not conscious, no matter how one may analyze them. Simply deciding that "conscious" is synonymous with "being able to be characterized as having a functional organization that conforms to the following specifications…" does not address why we might regard conscious systems as particularly special or worthy of consideration.

Is The Design Inherent In The Implementation?

Functionalists believe that in principle, a mind could be implemented on a properly programmed computer. Put another way, functionalists believe that the human brain is such a computer. But when we speak of the abstract functional organization of a computer system (as computer systems are currently understood), we are applying an arbitrary and explanatorily unnecessary metaphysical gloss to what is really a phonograph needle-like point of execution amidst a lot of inert data.

When a computer runs, during each timeslice its CPU (central processing unit) is executing an individual machine code instruction. No matter what algorithm it is executing, no matter what data structures it has in memory, at any given instant the computer is executing one very simple instruction, simpler even than a single line from a program in a high-level language like C or Python. In assembly language, the closest human-friendly relative of machine code, the instructions look like this: LDA, STA, JMP, etc. and they generally move a number, or a very small number of numbers, from one place to another inside the computer. Of the algorithm and data structures, no matter how fantastically complex or sublimely well constructed, the computer "knows" nothing, from the time it begins executing the program to the end. As far as the execution engine itself is concerned, everything but the current machine instruction and the current memory location or register being accessed might as well not exist - they may be considered to be external to the system at that instant. If someone (say an engineer) were to disconnect the entire rest of the computer's memory except that being accessed between instructions, the computer would not know or care - it would blithely hum along, executing its algorithm perfectly well. Note that in this scenario, both the memory containing the past and future steps in the algorithm itself as well as any data on which those instructions operate are being removed.
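The point can be made concrete with a toy sketch. The following is not any real instruction set or machine, just an invented miniature of one, but it shows what the text above describes: at each step, the execution engine touches exactly one instruction and at most one memory cell, and everything else might as well not exist.

```python
# A toy fetch-execute loop. The instruction names (LDA, ADD, STA, HLT)
# and the machine itself are invented for illustration, not any real ISA.

def run(program, memory):
    acc = 0   # accumulator register
    pc = 0    # program counter
    while True:
        op, addr = program[pc]          # the one instruction that exists "now"
        if op == "LDA":                 # load a memory cell into the accumulator
            acc = memory[addr]
        elif op == "ADD":               # add a memory cell to the accumulator
            acc += memory[addr]
        elif op == "STA":               # store the accumulator into a memory cell
            memory[addr] = acc
        elif op == "HLT":               # halt
            return memory
        pc += 1
        # Between steps, everything except program[pc] and memory[addr]
        # could be disconnected and the machine would never notice.

memory = {0: 2, 1: 3, 2: 0}
program = [("LDA", 0), ("ADD", 1), ("STA", 2), ("HLT", None)]
result = run(program, memory)   # memory cell 2 now holds 2 + 3 = 5
```

However grand the algorithm, the loop above is all that is ever actually happening.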

But could we not say that the execution engine, the CPU, is not the system we are concerned about, but the larger system taken as a whole? Couldn't we draw a big circle around the whole computer, CPU, memory, algorithm, data structures and all? We could, I suppose, choose to look at a computer that way. Or we could choose to look at it my way, as a relatively simple, mindless execution engine amidst a sea of dead data, like an ant crawling over a huge gravel driveway. If I understand the functioning of the ant perfectly, and I have memorized the gravel or have easy access to the gravel, then I have 100% predictive power over the ant-and-driveway system. Any hard-nosed reductive materialist would have to concede that my understanding of that system, then, is complete and perfect. I am free to reject any "higher-level" interpretation of the system as an arbitrary metaphysical overlay on my complete and perfect understanding, even if it is compatible with my physical understanding. It is therefore highly suspect when broad laws and definitions about facts of Nature are constructed that depend solely on such high-level descriptions and metaphysical overlays.

The higher-level view of a system can not give you anything real that was not already there at the low level. The system exists at the low level. The high-level view of a system is just a way of thinking about it, and possibly a very useful way of thinking about it for certain purposes, but the system will do whatever it is that the system does whether you think about it that way or not. The high-level view of the system is, strictly speaking, explanatorily useless (although it may well be much, much easier for us, given our limited capacities, to talk about the system in high-level terms rather than in terms of its trillions of constituent atoms, for example).

Imagine that you are presented with a computer that appears to be intelligent - a true artificial intelligence (AI). Let us also say that, like Superman, you can use X-ray vision to see right into this computer and track every last diode as it runs. You see each machine language operation as it gets loaded into the CPU, you see the contents of every register and every memory location, you understand how the machine acts upon executing each instruction, and you are smart enough to keep track of all of this in your mind. You can walk the machine through its inputs in your mind, based solely on this transistor-level pile of knowledge of its interacting parts, and thus derive its output given any input, no matter how long the computation.

You do not, however, know the high-level design of the software itself. After quite some time, watching the machine operate, you could possibly reverse-engineer the architecture of the software. It is the block diagram of the software architecture that you would thereby derive that a functionalist would say determines the consciousness of the computer, but it is something you created, a story about the endless series of machine code operations you told yourself in order to organize those operations in your mind. This story may be "correct" in the sense that it is perfectly compatible with the actual physical system, and it may in fact be the same block diagram that the computer's designers had in their minds when they built it.

This only means, however, that the designers got you to draw a picture in your mind that matched the one in theirs. If I have a picture in my mind, and I create an artifact (for example, if I write a letter), and upon examining the artifact, you draw the same (or a similar) picture in your mind, we usually say that I have communicated with you using the artifact (i.e. the letter) as a medium. So if the designers of the AI had a particular block diagram in their minds when they built the AI, and upon exhaustive examination of the AI, you eventually derived the same block diagram, all that has happened is that the machine's designers have successfully (if inefficiently) communicated with you over the medium of the physical system they created.

The main point is that before you reverse-engineered the high-level design of the system, you already had what we must concede is a complete and perfect understanding of the system in that you understood in complete detail all of its micro-functionings, and you could predict, given the current state of the system, its future state at any time. In short, there was nothing actually there in terms of the system's objective, measurable behavior that you did not know about the system. But you just saw a huge collection of parts interacting according to their causal relations. There was no block diagram.

A computer is a Rube Goldberg device, a complicated system of physical causes and effects. Parrot eats cracker, as cup spills seeds into pail, lever swings, igniting lighter, etc. In a Rube Goldberg device, where is the information? Is the cup of seeds a symbol, or is the sickle? Where is the "internal representation" or "model of self" upon which the machine operates? These are things we, as conscious observers (or designers) project into the machine: we design it with intuitions about information, symbols, and internal representation in our minds, and we build it in such a way as to emulate these things functionally.

The computer itself never "gets" the internal model, the information, the symbols. It is confined to an unimaginably limited ant's-eye view of what it is doing (LDA, STA, etc.). It never sees the big picture, little picture, or anything we would regard as a picture at all. By making the system more complex, we just put more links in the chain, make a larger Rube Goldberg machine. Any time we humans say that the computer understands anything at a higher level than the most micro of all possible levels, we are speaking metaphorically, anthropomorphizing the computer1.

A Hypothesis About Hypotheticals: Do Counterfactuals Count?

The functional block diagram itself does not, properly speaking, exist at any particular moment in a system to which it is attributed. Another way of putting this is to point out that the functional block diagram description of any system (or subsystem) is determined by an ethereal cloud of hypotheticals. You can not talk about any system's abstract functional organization without talking about what the system's components are poised to do, about their dispositions, tendencies, abilities or proclivities in certain hypothetical situations, about their purported latent potentials. What makes a given block in a functionalist's block diagram the block that it is, is not anything unique that it does at any single given moment with the inputs provided to it at that moment, but what it might do, over a range of inputs. The blocks must be defined and characterized in terms of hypotheticals.

It is all well and good to say, for example, that the Peripheral Awareness Manager takes input from the Central Executive and scans it according to certain matching criteria, and if appropriate, triggers an interrupt condition back to the Central Executive, but what does this mean? Isn't it basically saying that if the Peripheral Awareness Manager gets input X1 then it will trigger an interrupt, but if it gets input X2 then it won't? These are hypothetical situations. What makes the Peripheral Awareness Manager the Peripheral Awareness Manager is the fact that over time it will behave the way it should in all such hypothetical situations, not the way it actually behaves at any one particular moment.
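A minimal sketch makes the point vivid. The "Peripheral Awareness Manager" below is entirely invented, as are its inputs; what makes it the module it is, is the whole range of hypotheticals it would satisfy, not anything it does at any one moment.

```python
# An invented module defined, as all functional modules are, by a bundle
# of "if…then" clauses over hypothetical inputs.

def peripheral_awareness_manager(signal):
    # If it gets input X1, it triggers an interrupt; if X2, it does not.
    if signal == "X1":
        return "INTERRUPT"
    return "NO_INTERRUPT"
```

At runtime only one branch ever executes; the untaken branch belongs to the module's definition, not to anything the module is doing right now.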

What If We Prune The Untaken Paths?

Couldn't we save a lot of effort and just make a degenerate conscious functional system, one that was only conscious in a particular situation, that is, only conscious given a particular set of inputs? With the possible inputs whittled down in this way, we could make a vastly simpler conscious machine by making each functional block only capable of dealing properly with that particular system input, and the internal signals that would result from the system as a whole being given that input. The Peripheral Awareness Manager would only be given input X1, so we wouldn't have to program in any capability of dealing with input X2. We could get rid of any tricky calculations the module had been doing and just get it to spit out canned responses to the limited set of inputs we will give it.

Once we simplified the system in this way, it really could not be said to adhere to the functional block diagram anymore at all - it would be hardwired to do one thing, to behave consciously in only one particular situation. At this point, we are on a slippery slope towards something like Ned Block's table-driven Turing Test beater. No one looking at the system without knowledge of how it was designed would ever be able to reverse engineer the original block diagram in all its complexity. The functionalist would say then that it is not conscious. But if we gave it the input for which it was designed, it would do exactly the same thing in exactly the same way that the "conscious" functional system would have when given the same input, down to every last machine instruction. Note that I am not just saying that the system as a whole responds in the stripped-down version the way it did in the full-blown version, but all the inter-black-box signals and internal behavior are the same as well.
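The pruning move can be sketched in a few lines. Both versions below are invented for illustration: one "full-blown" module that handles the whole hypothetical range of inputs, and a canned lookup table that handles only the one input it will ever actually receive.

```python
# The full-blown module: defined over the whole range of hypothetical inputs.
def full_manager(signal):
    return "INTERRUPT" if signal == "X1" else "NO_INTERRUPT"

# The pruned module: a canned response for the only input we will supply.
CANNED = {"X1": "INTERRUPT"}

def pruned_manager(signal):
    return CANNED[signal]   # fails on any input it was not built for

# On the input it was built for, the two are indistinguishable:
same = pruned_manager("X1") == full_manager("X1")
```

Given input X1, the two modules do exactly the same thing; they differ only over inputs that never arrive.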

The defining characteristics of the functionalist's black boxes disappear without a lot of behavioral dispositions over a range of possible input values, smeared out over time. But there is nothing in the system itself that knows about these hypotheticals, calculates them ahead of time, or that stands back and sees the complexity of the potential state transitions or input/output pairings. At any given instant the system is in a particular state X, and if it gets input Y it does whatever it must do when it gets input Y in state X. But it can not "know" about all the other states it could have been in when it got input Y, nor can it "know" about all the other inputs it could have gotten in state X, any more than it could know that if it were rewritten, it would be a chess program instead of an AI.

We, as designers of the system, can envision the combinatorially explosive range of inputs the system would have to deal with, the spreading tree of possibilities. But the world of algorithms is a deterministic one, and there are no potentials, no possibilities. There is only what actually happens, and what does not happen doesn't exist and has no effect on the system. We anthropomorphize, and project our sense of decision-making, or will, onto our machines. In real life, there are no potential paths or states available to the machine. None that matter, anyway.

Let's say we had a perfectly running functionally defined system chugging along, and its data structures and such were all integrated in just the right way for it to be conscious. But now we played a trick on our system, and we electrically disconnected various memory chunks when it was not accessing them, reconnecting them just as the queries went out on the bus from the CPU for that particular data structure or chunk of memory. It should be clear that the system as a whole would never know. It would run perfectly with its just-in-time memory. Whatever integration it exhibits is purely functional, spread out over time, and takes the form of a whole bunch of "if…then" clauses. I'm not saying that integration in this way is imaginary, just that it does not quite do justice to our intuitions about what "integrated" means. If you ask me a particular question when I am in a particular state, I will give you the correct answer according to my functional specification. You can do a lot of complex work with such a scheme, but adherence to a whole mess of "if…then" clauses never amounts to anything beyond adherence to any one of them at any moment.

If a highly "integrated" system is running, and certain of its submodules are not being accessed in a given moment, the system as a whole, its level of "integration", and our opinion about the system's consciousness, could not legitimately change if those submodules were missing entirely or disabled. Poisedness is in the eye of the beholder. We ought to be very careful about attributing explanatory power to something based on what it is poised to do according to our analysis. Poisedness is just a way of sneaking teleology in the back door, of imbuing a physical system with a ghostly, latent purpose. A dispositional state is an empty abstraction. A rock perched high up on a hill has a dispositional state: if nudged a certain way, it will roll down. A block of stone has a dispositional state: if chipped a certain way with a chisel, it will become Michelangelo's David. That, as the saying goes, plus fifty cents, will buy you a cup of coffee.

We have an intuition of holism. Any attempt to articulate that in terms of causal integration, smeared out over time, defined in terms of unrealized hypotheticals, fails. At any given instant, like the CPU, the system is just doing one tiny, stupid crumb of what we, as intelligent observers, see that it might do when thought of as one continuous process, over time. To say that a system is conscious or not because of an airy-fairy cloud of unrealized hypothetical potentials sounds pretty spooky to me. In contrast, I am conscious right now, and my immediate and certain experience of that is not contingent on any hypothetical speculations. My consciousness is not hypothetical - it is immediate. The term "if" does not figure into my evaluation of whether I am conscious or not.

Integrated Information Theory

IIT has gotten a lot of buzz recently. Proponents of IIT insist that it is not a functionalist theory, but I see it as the paradigmatic example of one. IIT claims to be able to quantify the degree of integration of a system in a variable called phi (Φ). IIT makes a great deal of reentrancy and feedback loops. All of this integration and reentrancy is functionally defined, however. The integration in integrated information theory is causal integration, smeared out over time, and attributes causal or constitutive properties to unrealized potential events and states.

An algorithmically implemented submodule is a deterministic, causal device. It does not know or care about self-reference. If it pushes a ping pong ball into its output tube, and the ball disappears, it's gone. If, a moment later, a ping pong ball emerges from its input tube, it doesn't make a bit of difference to the submodule whether that is the same ping pong ball or a different one sent from a distant submodule.

When we see a recursive computer routine, the Bertrand Russell in us kicks in, and we go: self-reference! Whoa… but the routine simply transferred control to another routine. The fact that the next routine is itself is not interesting and makes no functional difference. We have an intuition that self-reference is weird and special, but it is a mistake to suppose that a machine "acting on itself" must therefore be weird and special. We need to dig and figure out what self-reference means to us, and why it is weird and special in our case.
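A tiny example shows how little functional difference self-reference makes. Below, a routine that calls itself is set beside the "same" computation with the self-call handed off to a distinct copy; nothing in the machine's behavior distinguishes the two.

```python
# Genuine self-reference: the routine calls itself.
def fact_recursive(n):
    return 1 if n == 0 else n * fact_recursive(n - 1)

# "Delegation": the same computation, but each call hands control
# to a separate copy of the routine instead of to itself.
def fact_copy(n):
    return 1 if n == 0 else n * _fact_helper(n - 1)

def _fact_helper(n):
    return 1 if n == 0 else n * fact_copy(n - 1)
```

Whether control returns to the routine itself or to an identical twin, the sequence of operations, and the result, are the same.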

Besides assuming that there is something special or magic about feedback as opposed to feed-forward signals in themselves, IIT relies upon potential actions and connections, by blunt assertion: if a module is missing or disabled, the phi of the overall system is decreased, but if the module is merely not doing anything at the moment, it still contributes to phi in some ghostly unspecified way.

Worse, IIT bluntly asserts an identity between full-blown qualitative consciousness and phi (i.e. causal integration). It is a brute identity theory, albeit a functionalist one. IIT is the worst of both worlds. It fails to explain consciousness in a convincing way while cleaving to a materialistic world view, but also takes consciousness seriously in the way the materialists say we shouldn't. It's like panpsychism, but less plausible.

Life Is Real. Isn't It Defined "Merely" Functionally?

Couldn't this argument be used to declare the concept of life off limits as well? After all, life is a quality that is characterized exclusively by an elaborate functional description, one that involves reproduction, incorporating external stuff into oneself, etc. Life is not characterized by any particular physical implementation: if we were visited by aliens tomorrow who were silicon-based instead of carbon-based, we would nevertheless not hesitate to call them alive (assuming they were capable of functions analogous to reproduction, metabolism, consumption, etc.).

But according to the above argument, I am alive right now, even though our definitions of what it means to be alive all involve functional descriptions of the processes that sustain life, and these functional descriptions, in turn, are built on an ethereal cloud of hypotheticals. There is nothing in a living system that knows about these hypotheticals, or calculates them, so how can we say that right here and now, one system is alive and another dead, when they are both doing the same thing right here and now, but one conforms to the functional definition of a living thing, and one does not? Therefore, there must be some magical quality of life that can not be captured by any functional description. Yet we know this is not true of life, so why should we think it is true of consciousness?

Like so many other arguments, it comes down to intuitions about the kind of thing consciousness is. Life is, at heart, an ad hoc concept. The distinction between living and non-living things, while extremely important to us, and seemingly unambiguous, is not really a natural distinction. The universe doesn't know life from non-life. As far as the universe is concerned, it's all just atoms and molecules doing what they do.

People observe regularities and make distinctions based on what is important to them at the levels at which they commonly operate. We see a lot of things happening around us, and take a purple crayon and draw a line around a certain set of systems we observe and say, "within this circle is life. Outside of it is non-life." Life just is conformance to a class of functional descriptions. It is a quick way of saying, "yeah, all the systems that seem more or less to conform to this functional description." It is a rough and ready concept, not an absolute one. Nature has not seen fit to present us with many ambiguous borderline cases, but one can, with a little imagination, come up with conceivable ones. It is useful for us to classify the things in the world into groups along these lines, so we invent this abstraction, "life", whose definition gets more elaborate and more explicitly functional as the centuries progress. We observe behaviors over time, and make distinctions based on our observations and expectations of this behavior. So life, while perfectly real as far as our need to classify things is concerned, has no absolute reality in nature, the way mass and charge do.

This is not to denigrate the concept of life or to say that the concept is meaningless, or that any life science is on inherently shaky foundations. The study of life and living systems, besides being fascinating, is a perfectly fine, upstanding hard science, with perfectly precise ways of dealing with its subject. I am just saying that "life" is a convenient abstraction that we create, based on distinctions that, while perfectly obvious to any five-year-old, are not built in to the fabric of the universe. Crucially, as we examine life in our world, every single thing we have ever observed about life is comfortably accommodated by this functional understanding of the concept, even if, strictly speaking, it is a little ad hoc.

To be a functionalist is to believe that consciousness is also such a concept, that it is just a handy distinction with no absolute basis in reality. I maintain, however, that our experience of consciousness (which is to say, simply our experience) has an immediacy that belies that. We did not create the notion of consciousness to broadly categorize certain systems as being distinct from other systems based on observed functional behavior over time. Consciousness just is, right now.

What If We Gerrymander The Low-Level Components?

What's more, we can squeeze all kinds of functional descriptions out of different physical systems. Gregg Rosenberg has pointed out that the worldwide system of ocean currents, viewed at the molecular level, is hugely complex, considerably more so than Einstein's brain viewed at the neuronal level. I do not think I am going out on a limb by saying that the worldwide system of ocean currents is not conscious.

What if, however, we analyzed the world's oceans in such a way that we broke them down into one inch cubes, and considered each such cube a logic component, perhaps a logic gate. Each such cube (except those at the very bottom or surface of the ocean) abuts six neighbors face-to-face, and touches 20 others tangentially at the corners and edges. Now choose some physical aspect of each of these cubes of water that is likely to influence neighboring cubes, say micro-changes in temperature, or direction of water flow, or rate of change of either of them, and let this metric be considered the "signal" (0 or 1, or whatever the logic component deals with). Now suppose that for three and a half seconds in 1943, just by chance, all of the ocean's currents analyzed in just this way actually implemented exactly the functional organization that a functionalist would say is the defining characteristic of a mind. Were the oceans conscious for those three and a half seconds? What if we had used cubic centimeters instead of cubic inches? Or instead of temperature, or direction of water flow, we used some other metric as the signal, like average magnetic polarity throughout each of the cubes? If we change the units in which we are interested in these ways, our analysis of the logical machine thereby implemented changes, as does the block diagram. Would the oceans not have been conscious because of these sorts of changes of perspective on our part?
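A toy version of this worry: the same "physical" readings, under two different and equally arbitrary choices of what counts as the signal, implement different logical behavior. The readings below are made up for illustration.

```python
# Pretend per-cube temperature readings from four cubes of seawater.
temps = [10.2, 10.7, 9.9, 11.3]

# Two equally arbitrary analyses of the same physical facts:
bits_threshold_10 = [1 if t > 10.0 else 0 for t in temps]  # "signal" = above 10.0
bits_threshold_11 = [1 if t > 11.0 else 0 for t in temps]  # "signal" = above 11.0

# Same water, same moment - but a different bit stream, and hence a
# different "logic component", depending on our choice of analysis.
```

Nothing in the water changed; only our bookkeeping did, yet the logical machine we thereby attribute to it is a different machine.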

What if we gerrymander our logic components, so that instead of fixed cubes, each logic component is implemented by whatever amorphous, constantly changing shape of seawater is necessary to shoehorn the oceans into our functional description so that we can say that the oceans are right now implementing our conscious functional machine? This is a bit outrageous, as we are clearly having our chunking of logic components do all the heavy lifting. Nevertheless, as long as it is conceivable that we could do this, even though it would be very difficult to actually specify the constantly changing logic components, we would have to concede that the oceans are conscious right now. Is it not clear that there is an uncomfortable arbitrariness here, that a functionalist could look at any given system in certain terms and declare it to be conscious, but look at it in some other terms and declare it not conscious?

Our deciding that a system is conscious should not depend on our method of analysis in this way. I just am conscious, full stop. My consciousness is not a product of some purported functional layout of my brain, when looked at in certain terms, at some level of granularity. It does not cease to be because my brain is looked at in some other terms at some other level of granularity. That I am conscious right now is not open to debate, it is not subject to anyone's perspective when analyzing the physical makeup of my brain. It just is absolutely true. Consciousness really does exist in the Hard Problem sense, in all its spooky, mysterious, ineffable glory. But it does not exist by virtue of a purported high-level functional organization of the conscious system. The high-level functional organization of a system simply does not have the magical power to cause something like consciousness to spring into existence, beyond any power already there in the low-level picture of the same system. As soon as we start talking about things that are "realized" or "implemented" by something else, we have entered the realm of the may-be-seen-as, and we have left the realm of the just-is, which is the realm to which consciousness belongs.

1 Don't anthropomorphize computers. They don't like it.


"We think that grass is green, that stones are hard, and that snow is cold. But physics assures us that the greenness of grass, the hardness of stones, and the coldness of snow, are not the greenness, hardness, and coldness that we know in our own experience, but something very different. The observer, when he seems to himself to be observing a stone, is really, if physics is to be believed, observing the effects of the stone upon himself."
-Bertrand Russell

Reductionism and Emergence: What Kinds Of Things Are There, Really?

Reductionism

Galileo concluded that large objects must fall at the same rate that small ones do by using an ingenious thought experiment. First he imagined two rocks, roughly the same size, dropped from some height, falling at whatever rate rocks of their size fall. Then he imagined that the experiment were repeated, this time with the rocks tied together with a piece of string. Are we really to imagine, he wondered, that Nature would regard the two rocks tied together as one large object, and make it/them fall at a different rate just because they were now connected with a string? He reasoned that Nature would not. When does Nature regard things as actual, individual things, and when does Nature regard them as heaps, or aggregates of other things, like Galileo's rocks tied together? And perhaps more importantly, in what sorts of situations would the answer make any difference?

For an honest hard-nosed reductionist, the universe is really a sea of quantum soup. There are no true inherent things, just one continuous mesh of cause and effect. Minds, and only minds, draw boxes and lines upon reality based on perceived regularities, chunking reality into mid-level murmurations, like "rocks" and "cars". This chunking is an abstraction we impose, and is not there in the quarks, electrons, and photons. We could, in principle, see a certain number of molecules as a "rock", or we could just see it as a bunch of molecules with no loss of accuracy or predictive power. It is something of a joke among philosophers that they sometimes argue over whether something is a table or just a bunch of molecules arranged in a tablewise manner. It's not that tables and chairs don't exist, just that the universe does not consult these "high level" entities or any properties of them, as such, as it decides what to do moment to moment. All the universe needs to function properly is the very lowest level entities and laws and everything else pretty much takes care of itself.

"Reductionism" is a loaded term, and one that tends to get thrown around pejoratively. Daniel Dennett said that at this point, "reductionist" means nothing more than "I don't like that idea." When I use the term, I will attempt not to make a straw man of it. Reductionism, very roughly, is the divide-and-conquer approach to understanding reality. It is the position that anything just is the sum of its parts. Sometimes philosophers like to say a thing is grounded in its parts, or supervenes on its parts.

Reductionism combined with deterministic physicalism results in the claim that if you knew the exact initial conditions of the universe, and knew the true laws of physics, you could, in principle, predict everything that would ever happen during the lifetime of the universe, including the fall of the Roman empire and the Gettysburg Address. There are no big, large-scale things that can not be understood fully in terms of their simpler, small-scale underlying constituents and their mechanisms.

Now, sometimes reductionism means methodological reductionism, which is simply the practice of analyzing things in terms of their components. Methodological reductionism, as an approach to scientific inquiry, has been spectacularly successful over many centuries. When I speak of reductionism, however, I mean it in a stronger, ontological sense: the presupposition that everything in the universe can be exhaustively characterized in terms of a small number of types of tiny things, interacting via causal dynamics that are described by a small number of mathematical laws.

There are a great many isms in philosophy of mind, many of them downright deceptive, in that their literal meaning does not suggest a doctrine held by most people to whom the label is applied (I'm looking at you, "dualism"). So in theory, whether you are a physicalist, a dualist, a monist, a dual aspect theorist, a qualophile, an eliminativist, an illusionist, a materialist, a mod, or a rocker, I think this question cleaves the community nicely: do you believe that everything in the universe can be exhaustively characterized in terms of a small number of types of tiny things, all interacting via causal dynamics, which are described by a small number of mathematical laws? You can answer "no" and still make a case that you are a monist, and in fact, a reductive physicalist, but only by squeaking in on a technicality. Most good reductive physicalists, as the term is generally understood, answer with an emphatic "yes".

My point is that this philosophical reductionism does not necessarily commit one to a particular scientific view. You can hold onto reductionism and admit that we still don't have all the physical laws nailed down yet (strings? Unifying general relativity and quantum mechanics?). If we suddenly discovered that Harry Potter magic is real, we could still be good reductionists: how does it work? Take it apart, see what particles, fields, and/or forces make it up, and derive a small number of mathematical rules that describe their behavior, and voilà! So the difference is not which final theory you settle on, or exactly which primitives you admit into your lowest level, as long as they are few in number, are well behaved, and don't have any "essences" lurking beneath that behavior. Indeed, it is really more of a spectrum of views than a sharp division. How many primitives can there be, how big can they get, and how unlawlike and complex can their behavior be before you just aren't a reductionist anymore?

Many prevailing theories of mind incorporate some form of strong ontological reductionism, even ones that make a point of claiming to reject strict reductionism. I think, however, we have reason to doubt that reductionism in this sense gives us a true or complete picture of the world. The problem with reductionism is that it works too well. If everything can be explained or characterized in terms of the lowest level building blocks, there is no reason to consider higher level things as having any objective existence at all, or at least, any explanatorily useful existence. As the saying goes, once the reductionist has broken down the universe, he has trouble building it back up again.

How can we have things in a reductionist universe? By things, I mean just what it sounds like: cars, dogs, planets, paper clips. Is a pile of sand a thing, or is it a lot of little things? Does a car count as a thing? It depends on how you look at it, and why you want to know. What things can there be whose existence (as individual things) is not just a matter of perspective in this way? And do we have any reason to believe that there are any higher-level things in the world that just are the things they are, whether you look at them in the right way or not?

If we are reductive materialists, then speaking absolutely objectively, there is either only one (extremely high-level) thing in the entire universe (the universe itself), or there are as many (extremely low-level) things as there are subatomic particles. There is no absolute reality to any intermediate level things as such. It does not buy you anything (in terms of imparting thinghood) to declare certain systems as unitary wholes on the basis that they are isolated from their surroundings, because everything interacts with everything else all the time. This is not New Age mysticism, but simple fact. The force of gravitation between any two objects is proportional to the product of their masses and inversely proportional to the square of the distance between them. This number is never zero for any two objects, no matter how small the masses or how great the distances involved. I once read somewhere that the gravitational effect of an electron on the trajectory of a molecule of gas a universe away is such that after being amplified by about 50 collisions with other gas molecules, this tiny gravitational nudge is enough to cause the gas molecule's position to be off by the width of an entire molecule. This, in turn, determines whether or not the molecule collides with the next molecule at all or misses it entirely, a difference which quickly changes the dynamics of the entire volume of gas. Whether the correct number of collisions before this happens is really 50 or 50 million, there is some finite number of which this must be true. All particles in the universe interact causally with all others all the time (the contents of black holes possibly excepted).
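The shape of that back-of-envelope argument can be sketched numerically. The constants below are illustrative assumptions (a rough molecular radius and mean free path for air, and an arbitrarily tiny initial displacement standing in for the electron's gravitational nudge), not measurements; the point is only that exponential error growth makes the required number of collisions small and finite:

```python
# Rough chaotic-amplification estimate: each collision between gas
# molecules magnifies a small positional error by roughly the ratio of
# the mean free path to the molecular radius. We count how many
# collisions it takes an absurdly tiny perturbation to grow to the
# width of an entire molecule. All constants are illustrative.

molecular_radius = 1.9e-10   # meters, rough value for an N2 molecule
mean_free_path = 6.8e-8      # meters, rough value for air at STP
initial_error = 1e-130       # meters, assumed stand-in for the nudge

amplification = mean_free_path / molecular_radius  # growth per collision

error = initial_error
collisions = 0
while error < 2 * molecular_radius:  # until the error spans one molecule
    error *= amplification
    collisions += 1

print(collisions)  # a few dozen collisions suffice
```

Because the dependence on the initial error is logarithmic, the count grows only linearly as the perturbation shrinks by further orders of magnitude, which is why "50 or 50 million" makes no difference to the argument.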

But still, one might argue, there are some things which act more or less together as one, and are separable from their environment. Consider a toy truck. It seems thing-like if anything does. But just as the computer program does not "know" anything but the current machine code instruction, the truck is just made of atoms, each of which does not know or care anything about "truck" as opposed to all the other atoms that are "non-truck". Each atom only "knows" about the forces that act upon it, and each reacts accordingly. Each atom would still behave the way it does under the influence of any equivalent immediate environment (local to just that atom, that is) whether that environment was the result of that atom's participation in what we might be inclined to call a "truck", or some other, completely different system, as long as it presented the exact same interface to the atom. The atom does not act the way it does because of some high-level organization of the system of which it is a part. A complete knowledge of the forces acting immediately upon each atom in the truck is all that is necessary to have complete and perfect knowledge of the patch of reality that we call "the truck". It gives us complete predictive power over all of the atoms involved, at any level of detail you like. In a completely objective reductionist universe, there is nothing to know about the truck above and beyond all these atoms.

Once we have a complete causal picture of a bunch of atoms, we are certainly free to posit mid-level things as a convenience, but they better not, as such, have any causal powers. Otherwise, we wind up with what the philosophers call an overdetermined world, and William of Occam warned us about those. We use his razor to cut out any multiplied explanations. One is enough, thank you very much.

Teleology

Sometimes people speak of "downward causation", which is not merely causation in the direction of down, like rain or snow, but causation from the "high levels" to the "low levels". There is no such thing. We, as human engineers, may model a device in our minds, then design a system and implement that design in our workshop using a bunch of parts. While it is easy to think of the parts as actively participating in the whole system as such, and behaving as they do because of it, the parts are still blind, stupid, and amnesiac. They do what they do under the same influences as they would if they weren't part of a system. Almost no one, when push comes to shove, actually makes a contrary claim.

Talk of downward causation is closely related, if not identical, to teleology. Aristotle wrote about causation, and he divided it into categories, of which the only ones anyone remembers are his first, efficient causation, and his last, final causation. Efficient causation is the kind we deal with when we speak of billiard balls colliding. A causes B because A came first, and straightforwardly exerted a causal influence (pushing from behind, as it were) and brought about B. Final causation, in contrast, has to do with goals and purposes. Telos is the Greek word for such future states of affairs and the effect they have, drawing things forward, pulling from ahead. The telos of an acorn is to become an oak tree.

There is a subtlety here, however. Aristotle was talking about causation as it manifested itself in events, spaced out in time: A causes B. Here, we are talking about things, as they exist in a snapshot, more constitutive causation than sequential causation. The point, however, is the same. The steady march of scientific progress for centuries has been characterized as the banishment of teleology from serious discourse. Anyone who invokes final causes is speaking poetically or magically (the giraffe has a long neck so it can reach the leaves). A ton of molecules, some of them DNA, banging around for eons, subjected to constant Darwinian winnowing, produce the appearance of teleology, that's all. By the same token, we should feel funny if we say that a particle behaves differently because it is part of a larger system. I'm not saying that we should never speak in teleological terms. As a panpsychist, I am comfortable getting a little freaky, but we should know that we are saying something freaky when we speak of these kinds of powers.

Emergence

It is sometimes said that higher level properties and thus higher level things emerge from the lower levels in a way that is not determined or even suggested by the lower levels. The flock emerges from the motions of the individual birds, liquidity emerges from the actions of trillions of H2O molecules. The claim that there is genuine emergence in the world is often contrasted with reductionism.

There are several flavors of emergentism (and the closely related theories of so-called nonreductive physicalism), but most of them do not dig their way out from under reductionism as they claim to do. This is because emergence usually reflects nothing more than a cognitive limitation on our part. We are just not smart enough to infer the liquidity directly from a complete knowledge of the H2O molecules. There is no objective, measurable property of a bucket of water (including facts about the liquidity of the water) that one could not, in principle, infer given 1) a complete and perfect description of each atom of hydrogen and oxygen in the bucket (i.e. a complete set of initial conditions), 2) a complete and perfect set of physical laws that described the behavior of hydrogen and oxygen atoms through time as they interacted, and 3) the vast cognitive power it would require to model all those atoms and calculate their interactions.

In general, we are stupid - it is easier by far to frame our understanding of the world in high-level terms, to understand "water" as "sloshing" in certain ways, and even to come up with precise laws about the ways in which water sloshes. But this is just a shorthand way of describing what is actually the aggregate motion of trillions of molecules. This shorthand description does not tell us anything that could not, in principle at least, be derived from the trillions of molecules themselves - its advantage is that it is so much easier to deal with. As David Chalmers has pointed out, emergence is a psychological concept: it is a measure of our surprise at the consequences of low-level natural laws, not a fundamental truth of Nature in its own right. Emergence is a reflection of our faulty intuitions, perceptions, and/or cognitive powers. There are no high-level facts or properties that "emerge" only at the high level. A bumper sticker slogan sometimes invoked by emergentists is "more is different", but actually more only seems different.

It is, perhaps, a tacit recognition of the fact that emergence is somewhat weak tea when it comes to explaining the universe around us that in recent years it has been rechristened "weak emergence". This also distinguishes what I'm talking about here from so-called "strong emergence", which is a whole different kettle of fish.

I should emphasize that the ability, for example, to reduce chemistry to physics is an in principle reduction only. No discoveries in the field of physics will ever render chemistry (or biology, or sociology, etc.) obsolete as fields of legitimate inquiry. Even in a universe in which reductionism is absolutely true, the physical world is hugely complex, and its complexities explode out of control very quickly in a chaotic fashion without any hope of being modeled at the low levels by beings with our limitations. It will always be astronomically easier to deal in terms of higher-level chunks of reality than in subatomic terms for almost all purposes. Nevertheless, in principle, if you could model reality at the low level in a reductionist's universe, that would be all you would need to derive any measurable fact about that universe. Any higher level chunking of reality is a cognitive convenience. Put differently, the universe has no need of any "high level" things or concepts as it clanks along one moment to the next. All of the causal heavy lifting is done at the lowest level.

More to the point, emergence (invoked in this way) strikes me as an attempt to dodge the Hard Problem by paying lip service to the idea of qualitative (or qualitative-adjacent) essences (like the liquidity of water, and "higher-level" properties in general) but placing the problem out there in the world, when it is really in here, in our minds. There is no liquidity in the world, except that which is directly inferable from the actions of the H2O molecules (in which case the "emergence" of liquidity melts away as a concept capable of explaining anything), but there is a wetness quale in our minds.

The problem that emergence tries to solve (or at least articulate) is the Hard Problem that dare not speak its name. Proponents of most forms of emergentism and nonreductive physicalism are trying to straddle the fence. On one hand, they have some inkling that strict reductive physicalism is inadequate to account for the universe as presented to us, but on the other hand, they are unable or unwilling to make the freaky metaphysical commitments that are necessary to address these inadequacies. They don't want to have to build any magic into the ground floor of their universe, so they try to slipstream it in somewhere in the middle. The sad truth, however, is that we need real magic here, and all mid-level things in a reductionist universe are only may-be-seen-as kinds of things. The only magic you can slipstream into the mid levels, then, is may-be-seen-as magic.

In a purely reductionist universe, with no absolute thinghood above the subatomic level, no natural mid-level principles of individuation, and everything just more or less dense patches in the quantum soup, I imagine that the mind of God is like that of Neo at the end of the movie The Matrix. If you have not seen it, I urge you to do so - it is great fun and very well done, and touches on some themes that are relevant to our discussion here (to quote David Chalmers again, don't bother with the sequels).

Much of the action in the movie takes place in an extremely realistic computer simulated reality ("the matrix" of the title). While the characters are really comatose in reclining chairs with data feeds plugged into the bases of their skulls sometime in the distant future, they perceive themselves to be walking, driving, fighting, etc. in late 20th century America. At the end of the movie, the hero, Neo, has an awakening while in the matrix as he confronts the sinister Agents who want to kill him (virtually dying while in the matrix results in actual physical death). The final confrontation had a great special effect in that it captured the essence of an inherently non-visual idea and did so simply and clearly. Neo sees the outlines of the floor, the walls, the ceiling, and the three Agents, but all of their surfaces from his point of view are a wash of iridescent green computer characters, the same ones that were on the screens in the matrix's monitoring center back in physical reality. Neo sees through the matrix, stops accepting it on its terms, and sees straight down to the level of the data of which it is made. And of course, this essentially makes him God within the matrix.

In a reductionist universe, God (if there were God in a reductionist universe) sees everything this way. His mind tracks every last neutrino with perfect accuracy, and He does not have to use our shortcuts of chunking patches of reality into "whale", "bridge", "apple". It is only a consequence of our own perceptual and cognitive limitations that we find it necessary to chunk the universe into "flocks" or even individual "birds". In real life, there are no higher levels. The universe, to a reductionist, models or computes itself at the lowest of all possible levels. Once all the hydrogen and oxygen atoms follow their basic laws, there is neither any need nor room for any further laws about "liquidity", "transparency", or any other high-level properties of water in order for the universe to "know" how water should behave instant to instant. The universe crunches along, doing what it must, not because of any patterns or any way in which such patterns are organized, or because of their purported complexity but because the particular particles with their particular positions and momenta must do what they must do. "Patterns" are a way of categorizing reality for us, a way of setting up a taxonomy of classifications of what are ultimately physical systems. You can't possibly get any magical new properties to "emerge" out of a collection of stuff because it is "complex", above and beyond what you would have gotten out of that same collection of stuff anyway. Anything that is really, really there at the high level must have been really, really there at the low level.

If, that is, we are committed reductionists.

How Naive Is Our Naive Realism About Our Mid-Level Chunks?

There is nothing wrong (in the sense of being incorrect) about our mid-level chunking of reality so we can avoid being eaten by tigers, forage for grubs, etc. any more than there is anything wrong with seeing an apple as red.

Philosophical realism is the claim that the world out there is pretty much as it seems to be. In particular, realism about X is the claim that if X seems a certain way, it's because X is actually that way. If that sounds vague, there is a reason for it - realism can be taken in a variety of different ways. Realism, often modified with "naive", is a position of taking things at face value, and not overthinking them. Naive realism about experience means that if I see something that looks like a red apple, that conscious event corresponds to an actual red apple in the real world. The apple appears red to me because it really is red, period. The apple reflects photons of red light, and they get absorbed by my retinas, and my brain faithfully registers the information that there is a red apple in front of me. Naive realism takes the mind to be merely reflecting the reality out there.

Naive realism in this example is not true, however, because of course there are no red photons. That is, while things seem red to us, all that really strikes the retinas in the backs of our eyes are photons of certain wavelengths. These wavelengths are just numbers representing a particular periodicity that the photons display. There is nothing in those numbers that suggests redness as we experience it. The association between wavelengths of light in a certain range and redness is one our minds make up out of whole cloth. Color is just the mind's way of representing different wavelengths of light, but we could have evolved to use some completely different representation with no loss of information about the real world.

Consider the inverted spectrum argument. If someone were born with their optic nerves cross-wired in such a way that when they were shown red it looked green to them and vice versa, so that in effect their perceived color wheel were rotated by 180 degrees, they might never know it. They would receive the same information about the world, and they would learn the color names as a small child, and they would agree that a sunset is a deep orange, but it would not really look orange to them the way it does to you. It would look teal, but they would call it "orange".

The inverted spectrum argument is usually made to convince people of the distinction between cognitive information and ineffable qualia: my inverted spectrum twin has the same information about the world that I do, but entirely different qualia. I am using the scenario to make a different point, however. It should be clear that, given me and my inverted spectrum twin, there is no fact of the matter of which of us is seeing the "right" view of the world. There are photons, there are perceived hues in the mind, and there is a correspondence between the two. The question of what is the "correct" correspondence between the two just doesn't make sense, since in both cases the actual mapping is arbitrary. Color as perceived - that is, full-blown qualitative, experiential color - serves as a very good carrier of information that comes into our bodies by way of photons striking the retina, but one could speculate on other ways. Perhaps some alien species could consciously discriminate between all the wavelengths of color that we do, but perceive them through some sort of tactile radar-sense, or some other sense modality we can not even imagine. Similarly, while the sensation of redness conveys certain information to us in our visual field, that same sensation could conceivably convey different information. Perhaps our sense of smell could be wired into some perceptual field of color, for example.

If there are no red photons, and color exists only in our minds, what about sounds? By a similar argument, there is no middle C "out there" as it sounds to us in our mind's ear. There are just periodic pulses of fluid pressure. Hot and cold are just the aggregate motion of huge numbers of molecules and similarly could conceivably be represented in our minds with completely different qualia. The same could be said of pressure against skin, smell, and taste. Our qualia are only in our minds, and they are created there.

So at this low level of the qualitative sensory aspects of our world, naive realism is false. Assuming that we can claim to know something about the real world, that the world as we experience it internally is in some way like the world out there, at what level of abstraction does realism start to become true?

I would like to suggest that realism is false at a higher level of abstraction than we generally assume. That is, more of the things we think we perceive about the world are created in our minds than we acknowledge. The real world (almost certainly) exists, and its reality constrains what we perceive, but does not determine it. Most of the structures, patterns, and dynamics of the world are "really" out there and exhibit a lot of the regularities we think they do in the same sense that photons of certain wavelengths are really out there. But as with the redness of those photons, the ways in which we experience them are not really out there. Things are abstractions. We create all things; we infer unity and mid-level individuation in the world.

Seen in this light, consciousness has a much bigger job than just painting the apple red. It must create reality much more broadly, including the apple itself. Just as there are no red photons, there are no rocks, cars, dogs, or numbers. Nature presents us with a wash of particles, a continuous flux of quantum stuff, and we overlay this flux with stories about cars and rocks. Moreover, this story, and the way we create it, is not "merely" cognitive, not just one of Chalmers's "easy problems". There is as much a what it is like to think of an apple, as such, as there is to taste it.


"…however complex the object may be the thought of it is one undivided state of consciousness."
-William James

The All-At-Onceness of Conscious Experience

As we encounter things in the world around us, when do we judge something to be just a heap or aggregate of smaller things, like a pile of sand, and when do we judge it to be a true, unified, single thing? It depends, almost always, on how you look at it. When we look at the world in strict reductionist terms, nothing above the sub-atomic level really counts as a holistic thing. Are there any things above the micro level that really are inherent, single things in a way that does not depend on how you look at them? Do we have any reason to believe that there are, in contrast to the reductionist view, inherently unitary mid-level things in the universe?

I have an art nouveau poster in which a woman is smoking, and there is a stylized curl of smoke rising from her cigarette. When I look at that languid asymmetrical curve, I see the continuous curve in its entirety, all at once. I do not just have some kind of cognitive access to the fact of the curve. The parameters of the curve are not just available to me upon making certain kinds of inquiries. I do not just have a pointer or reference to a lot of data beyond my view that yields results pertaining to the curve when evaluated. The details of my perception are not just at my fingertips, but bang! right there, live, all at once. I see the whole curve now. Of an intelligent computer with its video monitor aimed at the curve (LDA, STA, JMP…), all we can say is that at some level it may be thought of (by us) as seeing the curve. That is, given an abstract understanding of its algorithm and data structures, one may interpret the functioning of the machine as "seeing" the curve. This, however, is anthropomorphizing on our part, albeit on the basis of the computer's deliberately programmed design.

There is, in contrast, nothing "may be thought of" about my seeing the curve. It is not a matter of interpretation. It is an absolute fact of Nature that I really do see that curve all at once, before me. Seen at the low level, as an ant-like CPU crawling over data gravel, there is no inherent sense in which "it all comes together" for a computer, whereas there is an inherent sense in which it all comes together for me.

This is not just another "I see red, the computer will never see red" argument (although it is perhaps related). The "seeing red" arguments focus on qualitatively rich but nevertheless cognitively simple aspects of experience. I am talking instead about our ability to have cognitively complicated scenes before us in our mind's eye, to see the complex as one thing, all at once in its entirety: e pluribus unum.

I would like to distinguish this unity of consciousness from the so-called binding problem, however. The binding problem refers to the fact that, for example, the visual processing parts of the brain and the auditory processing parts are quite different, and in fact take different amounts of time to do their jobs. In spite of these facts, we can have a single experience that incorporates elements from several senses at the same time, and they are synchronized. The binding problem is fascinating in its own right, but what I am talking about here is, I think, at least as fundamental. I am concerned not so much with the way in which different sense modalities (vision, hearing, smell, etc.) can be bound together in a single percept, but how anything at all, even within a single sense modality, can have the kind of unity it does. This qualitative gestalt is every bit as strange and inexplicable as the redness of red.

It could be argued that my percept of a tree is not an indivisible whole: you can break it into parts. But that only means that I have a tree percept, then, often by effort of willful analysis, I have a subsequent follow-on percept of tree parts, albeit possibly with tendrils of reference reaching back to the original unitary tree percept. Just because a cathedral is made of stones, it does not follow that my conception of a cathedral is made of my conception of stones. Even if my conception of the cathedral incorporates the knowledge of the stones, there is still a single experienced percept of the cathedral that subsumes this fact.

My percepts are immediately, manifestly unitary whole things. Regardless of the cognitive or physiological mechanism which supports them, they are counter-examples to the doctrine of ontological reductionism. I know I perceive my percepts, and that those percepts really are whole objects just as certainly as I know I see red. Things, in my mind, are qualia, as are all abstractions - manifestly before me, all at once. A thing is an abstraction, and all abstractions are things. In contrast, a car just is a heap of its atomic parts, doing what they must whether you think of them collectively as a car or not.

Consciousness gives us not only examples that there are such things as qualitative essences in the universe, but also that there are such things as things. This argument may strike some people as a case of comparing apples and oranges. "Just because you perceive something as an inherent whole doesn't mean it actually is an inherent whole", one might be tempted to argue. "You are just interpreting it that way." But it is the percept itself, the interpretation, not the thing out there in the world that is being perceived, that I am talking about.

We must take first person experience seriously, both in the seeing red case and in the case of the unity of our percepts. Both (and perhaps more besides) must be explainable in any final theory of nature we concoct. Such a theory must include principles of individuation that allow for the mid-level things that are my percepts. Gregg Rosenberg discusses this quite a bit (although from a somewhat different perspective). To use his term, we need a theory of natural individuals.

There are inherent, absolute things above the level of the quark, but below the level of the whole universe itself. These mid-level things may only exist in our minds, but that is enough to say that they do exist. There are inherent things in my conscious mind that spread across or incorporate any lower level things that might be taken as their elements. Like my seeing red, these things in my mind can not be illusory. If it seems that there are mid-level unitary things among my percepts, then those seemings themselves must be mid-level unitary things. For my unitary percepts to manifest themselves to me as they do, they can not just consist of smaller parts integrated only through causal dynamics, bits bonking blindly into other bits, with some sort of functional description emerging from the bonking. Whatever the crumbs are out of which the universe makes everything else, these things count among them, rather than things built out of the crumbs.

I want to emphasize that when I say that my conscious perceptions are "mid-level" things, I am talking about the scale (between quark and universe), and definitely not implying that these things occupy some middle level of a tree of organization. In that sense, the whole point is that these are low-level things. They are big and complex, yet they must count as primitive objects. They can't be exhaustively characterized in terms of any lower level of description or analysis. There is certainly a huge number of possible conscious percepts - quite possibly infinite. All this being true, we live in a universe in which there is a huge (possibly infinite) number of fundamental components, these components have qualitative essences, and most of them are big and rich, not tiny and simple. Any formulation of reductionism that could accommodate these facts would hardly be worthy of the name.

It is worth noting, however, that this view is nevertheless reductionist in a sense. Everything in the universe may well be reducible in principle to its component parts - it is just that there is no small number of such fundamental components in the universe, and a lot of those fundamental components are pretty substantial things in their own right. The important respect in which it still counts as a form of reductionism is that under this view, you do not get anything out that isn't there in the lowest levels. Specifically, this view does not posit any magic "emerging" from a system on the basis of its "complexity" or functional organization. Complexity and functional organization, defined in causal terms, smeared out across time, and dependent on lots of hypotheticals, doesn't confer the kind of inherent, just-is, really-there kind of unity we need.

Combination Can't Be Functional

Neuron Replacement Therapy

There is a popular thought experiment that goes like this. Suppose that neurologists characterized each neuron's inputs and outputs exactly, and were able to engineer a functional equivalent. That is, an artificial device whose inner workings may or may not be similar to those of a natural neuron, but whose behavior, seen in terms of its responses to inputs, was identical to that of a neuron. Now suppose that the neurons that comprise your brain were replaced with these artificial neurons, one by one. Once your entire brain was cut over to the artificial neurons, you should have a brain system whose functioning at the neuronal level is identical to that of the brain you were born with, but whose workings are entirely artificial, and as such, able to be characterized with an algorithm of some sort.
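The functionalist premise of the thought experiment can be made concrete with a toy sketch. Nothing here is meant as a model of real neurons; the class names and the threshold rule are my own illustrative inventions. The point is only what "functional equivalent" means: different inner workings, identical input/output behavior.

```python
# Two "neurons" with different internals but identical responses to all
# inputs. Under the functionalist premise, swapping one for the other
# inside a larger system changes nothing the system can detect.

class NaturalNeuron:
    """Fires (returns 1) when the summed input exceeds its threshold."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def respond(self, inputs):
        return 1 if sum(inputs) > self.threshold else 0


class ArtificialNeuron:
    """Different mechanism: rectifies the excess over the threshold,
    then reports whether any excess remains. Same external behavior."""
    def __init__(self, threshold=1.0):
        self.threshold = threshold

    def respond(self, inputs):
        excess = max(0.0, sum(inputs) - self.threshold)
        return int(excess > 0.0)


# From the outside, the two are indistinguishable on these stimuli:
stimuli = [(0.2, 0.3), (0.9, 0.6), (1.5,), (0.0,)]
natural, artificial = NaturalNeuron(), ArtificialNeuron()
identical = all(natural.respond(s) == artificial.respond(s)
                for s in stimuli)
print(identical)  # True: same input/output behavior, different innards
```

The NRT argument trades on exactly this interchangeability: if only the interface matters, replacement after replacement should change nothing. My objection, developed below, is to that "only the interface matters" premise itself.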

This thought experiment (called Neuron Replacement Therapy, or NRT for short) is intended to put anti-physicalists and anti-functionalists like me in an uncomfortable position. We either have to say that the resulting artificial brain is not conscious (and if not, at what point in the gradual neuron replacement does consciousness disappear, and when it does, does it wink out all at once or fade out gradually), or we must admit that the artificial brain maintains its consciousness, and therefore full-blown consciousness is realizable by a machine.

Thoughts Are Evidence Of Mid-Level Holism

I agree that there is nothing magic about organic or biological systems. There is no reason that consciousness must be manifested in a biological system. Indeed, as a panpsychist, I think that consciousness in some form is likely manifested all the time in all kinds of matter. The problem with the thought experiment is that it begs the question - it presumes exactly the functionalist reductionism that is, in my opinion, at the heart of the matter. It assumes that what makes a brain a mind does so purely by virtue of the complex interaction of lots of blind little autonomous parts, each not knowing or caring about the others, as long as each has the right interface presented to it. No one knows the details of the relationship between neurons as neuroscientists characterize them and consciousness, but thoughts come whole, nose to tail. A given percept, thought, or moment of consciousness is what it is in its entirety, all at once, or not at all. It has no parts, so you can not swap some of its parts out in favor of "functionally equivalent" parts.

Even if a thought or percept is an example of some kind of fundamental holism occurring in nature, couldn't it still be generated in some way by the orderly, lawful interactions of smaller parts? Possibly, in some sense, but it could not turn out to simply be the orderly, lawful interactions of smaller parts. The interactions of parts may functionally emulate a percept, and they may support it somehow, but they alone can not be it.

Assuming that there will, ultimately, turn out to be necessary relations between the physical world as we understand it and consciousness, the physical correlates of consciousness would have to display or allow for the kind of holism that our thoughts manifest. This has implications beyond the physicalists' arguments about NRT. Contrary to what Phil Goff and Luke Roelofs say, panpsychism's combination problem can not be solved by functional organization alone. Even if the quarks are seeing red, feeling pain, or craving transcendence like crazy, any aggregation of them can not be a basis for larger-scale consciousness if that aggregation is achieved through billiard ball bonking. The "integration" or "high levels" you can get out of causal poking, over time, characterized in terms of unrealized hypotheticals, can't give you the intrinsic all-at-onceness we experience, no matter how hard the quarks are rooting for us.

I want to be clear about the bullet I am biting. I think epiphenomenalism is wrong - qualitative consciousness has observable, causal powers in the physical world. Moreover, it has an inherent, indivisible unity. We either have to be orthodox physicalists, or we must embrace some freaky holism at work in the world: really-there holism, not just may-be-seen-as holism, holism that has causal implications that somehow have escaped the notice of the people in the white lab coats. That's a hell of a needle to thread. I am placing my bets on there being something in the physical world that manifests this, something causal that exists as a whole at a much larger scale than an electron. I am insisting on something that violates the apparent causal closure of physics, or at least bends it quite a bit. Where in the physical world might we find this kind of inherent wholeness, as opposed to the merely may-be-seen-as wholeness that functional analysis of systems of parts gives us?

Quantum Mechanics

It has been said that the reason that so many people relate consciousness to quantum mechanics is a sort of conservation of mysteries: consciousness is mysterious, quantum mechanics is mysterious, so maybe they are the same mystery. While the connection between them is admittedly circumstantial, they are mysterious in similar enough ways that we may speculate that at the very least quantum mechanics is a promising place to look for consciousness in the natural world. (See Seager (1995) for a similar line of speculation).

First, we seek a place for consciousness at the very lowest levels of nature, and quantum mechanics is the lowest rung on the ladder, as low as our understanding of the natural world goes. It is the layer of inquiry at which we know only the behavior of the things we study; we can not, even in principle, know the intrinsic nature of whatever is doing the behaving. No one knows what an electron really is, beyond our ability to characterize its extrinsic behavior as described by the relevant quantum laws.

Second, and more to the present point, at least as striking as the qualitative nature of consciousness (what is it like to see red?) is the all-at-onceness of our thoughts and perceptions, their intrinsic unity. Quantum mechanics gives us some counter-examples to the orthodox reductive physicalist way of seeing everything big and complex as (mere) aggregates of tiny simple things. The very strange world of quantum mechanics is populated by bunches of things that come together to form one larger thing that can really no longer be thought of as a heap of separate components. In a quantum entangled system consisting of two particles, for example, we have multiple parts coming together to form a thing that is inherently, absolutely, one single unitary thing. Bose-Einstein condensates are another such example. In a Bose-Einstein condensate, the component atoms lose all individual identity, and the entire condensate is one single thing, with one single Schrödinger wave function. It is simply incorrect to think of a Bose-Einstein condensate as being composed of individual atoms anymore.

As with our percepts, a quantum entangled system is one thing, not an aggregate that may be seen as a thing when looked at or analyzed a certain way. The ontological reductionism inherent in a classical or Newtonian view of the natural world means that consciousness can not find a home in a world that is exhaustively described with such a view. Because quantum mechanics sidesteps this reductionism by providing a real basis for holism in the universe, by process of elimination, we ought to strongly suspect that consciousness and quantum phenomena are somehow related. See (Silberstein 2001) for a discussion along these lines.

Third, there is the problem of the alleged causal closure of the physical world, and the way quantum mechanics, and the holism it implies, allows us to wiggle out of it. The argument is often made that the laws of physics are airtight, that (assuming they are true) they account completely for everything that happens in the world, leaving no room for consciousness to have any measurable effect on anything. Unless, that is, you define consciousness strictly in terms of physical dynamics in the first place, which is to say that you subscribe to physicalism (and thus, in my opinion, define away the interesting questions and properties of consciousness).

It certainly seems that the laws of quantum mechanics are true, and dead-on accurate. The loophole in the causal closure argument may be that while accurate, the laws of quantum mechanics only yield probabilities from an empirical point of view. They specify a distribution curve, not precise predictions. They predict collective behavior with 100% accuracy, but are agnostic about individual behavior.

If you run a quantum experiment 10,000 times, you are assured that your outcomes will fit this curve (to within ordinary statistical fluctuation), and for any one trial, the probability of one outcome over another is determined by the curve, but quantum mechanics is famously unable to predict the specific outcome of a particular single trial. It is an inherently indeterministic theory. Moreover, it is generally accepted that this indeterminacy is not a flaw in the theory or evidence of its incompleteness, but a fundamental feature of physical reality itself. No matter how well you know an electron's initial conditions, once it is in flight, you can not predict its position before you measure it. This is not because of any practical limitation on our ability to characterize the initial conditions of the electron, or any inaccuracy in the theory, but because the electron can not properly be said to have any definite position before you measure it. The position of the electron before you measure it is literally unknowable. It has only a likelihood of being in one place, and a different likelihood of being in another place. So the best theory we have about how the physical world behaves, and most interpretations of that theory, are, when it comes right down to it, indeterministic about the precise behavior of the physical world at a low level.
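This collective-versus-individual distinction can be sketched in a few lines of code. This is a toy model, not physics: the outcome labels and the 0.36 probability are invented for illustration, and a pseudorandom number generator stands in for nature's indeterminism.

```python
import random

# Toy model (outcome labels and the 0.36 probability are invented):
# the theory hands us only a distribution over outcomes.
P_UP = 0.36
N_TRIALS = 10_000

random.seed(42)
outcomes = ["up" if random.random() < P_UP else "down"
            for _ in range(N_TRIALS)]

freq_up = outcomes.count("up") / N_TRIALS

# Collectively, the trials hug the predicted distribution...
print(f"predicted P(up) = {P_UP}, observed frequency = {freq_up:.3f}")
# ...but nothing in the distribution alone tells you, in advance,
# how any one particular trial will come out.
print(f"outcome of trial #1: {outcomes[0]}")
```

The observed frequency lands within a fraction of a percent of the predicted probability, while each individual entry in the list remains, from the distribution's point of view, a coin toss.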

The only possible exception to this is the possibility that there are some kind of as-yet undiscovered "hidden variables" at work, and that once discovered, they will allow us to predict the electron's position once more with Newtonian accuracy. Albert "God does not play dice" Einstein spent a great deal of his later life looking in vain for a hidden variable theory. Very few people seriously entertain hidden variable theories today: Bell's theorem and the experiments it inspired have ruled out local hidden variables, and the non-local varieties that remain are generally regarded as a philosophically (rather than scientifically) motivated attempt to restore determinism to the physical world. Even in the classical (i.e. non-quantum) world, it is becoming more apparent all the time that chaos and non-linear dynamics are the norm. Tiny differences at a low level get amplified to huge differences at a high level (as in the oft-cited butterfly effect). There is no reason, then, to think we would need particularly large-scale quantum phenomena in order for quantum indeterminacy to have a substantial effect on the macroscopic world around us.

"Random" Is A Big Tent

It strikes me that there are many different actual outcomes of a given set of trials of an experiment that would still perfectly fit a given distribution curve, and thus not violate any laws that were given strictly in terms of conformance to such a distribution curve. The statistical distribution of letters I type on a keyboard might be the same whether I am typing a sonnet, a recipe, or meaningless gibberish. A complex coherent pattern may have the same statistical distribution as random noise - indeed, any maximally dense (i.e. maximally compressed) information is statistically equivalent to random noise by definition. The door is open, at any rate, for patterns to result from the behavior of quantum systems whose coherence is not predicted by quantum theory, but which nevertheless do not violate the predictions that quantum theory does make.
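The claim about compressed information can be checked directly. In this sketch (the sample text and the zlib compression level are arbitrary choices of mine), ordinary English prose shows a lopsided byte distribution, while its compressed form is statistically much closer to uniform random noise:

```python
import zlib
from collections import Counter
from math import log2

def shannon_entropy(data: bytes) -> float:
    """Bits per byte; 8.0 would be indistinguishable from uniform noise."""
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in Counter(data).values())

# Ordinary English prose has a very lopsided byte distribution...
text = (b"It strikes me that there are many different actual outcomes "
        b"of a given set of trials of an experiment. ") * 50

# ...but squeezing out its redundancy leaves bytes that look
# statistically much more like random noise.
compressed = zlib.compress(text, 9)

print(f"plain text:  {shannon_entropy(text):.2f} bits/byte")
print(f"compressed:  {shannon_entropy(compressed):.2f} bits/byte")
```

The plain text comes in at roughly four bits per byte; the compressed bytes come in considerably higher, approaching the statistics of noise even though they encode a perfectly coherent pattern.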

So - quantum mechanics allows for the existence of high-level entities that are causally efficacious, and whose behavior, while constrained by other entities, has an element that can only be called "random" by our best third-person physical theories.

Maybe consciousness occurs in bursts, in the collapse of quantum superpositions, as Hameroff and Penrose claim. Maybe some kind of large-scale quantum superposition is sustained in the warm, wet environment of our brains by using the tubulin cytoskeleton of our neurons. Maybe not. Something like that, something crazy sounding, however, will turn out to be the case. I speculate that at some point in the future it will be discovered that the brain's activity depends crucially upon quantum phenomena, which are amplified to the level of neurons firing. Of course, the operative word here is speculate. It is worth noting that it is only under certain special types of circumstances that quantum systems can evolve in a state of entanglement or superposition without decohering or collapsing back to a classical state (leaving aside the philosophical thicket of the measurement problem). Under ordinary circumstances, we do not see quantum systems of any great scale (I avoid using the word "complexity" because it implies precisely the wrong thing, namely that a quantum system is made of parts, and that there may be fewer or more of those parts). So like Hameroff, I suspect that we will eventually find structures in the brain that could support some reasonably large-scale quantum superposition, which would imply isolation from the surrounding environment.

But Back To NRT

Physical systems in states of quantum entanglement display the necessary holism. Further, I have speculated that as quantum mechanics contains the only currently known gap in the causal closure of the physical, the indeterminacies of quantum mechanics are, in fact, the fence around these natural individuals that modern science has built, with a sign that says, "Something funny is going on in here, and we can never know what". For the moment, however, let us set aside my suspicions about quantum mechanics. Perhaps my speculations about quantum mechanics are completely wrong. Perhaps consciousness is some kind of hitherto undiscovered field or force that is modulated or generated by neurons. Maybe Gregg Rosenberg is right, and consciousness is built into the mesh of causation itself. Moreover, no matter how this question is answered, the quantum superposition or force or field that is consciousness could be something that spans lots of neurons, as Hameroff and Penrose believe, or it could be something that happens inside a single neuron, as suggested by Jonathan Edwards.

Whatever thoughts, percepts, or moments of consciousness turn out to be, neurons have evolved to generate or exploit them in some way. But it is they (these fields, forces, superpositions, collapses thereof, or whatever) that do the heavy lifting, at least in the "what is it like to see red" sense. They quite probably do a lot of the heavy lifting in a lot of other ways as well, including some we might call straightforwardly cognitive (i.e. Chalmers's "easy problems"). If the artificial neurons in the NRT thought experiment can also exploit or generate these things, then great - consciousness is preserved in the artificial brain. If not, not, and the NRT thought experiment fails. If the field or force or superposition that is consciousness spans multiple neurons, it will be something that can not be carved up and characterized in terms of quantifiable inputs and outputs between neurons, and algorithmic functions that map between the two, and the NRT hypothesis is untenable.

If, on the other hand, the stuff of consciousness (force, field, whatever) happens inside individual neurons, it could be that the artificial neurons will not be able to emulate natural neurons with an explicitly specified algorithm. In this case, the non-algorithmic stuff in the neuron guides the neuron's behavior in non-algorithmic ways. Otherwise, if the stuff in the neuron is emulatable with an algorithm (the epiphenomenal case) the end result of NRT will be a zombie. All of its neuronal behaviors and motor outputs will be identical to those of a conscious mind, but it will not, in fact, be conscious, at least in the "what is it like to see red" sense.

In the latter case, (which I consider unlikely) how will the transition from conscious mind to zombie occur? I don't know. We can speculate on many ways consciousness could fade out as the biological neurons were replaced by artificial ones. Consciousness probably would not wink out all at once. I do not believe that we each consist of one continuous consciousness, always operating at the same level. Just as we operate on autopilot to some degree or another at various times throughout any given day, perhaps as our real neurons were cut over to the artificial ones, the autopilot moments would become more and more frequent and complete in their internal blankness. As the (fewer and fewer) moments of true consciousness happened, they might come complete with the (erroneous) impression of complete, continuous consciousness. This might be simply a result of the moments of consciousness having some sort of cognitive access to zombie memories and extrapolating consciousness onto them. Whatever the scientific basis for consciousness is, there may be lots of consciousnesses active at any one time in our brains. Maybe as our neurons are cut over, there would be fewer and fewer consciousnesses happening within us. The question of how the transition would actually happen is wide open - we just can't know right now. But there is nothing we do know that makes the transition an impossibility.

Even if you succeed in creating a conscious machine, you have no less a mystery on your hands than you do with conscious people, and you are no closer to characterizing consciousness in algorithmic or functional terms. And if you do create something that behaves as if it were conscious and whose workings are entirely specified by an algorithm, you will have created a zombie.


…and I spread it out broader and clearer, and at last it gets almost finished in my head, even when it is a long piece, so that I can see the whole of it at a single glance in my mind, as if it were a beautiful painting of a handsome human being; in which way I do not hear it in my imagination at all as a succession - the way it must come later - but all at once, as it were.
-Mozart, on a piece of music, via William James

Time Consciousness and the Specious Present

We all hear music the way Mozart describes, although usually for much shorter riffs than entire symphonies. I have argued that the all-at-onceness of our thoughts and perceptions is at least as inexplicable as what it is like to see red. The aural/temporal all-at-onceness makes the point at least as vividly as the visual/spatial all-at-onceness of the curl of smoke in an art nouveau poster.

My Notion of Motion

The temporal aspects of consciousness can be illustrated visually too, of course. Imagine seeing dust motes swirl around in the air in the bright sunlight coming through a window, or someone riding a bicycle past you on a street. When you see these things, you see them in motion. That is, your consciousness is of objects in motion, just as directly and absolutely as your consciousness of a red tomato really is of redness. There may be all sorts of neurobiological and cognitive tricks going on behind the scenes, so to speak, but my actual subjectively experienced moment of consciousness is not instantaneous - it has temporality built in. It is, as Horgan and Tienson (2002) say, temporally thick. The motion of something we see moving is not something we infer or conclude or extrapolate, but something we see, right there in the perception, just as much as shape and color. Our conception of time is not, like the weird laws of quantum mechanics, some counter-intuitive scientific theory that our mathematics drove us to accept, but that we will never quite feel in our guts. We do feel time in our guts. A given moment of consciousness does not exist as a snapshot taken at a particular instant, or even a series of such snapshots from which we intellectually infer continuous change. As William James (1952) said,

…between the mind's own changes being successive, and knowing their own succession, lies as broad a chasm as between the object and subject of any case of cognition in the world. A succession of feelings, in and of itself, is not a feeling of succession. And since, to our successive feelings, a feeling of their own succession is added, that must be treated as an additional fact requiring its own special elucidation… (emphasis original)

Or as D. C. Williams put it (1951), "…we are immediately and poignantly involved in the jerk and whoosh of process, the felt flow of one moment into the next."

For perception of motion to exist at all, it must be what it is, in its entirety, over a non-zero period of time. Whatever a moment of consciousness is, if you cut a piece off temporally, it just won't be the same moment of consciousness. You can not be conscious of a piece of music, even a short advertising jingle, without having it temporally in your mind's ear as one undivided thing. As Dainton (2000, p. 127) asks, is a strictly durationless auditory experience even possible? Even of something like a single click? For a sound of any kind to be what it is to you, there always has to be an attack and decay of some duration.

There is a spooky way in which consciousness spans time, and is not what it is at a given instant, the way a hammer is, but can only be what it is smeared out over time. That is, one can imagine a hammer winking into existence for an infinitesimal period of time, then winking out again, and for that instant, it would have been a complete hammer. But my percept of Marilyn Monroe breathily singing "Happy birthday, Mister President" simply takes time. It is a single percept, but it would not be what it is if it were just an instantaneous slice of that experience.

James commented on this also and used (but did not coin) the term "the specious present" for the short stretch of duration we experience as the present - "specious" because the felt present is not, strictly speaking, an instantaneous point. As he said (James, 1952):

In short, the practically cognized present is no knife-edge, but a saddle-back, with a certain breadth of its own on which we sit perched, and from which we look in two directions into time. The unit of composition of our perception of time is a duration, with a bow and a stern, as it were - a rearward - and a forward-looking end. It is only as parts of this duration-block that the relation of succession of one end to the other is perceived. We do not first feel one end and then feel the other after it, and from the perception of the succession infer an interval of time as a whole, with its two ends embedded in it. The experience is from the outset a synthetic datum, not a simple one; and to sensible perception its elements are inseparable, although attention looking back may easily decompose the experience, and distinguish its beginning from its end.

(emphasis original)

Given this immediate, undeniable temporality built into our perceptions, the big question is to what extent does this have metaphysical implications? Put another way, can we account for the subjective experience, the phenomenology of the situation, without making extravagant claims about the nature of the universe?

Extensionalism

On one hand, it could be the case that the infinitesimal point that we usually think of as being "now" is an abstraction foisted on us as a by-product of calculus, and is not real. There may well be no precise point of "present" that divides "past" from "future", and William James's saddleback present is not just a phenomenological or psychological fact, but a literal objective truth of the real world. In this case, our consciousness simply directly perceives a temporally smeared-out reality. Let's call this position the temporal realist position: time really is smeared out just the way it seems to us, and we simply perceive it directly that way. The part of this position that pertains to experience only is sometimes called the extensionalist position, since it posits that experience itself is extended through time.

Many philosophers of time and consciousness would not agree, but I believe that extensionalism is metaphysically strange. I have been using the term all-at-once to describe a certain holism in our thoughts and percepts, but in the context of the present discussion this is exactly not what I want to convey. All-at-once suggests a simultaneity, an instantaneousness, that is exactly what extensionalism throws out the window. At the same time (har!) I want to preserve the sense of holism at the core of what I meant all along by all-at-once. I perceive things, the length and breadth of them, the beginnings and the ends, and I perceive them as one, at once, if not at one instant. We are all, on a smaller scale, like God as Boethius conceived of Him, taking in all of existence, past and future, in one massive totum simul. In time, no less than in space, we perceive non-zero things as entireties. I perceive, then I may, as a further effort, have a subsequent percept of the original percept as being made of parts, but my percepts are not therefore made of parts. A macro percept is not just an aggregate or composite of micro experiences.

If this temporal holism is true, it is weird. For this wholeness of perception to take place throughout a non-zero length of time, it would seem to be the case that I can see forward in time, or perhaps backwards in time, or both. My mind is at once in immediate touch with the bow and the stern of a percept, and not just potentially or functionally connected to some memory trace of them, but actually reaching through time to them.

It has been asked why it should seem any stranger that experiences have duration than that an electron persists through time. An electron, however, is like the hammer above: the only connection between the electron past and the electron present and the electron future is a causal connection, if one chooses to see it that way. The electron in the future is caused by the electron in the present, but each may be thought of as its own, self-contained, self-defined thing. Experiences are not like that. My experience of a car horn honking would not be what it is at any point in its timeline if any part of it were missing or different. If extensionalism is true, then I actually touch the past and/or future directly with my mind. As I said, weird.

Retentionalism

Must there be such a tight correspondence between time as we experience it and "real" time? Am I so sure that we need time to represent time, or is this just a failure of imagination on my part? Rather, could it be the case that we present time to ourselves using something other than time itself?

It may well be the case that there really is an objective, infinitesimal point of "now", and our minds somehow buffer information from successive moments. As each moment of consciousness happens, it could include this buffered residue from recently past moments smeared out in the appropriate way. Husserl used the term "retention" to describe this. Moments just past are preserved not in long-term memory, but in a retention that is given whole to consciousness all at once. Let us call this position retention theory. Retention theory helps somewhat to overcome the sticky metaphysical problems with extensionalism and addresses the concerns nicely summarized in this quote by Thomas Reid, which I swiped from Ian Phillips:

[I]f we speak strictly and philosophically … no kind of succession can be an object either of the senses, or of consciousness; because the operations of both are confined to the present point of time, and there can be no succession in a point of time; and on that account the motion of a body, which is a successive change of place, could not be observed by the sense alone without the aid of memory.

It is simply weird to think in terms of actually smeared-out experiences that are nevertheless perceived as one thing, given whole to consciousness. If perception happens at a point in time, then as Reid says, we must employ some kind of retention to perceive succession.

Computers can do a remarkably good job analyzing data (like sound waves) over time without any suggestion of metaphysical strangeness going on. They employ a sort of retention. They map the waveform to data structures, then perform their analysis on the data structures. I'd rather not go into the computer consciousness debate right now, but my argument against a computer having time consciousness would be similar to my arguments against it having any consciousness.

Briefly, we have no reason to believe that a computer perceives duration the way we do. Rather, it computes itself into a particular state (in the technical sense, in which the computer is seen to implement a Finite State Machine). This state is manifested by a particular (possibly quite long) integer. By virtue of being in this state, the computer has a predilection to produce certain outputs that we might interpret as meaning that the computer has "perceived" the waveform, but at any instant during its analysis, the computer was just in a particular state, looking at a tiny crumb of data, and moving to another state as a result.
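A minimal sketch of the kind of thing I mean (the "waveform" samples and the threshold here are invented for the example): a finite state machine that detects rising edges in a sampled signal. At every step it holds a single state and inspects a single sample; at no point is it in touch with the signal as a temporally extended whole.

```python
def detect_rising_edge(samples, threshold=0.5):
    """Report the indices where the signal crosses upward through threshold."""
    state = "LOW"          # the machine's entire "memory" is this one state
    edges = []
    for i, s in enumerate(samples):
        if state == "LOW" and s >= threshold:
            state = "HIGH" # this transition is all the "perception" there is
            edges.append(i)
        elif state == "HIGH" and s < threshold:
            state = "LOW"
    return edges

# A made-up sampled waveform with two upward crossings.
wave = [0.0, 0.2, 0.7, 0.9, 0.4, 0.1, 0.6, 0.8]
print(detect_rising_edge(wave))  # → [2, 6]
```

The machine ends up in a state we may interpret as "having heard" the rising edges, but at each instant it was just a state plus one crumb of data, which is precisely the point.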

Some have argued against various construals of retention theory on the basis that it predicts results that we simply do not observe (Thompson 1990, Kelly 2003). In particular, if my seeing the long arc of a pop fly were due to my retention of each successive position of the ball, and superimposing these retentions on my current moment of consciousness, I would see not a ball in motion, but a static arc, perhaps with the longer-ago ball images growing fainter, so that the overall impression would be that of a comet with a parabolic tail. Likewise, if I heard a song according to retention theory, I would hear a cacophony - a simultaneous clash of notes, or at best a chord.

Time-Quale?

I think that these objections are imaginatively constricted and do not give retention theory a fair hearing. Why should we insist on projecting our presumed time-sense onto our instantaneous space of (visual and aural) perceptions in this way? For the time being, speaking metaphorically, let us think of moments of consciousness as points or shapes in a many-dimensional hyperspace, qualia-space. We already know that, for example, each "pixel" of our visual field has a color that can be mapped onto a three-dimensional color space (hue, saturation, and brightness being the axes). In addition to axes for all of the aspects of all of the other sense modalities, there are many other axes, possibly infinitely many. There are qualia that are not reducible to the five senses: what is it like to think about your father, to get a raise, to want ice cream?
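The mapping of a single color onto a three-dimensional space is entirely standard; Python's colorsys module, for instance, converts a red/green/blue triple into hue/saturation/value coordinates (a close cousin of the hue/saturation/brightness axes mentioned above), with each component in the range 0.0 to 1.0:

```python
import colorsys

# Each color is a point in a three-dimensional space. colorsys converts
# red/green/blue coordinates into hue/saturation/value coordinates.
red = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)
green = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)

print(red)    # → (0.0, 1.0, 1.0): hue 0 is red, fully saturated, full value
print(green)  # hue one third of the way around the color wheel
```

The question on the table is whether time could simply be one more axis of this kind.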

My intention here is not to get into the no-man's land between qualitative consciousness and cognition but to argue that qualia-space is big and has lots of dimensions. Why couldn't there be one more axis, a time axis, in qualia-space? When we see the pop fly, we see it smeared out, but not in such a way that the smearing-out takes place within the instantaneous visual field, but along the axis of an entirely different quale, the time quale. Similarly, the notes of a song are arranged along this axis, imbued with this quale as well, and not all jumbled into an instantaneous aural experience as a chord. What is it like to experience duration? It just is what it is, and a given conscious experience can have a temporal aspect along with, say visual, aural, and emotional aspects without any of these aspects clashing or having to be mapped to another.

This construal of retention theory does not necessitate anything like a late-night comedy five-second delay: each individual moment of consciousness would be experienced as soon as it could be, along with a continuum of just-past moments of consciousness, and would then itself also be retained along this temporal axis in qualia-space, to be similarly subsumed by subsequent moments of consciousness. After some time, the longer-ago moments fade out completely. What we perceive as the immediate undeniable passage of time, directly perceived, that is, the time we experience whenever we see anything moving or hear just about anything at all, is, in a sense, an illusion created within the mind. Just as redness is only how we happen to paint our experiences of certain wavelengths of light, and is only arbitrarily associated with that range of wavelengths, this time-quale merely represents actual time, and is only arbitrarily associated with it. The real nature of actual time is then as imponderable as the "real" color of the photons we see as red. Let's call this position the time-quale position.

On the face of it, it seems that the retentionalist/time-quale position is much easier to swallow than the extensionalist/temporal realist position. Why make a (somewhat outlandish) metaphysical claim when you can make a merely psychological one instead? Certainly adding just one more axis to qualia-space doesn't seem much flakier than the claims I have already made about qualia and their implications for Nature, but positing objectively smeared-out presents, and direct perception of them, is an entirely different matter.

Ultimately, however, while the extentionalist position implies some metaphysics that are hard to swallow, the retentionalist position is impossible. I once again appeal to direct subjective experience. The retention theory/time-quale position entails a distinction between time as perceived (the time-quale) and something else, which I will call scientific, or actual time. For a retentionalist, time actually passes in the real world, in scientific time, and throughout time, different sensory impressions are made upon the mind and buffered there as they are experienced. These impressions are tagged with a timestamp in some way and ordered appropriately. When the buffered information is presented consciously as part of the just-past of a subsequent moment of consciousness, it is strung together and presented as one thing, imbued with the time-quale, or smeared out along the time axis in qualia-space. Each of the seemingly smeared-out moments we have ever experienced, then, has actually been perceived in an instantaneous flash, and the smeared-outness through time that we think we are perceiving is really another quale, like a new color.

This seems plausible enough at first, but it is a very fine line to hold. That which is mysterious about time, that which seems unlikely to be captured in an instantaneous percept, is not just some collection of facts about scientific time distilled from some formulas, but is right there in our immediate temporal percepts. By positing a distinction between scientific time and perceived time, we were trying to let the mind have its temporally smeared-out percepts, but in a way that is metaphysically "safe". The aspects of time that make it metaphysically inconvenient to give directly to consciousness are to be cordoned off in the realm of scientific time, while the mind plays with its instantaneous time-quale, getting its timestamped retentions in order.

But now we have to ask ourselves if we can get away with this maneuver. Can we separate our sense of duration from scientific, actual time in this way? How much of what we know about time is already built in, inextricably, to our intuitive sense of duration? When we speak of our sense of a non-zero duration being contained in a zero-length instant of "actual" time, to what extent is this the same as (nonsensically) speaking of a non-zero amount of time being contained in a zero-length amount of time? As David L. Thompson said (1990),

…if all our ideas are based on experience, then of course the notion of objective time, as we understand it, (and what else can we speak about?) must be based on experience. The objective notions of scientific time, and any philosophical concepts based on these, must be constituted out of our original experience of internal time.

(emphasis original)

To what extent does retention/time quale theory let the fox into the henhouse? Even if there is a radical mismatch between external time and our experience of time, this may not help if the problems about time consciousness are inherent parts of the time experience. Some people say that while my red experiences are not themselves red, my experiences of time are, and must be, temporal. Are we sure? Or is it conceivable, even in the abstract, for external time to be a completely different animal than experienced time? When I consider, for example, my notion of motion, for an arc of a pop fly to be contained in its entirety in a single instant of "actual" time would mean that "actual" time would have very little in common with any conception of time that I understand. The mysterious essence of time, that which makes it inconceivable to compress into a timeless flash, may already be there in the subjective experience of time. Can everything we experience about time just be the paint we apply to sequences of timestamped retentions, the way red is the way we paint certain wavelengths of light? If so, then time presents no problems for us beyond the familiar problems with all qualia, and retention/time-quale theory is plausible. If not, we are forced to the metaphysical strangeness of extentionalism.

Moreover, as St. Augustine said in Confessions, XI (thanks to Natalja Deng):

If any fraction of time be conceived that cannot now be divided even into the most minute momentary point, this alone is what we may call time present. But this flies so rapidly from future to past that it cannot be extended by any delay. For if it is extended, it is then divided into past and future. But the present has no extension whatever.

People who believe the version of retentionalism that holds that all perception is instantaneous (the "presentists") are failing, I think, to appreciate just how short a time 0.0000… seconds is. There is no action in zero seconds, no activity whatsoever; certainly no neurological activity. I claim that there is no phenomenological activity either.

If you think of a four-dimensional block universe, could there be consciousness in a perfectly "flat", durationless 3-D slice of it? If such a timeslice winked into existence for an infinitely short time and winked out again, and that were the only universe that ever existed, could there be consciousness in it? I think not, and even if you think the answer is yes, then I think the metaphysics of that "yes" are at least as problematic as the metaphysics of the extentionalist position - you are claiming a consciousness that floats completely free of any physical process. If the retentionalist claims that zero does not really mean zero, but just some pretty short time, then the genie is out of the bottle: time itself is necessary to perceive time, and you might as well call yourself an extentionalist.

As to the reluctance to bite a metaphysical bullet when we might be able to get away with biting a psychological one instead, I have already argued that we have to bite a metaphysical bullet anyway to see anything all-at-once, even a stick lying on the ground. Extending this into the realm of the temporal as well as the spatial and conceptual is not much more outrageous, and may actually clear up some of the mysteries surrounding time that have nothing to do with consciousness.

There is a lot that science does not understand about time, and consequently is silent about. Science generally treats the universe as a four-dimensional block, with the Big Bang at one temporal end. Leaving aside some wrinkles involving relativity, science speaks of points in time, just as it does points in space, and these points can be thought of as three-dimensional cross sections or slices taken out of this block universe. But nowhere in science, certainly not in physics, is there any mention whatsoever of a constantly moving privileged point or timeslice called "the present". What makes now now? Is it just a psychological trick? My point here is that the hard sciences are superb at describing the things they do describe, but there is a great deal of room in the places where they are silent for conjecture about what is really going on. Speculations about real live smeared-out presents, and different presents of different durations for different consciousnesses, do not so much contradict any scientific facts as they try to fill in some of those gaps.

If a thought or percept is temporally thick, what exactly does this mean? I have a strong intuition that my percepts exist through time, that I have direct experiential contact with something that spans a non-zero amount of time. Does this mean I see the future? Not really. That would imply that I have an experience at one point in time of something that takes place at another point in time. But it does not really make sense to speak of any experiences at a point in time. They don't come in points.

I do not know how experiences are individuated, or if there ever will be any hard and fast criteria for individuating them. But part of the point of calling my qualitative subjective experience qualitative is the claim that however an experience may or may not be individuated as you scale up, you certainly cannot subdivide it by scaling down. Experiences tend to fuzz out around the edges, and it may be hard to tell exactly where their outer boundaries are, but I am certain that somewhere within those fuzzy boundaries, an experience must be what it is in its entirety, as a whole, not as a function of any "parts". What I am now suggesting is that this indivisible, all-at-once whole exists as it does over time, in addition to whatever other sense in which it might exist.


Free Will

Anti-physicalists are often accused by physicalists of trying to sneak God in the back door, or some watered-down version of God, like the soul, or just some notion of the inherent specialness of human beings. While most anti-physicalists do not harbor such hidden agendas, they are sensitive enough to the accusation that they sometimes wrongly neglect branches of inquiry that might seem to lend circumstantial weight to it. One such branch of inquiry is the issue of free will.

The question of free will is one of philosophy's most frequently asked questions. I once believed that either the question was incoherent or the answer was no. People do have some powerful intuitions about free will, though, and it is worth trying to clarify and articulate those intuitions if for no other reason than that the question keeps coming up again and again over the millennia.

In philosophical debates, people generally fall into one of three categories when it comes to the question of free will. First, there is free will eliminativism (there is just no such thing as free will). Then there is compatibilism, which says that while from a scientific point of view, we are effectively deterministic machines, this still allows for any notion of free will worth having. Finally, we have free will libertarianism, which is the whole-hog belief in Free Will (capital F, capital W). Not many people these days admit to being libertarians.

Two Cheers For Compatibilism

No matter what metaphysical commitments you have, you believe in free will. Not in any grand fundamental sense, but in an everyday sense. Ever been on a diet? Ever looked at the apple you brought as an afternoon snack, but couldn't help thinking about the Snickers bars in the vending machine down the hall in the break room? You know what I'm talking about. The question, then, is what mechanisms implement this.

I am actually pretty sympathetic to compatibilism. No one denies that the mind is very complex, and that there are a good many levels of functional organization between any putatively deterministic molecules bonking around in my neurons and my feeding a sweaty, wrinkled dollar bill into the vending machine. If the free will that we experience turns out to be implemented by a deterministic substrate, way, way down, it would be hubristic to be bothered by this. Who among us can claim to have such a tightly integrated picture of reality across so many levels of organization that it matters to them that their decision to get a second Snickers bar half an hour later, leaving the apple to rot, is manifested by zillions of deterministic atoms rather than non-deterministic atoms? In general, however, the debates around free will concern the full-bore libertarian kind. This is the kind of free will that is philosophically interesting, as opposed to (or in addition to) being psychologically interesting, so hereafter that is what I mean when I speak of free will.

What Even Is Free Will?

As philosophers, we are free to define terms like "free will" any way we choose, but if we stray too far from common usage our speculations become a purely technical exercise. What are the intuitions normal people draw upon when they use the term "free will"? Is free will a quaint human vanity? What are we even talking about when we ask about free will? Can we frame the notion of free will in such a way that it is even coherent yet still respects our rough intuitions? What would a mind have to be like for it to have free will, and how would it work? What kinds of natural laws would there have to be in a universe for us to be able to say that that universe even allowed for any intuitively satisfying notion of free will? Is our universe such a universe? If we philosophers get this wrong, will our justice system crumble, causing society to collapse into brutal barbarism? On this last question, it turns out I am confident that no one - absolutely no one - cares one whit what philosophers think. Do not worry about the social implications of your metaphysics, especially since you can never know how society would interpret it anyway, even if they were to accept it as true.

We all have some ideas about free will and have probably read about it, but before I get into philosophical speculations, I'd like to highlight some of my own off-the-cuff pretheoretical intuitions. There are certain aspects of free will that I think are baked into our common understanding of the term but that for whatever reason, do not get enough explicit consideration in the literature.

Free Will Does Not Need To Be Hooked Up To A Motor System

Free will is often thought of in terms of action, in terms of how I might impose myself upon the world. This, however, is not a necessary ingredient. Free will, if it exists at all, is an aspect of consciousness, and not at all dependent on my ability to act on it. That is, if we decide ultimately that free will is real, it will be something that I possess even if I am lying completely paralyzed in a hospital bed, as long as my conscious mind is functioning. The kernel of will exists, if it does at all, independent of any ability to impose it upon the world. Lying in the bed, I can allow myself to wallow in self-pity, rage, and despair, or I can decide to spend my time calculating sequences of prime numbers, or I can try to truly forgive everyone who ever wronged me and attain a state of perfect internal peace. These constitute willful decisions, and they are no less willful if I die without ever having recovered the ability to act outwardly upon them, even to the extent of telling anyone else about them.

Free Will Is Inherently Creative

Free will is too often characterized in terms of selection among a limited set of options: choose one entree from column A and a side dish from column B. While will often ultimately manifests itself as a selection like that, the force behind that selection is an exercise of creative visualization. We envision different outcomes, different futures, different selves, and therein lies the will, even for something as mundane as ordering Chinese food. It is creative will that leads an artist to paint a particular painting in a particular way. Most of us have had experiences of this kind at one time or another - being in that creative groove is an essentially willful state of mind. Will is creative in an unbounded, open-ended kind of way. When an ancient ruler decides that when he dies, a man-made mountain should rise from the desert to be his tomb, and that tens of thousands of slaves should work for decades to make that happen, that is a monumentally willful act. Will is about creating the options in the first place as much as it is about choosing among them.

Free Will Is Constitutive Of Self And Not Necessarily Non-Deterministic

People often say that I do not have free will if my actions are rigidly determined by the actions of the parts of which I am made. If all the little parts are just doing what they must according to the laws of physics, there is no way the whole could be doing something above and beyond the sum of the parts - the whole just is the sum of the parts. And if the whole somehow had this thing called free will, and this free will had any causal efficacy whatsoever (like the ability to move my arms or legs, or to make my fingers type), it would be a ghost in the machine: somewhere in my body there is at least one molecule that, under the influence of this purported free will, does something different than whatever it would do if it were not under the influence of this free will. That is, if the molecule (or cell, or muscle fiber) were acting only in accord with the physics that govern such things, it would behave one way, but under the influence of free will, it behaves another way. This would seem to imply that free will (of the whole-influencing-the-parts variety) necessarily violates the laws of physics. But no scientist anywhere has seen any violations of the laws of physics at work in the human body or brain.

Free will is most often contrasted with determinism, but this strikes me as something of a false dichotomy, even for a hard-core libertarian. Whatever we end up deciding free will is, and whether or not determinism precludes it, indeterminism does not save it. Famously, the equations of quantum mechanics, the most successful scientific theory ever, are non-deterministic. That is, they predict outcomes of experiments within a statistical range, but there is always a random factor in the prediction of a particular single trial of an experiment. Moreover, this indeterminacy is generally believed not to be a fault of the equations, gaps to be filled in by future scientists, but a fundamental feature of physical reality.

Some people look hopefully to this indeterminacy of quantum mechanics to give free will a toe-hold in the natural world. There may be something to this, but it is not quantum mechanics' indeterminacy alone that does the trick. If I am made of my parts, if I just am my parts, then I am in the thrall of their functioning, whether those parts function according to deterministic Newtonian physical principles or indeterministic quantum ones. According to my intuitions of what is meant by free will, it buys me no more free will to believe that somewhere in my brain, my decisions are being made by some electron jumping or not jumping to a higher energy orbit within a certain time (no matter how unpredictable beforehand) than to believe that my entire mind functions predictably like a clock.

Moreover, while indeterminism does not by itself save free will, I do not believe that determinism by itself necessarily dooms it. If you made 1000 atom-by-atom copies of me, and each one of them acted in exactly the same way when put in the same situation, it is arguable that it would not necessarily threaten any sense of free will that I may have. My decisions may be freely made, even if I would make the same ones in the same circumstances every time. This may seem initially counter-intuitive, but at least according to my personal sense of the term, free will does not necessarily mean that I have some random X-factor driving my decisions. Some of the most willful decisions we make seem somehow inevitable. Daniel Dennett cites Martin Luther, who, upon taking the possibly suicidal (or worse) stance of denouncing some of the practices of the Catholic Church, said, "Here I stand, I can do no other." Luther's actions were a deep expression of his character. He could not be the person he was and act otherwise. Given who he was, he was bound to do what he did, yet (again, according to my intuitions) his was a quintessentially willful act. When you exercise your free will, you are not merely deciding what to do, you are deciding what to be. You creatively envision a future, and a future self, then you instantiate that future.

This sort of willful determinism is also described quite well by C. S. Lewis (1955) as he recounts the defining moment in his life in which he abandoned his youthful atheism:

I felt myself being, there and then, given a free choice. I could open the door or keep it shut; I could unbuckle the armor or keep it on. Neither choice was presented as a duty; no threat or promise was attached to either; though I knew that to open the door or to take off the corslet meant the incalculable. The choice appeared to be momentous but it was also strangely unemotional. I was moved by no desires or fears. In a sense I was not moved by anything. I chose to open, to unbuckle, to loosen the rein. I say, "I chose," yet it did not really seem possible to do the opposite. On the other hand, I was aware of no motives. You could argue that I was not a free agent, but I am more inclined to think that this came nearer to being a perfectly free act than most that I have ever done. Necessity may not be the opposite of freedom, and perhaps a man is most free when, instead of producing motives, he could only say, "I am what I do."

We define ourselves by our choices. We drag our future selves into existence through our will. William James (1952, p. 288) said, "The problem with the man is less what act he shall now choose to do, than what being he shall now resolve to become."

I think that people feel that determinism threatens free will because it seems to imply that the mind could be accurately modeled by some other system, rendering the will moot. Free will can exist in a world in which the entities having free will act the same way in the same circumstances (i.e. they behave deterministically), but not in a universe in which you could predict that behavior. If my mind is a system that always behaves the same way when it is in state X and given input Y, then any system that could produce that behavior when given input Y in state X for all appropriate behaviors and X's and Y's would be able to second-guess all of my decisions with perfect accuracy. Yet such a system, being nothing but the functioning of its parts, would not be exhibiting free will. It would have no greater identity (none that was causally efficacious, anyway) above and beyond those micro-parts. If the system has no free will and it provably behaves exactly as I do, it certainly seems that any supposed free will that I possess doesn't buy me much.

The threat to free will posed by determinism in such a scenario, though, is not determinism itself, but the fact that it seems to imply that I could be modeled by a system whose behavior is transparently determined by the dynamics of its parts. The problem that determinism poses for free will, then, is that it implies a kind of fundamental, ontological reductionism. However we end up defining me, I may behave the way I do deterministically and still have free will, as long as it is not a reductive determinism, driven exclusively by the functioning of my parts. Conversely, if I am driven strictly by the functioning of my parts, then their being randomized in some way (e.g. according to the principles of quantum mechanics) does not save free will.

Free Will Is For Partless Wholes

Ultimately, the status of free will does not so much depend on whether or not we live in a deterministic universe as it does on whether we live in a universe in which strong, ontological reductionism is true. Regardless of the particular laws that describe the low-level entities in any given universe, if all things in that universe are either those simple low-level entities or high-level things that are nothing more than aggregates of the low-level entities, and all of the behaviors and properties of the high-level entities fall out as inevitable consequences of the behaviors and properties of the low-level entities, then free will (at least as something possessed by the high-level entities) is an incoherent concept.

The claim of free will ultimately depends upon there being some kind of holism at work in the universe. Specifically, for free will to exist in me, it is necessary that I am an intrinsic, inherent individual (i.e. that seeing me as one single thing is not just some way of looking at the pile of matter that is generally considered me); that whatever Nature's principles of individuation are, I count as one of Nature's individuals; that I am a partless whole. Another way of saying this is that for free will to exist, some form of (very) strong emergence must be true. There may be more involved than this, but for there to be free will, this much at least must be true. For me to have free will, I must not be in the thrall of the functioning of my parts, no matter what the operating principles of those parts are, whether those parts function according to deterministic or indeterministic laws. My actions and my future state must depend on some qualitative essence of a holistic, indivisible me.

If the universe does, in fact, exhibit the required type of holism, the principle of parsimony of natural laws must be discarded - we are stuck with an extremely baroque picture of the natural world. In such a world there would not just be a handful of fundamental things of which everything is made: photons, quarks, electrons, neutrinos, etc. and a relatively small number of laws that describe the interactions of this handful of fundamental things. We would instead have an infinite number of fundamental entities; these entities would be complicated, high-level-seeming sorts of things, they might be transient, and each would have its own set of laws.

Do Large Partless Wholes Obey Laws? Does Anything?

This lack of parsimony does not make such a scenario inconsistent or obviously incorrect, however. Imagine that something with free will is an entity whose behavior springs from its own particular nature, such that it generates, manifests, and in fact is its own law, the law of nature that applies only to it. It is an entirely novel thing in the universe, like a new elementary particle. What it does from instant to instant is a surprise to everything else in the universe, including the universe itself. Its behavior, after the fact, could be considered a new law of nature, if one insisted on clinging to that terminology. Furthermore, once the moment is gone, its law will never apply to anything else. In this scenario, the terminology of "things" "obeying" "laws" breaks down and becomes meaningless. If I act the way I do because of the inherent nature of the thing that I am, and what I am will never be repeated, one could say I obey my own custom-made law of nature, of which I am the only instantiation at a particular moment. Or one could not.

This is really just the degenerate case of any law of nature, in that all such laws are inductively derived. There is a sort of Platonism hiding in the concept of a law of nature. In real life there is no such thing. No electron in the history of the universe has ever obeyed a law. Balls on ramps and electrons do what they do not because of some law that they all know about, but because that's what they do. Each electron, without reference to any other electron, and without reference to the way it is supposed to behave, acts like an electron. Each one has somehow memorized, or "knows" its patterns of behavior. Its behavior is built into each electron individually. The law, such as it is, must be written into the hardwiring of each electron, copied a hundred zillion zillion times over, for as many electrons as there are in the universe. No one is obeying anything. As it turns out, all electrons behave pretty much the same way (for unknown reasons), so we write down a general characterization of that behavior and call it a law, and from then on we can speak as if all the electrons in the universe "obey the law".

A law of physics is something we invented, an abstraction, a convenient fiction to help us track the behavior we observe after many trials. Indeed, the whole terminology of "laws of nature" or "laws of physics" strikes me as an Enlightenment-era metaphor with a bit of cultural baggage attached to it, one that we have accepted into our ways of talking and thinking. It reminds me of the Victorian Rudyard Kiplingesque statement that the lion is the "king of the beasts". I can see why someone of a certain era might phrase it that way, transposing a familiar hierarchical political order onto the natural world, in which no one preys upon a lion, but that's not really the way ecosystems work. Calling a lion the king of the beasts, like calling electron behavior a law of nature, says more about the mindset of the speaker than it does about lions or electrons.

What about any "laws" that apply to unique, high-level individuals? If we only have one data point, and always will only have that one data point, it really becomes a matter of preference as to whether to call the behavior of such an individual a law or random behavior. Any unique one-off "laws" that apply to the high-level entities would necessarily be forever unknowable to any outside observer. Looking at such an entity from the outside, its behavior would have to appear to have a random factor in it. Any system of laws applying to a universe with such things in it would characterize the regularities of the simple, low-level things as well as it could, and simply throw up its hands when it ran into the behavior of the high-level entities, labeling it as "random".

We would have a sort of dualism then, but it would be an epistemological dualism, not an ontological dualism. There would be only one universe with one kind of stuff in it, but there would be a division between that which we could characterize completely in third-person terms, and that which would be forever closed off to our laws and theories. In short, in such a picture of the world, given the characterizations a) I act randomly, b) I act out of free will, an expression of my inherent nature, or c) I act deterministically, obeying my unique law, it is perfectly valid and consistent to say d) all of the above.

In practical terms, if our world is really like this, it is unlikely that we could model my behavior with a machine, because the "laws" that determine my operation are unique to me at each instant (the "me" at each instant being different, each with its own law(s)), and undiscoverable without being me. And even if, by some chance, a machine could model my behavior perfectly for a time, say ten hours, there would be, in principle, no way to be sure that it would continue to do so for even one second more.

Oddly, such a view is actually a form of physicalism, in that it posits a physical basis for consciousness and free will, although one that is quite different from that which most physicalists suppose is true. Even if there are these high-level fundamental entities with their own one-off laws, there are still low-level entities like electrons and photons and their more generally applicable laws. Any claims we could make about the high-level entities and the ways in which they behave must not violate the more commonly known basic physical laws that describe the behavior of the low-level entities. Given that whenever we look at the world, all we see is the low-level entities, and their behavior seems pretty unmysteriously described by the physical laws that apply to such entities, is there any wiggle room for these purported unique high-level entities to do anything? Where are they hiding? This is another version of the classic physicalist's challenge regarding the causal closure of the physical world: the dynamics of the world and everything in it are completely nailed down once we nail down the dynamics of the low-level stuff (the physics). However, this is not as true as it appears.

As it turns out, modern physics does characterize the behavior of the fundamental constituents of matter in the way I have said a free will-supporting universe would have to work: we know roughly how things will behave, but there is always an irreducible random factor. Quantum mechanics tells us that the low-level entities are governed by statistical laws only. The exact behavior of the low-level entities is thus not exhaustively and unmysteriously governed by laws - there is an irreducible "random" factor. There is, therefore, some wiggle room for consciousness (or, if you prefer, qualitative high-level physical entities) to be causally efficacious, to exert some extra influence on material things in the universe without violating any known laws. In effect, consciousness exhibiting free will would be a "hidden variable" in a correct physical theory, according to this hypothesis. Crucially, quantum mechanics also gives us examples in the real world of these indeterminate entities scaling upward from the level of the single subatomic particle in the form of entanglements, mixtures, and condensates.
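The difference between a statistical law and a deterministic one can be made concrete with a toy simulation. What follows is a minimal sketch in Python, with an invented 50/50 "measurement" standing in for, say, a spin measurement on a suitably prepared particle; the names and probabilities are illustrative assumptions, not physics. The point is that the law fixes the long-run frequencies almost exactly while remaining silent about every individual trial.

```python
# A sketch of a purely statistical law: the law specifies the long-run
# frequencies of outcomes, but says nothing about any single trial.
# The 50/50 "law" below is an invented stand-in, not a real physical model.
import random

random.seed(0)  # fixed seed so the illustration is repeatable

def measure():
    """One trial: the law only says each outcome occurs half the time."""
    return "up" if random.random() < 0.5 else "down"

trials = [measure() for _ in range(10000)]
freq_up = trials.count("up") / len(trials)

# The aggregate obeys the law to within statistical noise...
print(round(freq_up, 2))
# ...but nothing in the law itself could have predicted any one trial:
print(trials[7])
```

The asymmetry is the whole point: the aggregate frequency is lawlike and predictable, while perfect knowledge of the law tells you nothing about what the next trial will yield.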

Who (Or What) Would Possess Free Will?

How Many? How Long Lived?

We already have truly qualitative consciousness, and this consciousness constitutes a big, complex, indivisible whole. Moreover, this consciousness is efficacious. If you buy all of this, is there any room for there not to be a robust, libertarian free will in our world? But what can we say about how to individuate whatever it is that has it?

Let us imagine that the consciousness that has free will is a short-term thing, more a moment of consciousness than a constant, cradle-to-grave kind of consciousness (see Strawson (1997) for a good article about why this is a plausible, and perhaps the most plausible, way to talk about the self, or his longer work (2009)). We should also be careful about any assumptions about the number of consciousnesses that comprise me, in addition to how many there are across time. It may turn out that "me" is made of a conglomeration of lots of consciousnesses or moments of consciousness. There could be a fundamental sense in which consciousness is real, and possesses free will, and nevertheless the persistent unified self is a useful fiction, at least as we conceive it.

How Invasive Is The World?

Besides the number and longevity of any freely willing entities that comprise an agent, there is a question about the boundaries between it and everything else. Free will is creative, even self-creative, but it is not just riffing in a vacuum. Somehow, we are aware of things. That is, some stuff from the world, over which we have no control, imposes itself upon us. So external stuff changes an agent's internal state, but in a way that the agent nevertheless exercises creative control over. Neither is it the case that the missing ingredient is a complete specification of the agent's ever-changing internal state. If that were the case, then the input from the outside world, together with the internal state, would dictate the agent's output and next internal state in the manner of the classic Finite State Machine of computer science, and we would be back to bare-bones functionalism.

A willful agent would have to incorporate external information into itself as part of its internal field of perception, but could stand back from it as it were, and regard it. Only in the context of free will does descriptive information (as opposed to prescriptive information) make sense. Central to the intuitions we have about free will is the claim that an agent gets to survey reality, then decide what it wants to do about it. It gets to be an unmoved mover, an uncaused cause. It gets to be aware of things without that awareness constituting an algorithm that it must execute. Free will is built into the concept of descriptive information and vice versa.

But does the idea of purely descriptive (i.e. non-prescriptive, non-algorithmic) information make sense? Doesn't the outside world, through physical causation, stick its fingers into you, and play you like a piano? To the extent that you are aware of it, it has pressed itself upon you, it has forced you to conform to it. Under pain of the infinite regress of the homunculus, you are not separate from your thoughts and percepts. As William James said, the thoughts are the thinkers. How can I be all one thing, shaped by reality, but also stand apart from my (descriptive) percepts as a detached observer? What does it mean to be aware of things, if I just am that awareness?

Free will is creative above all if it is anything. It doesn't have to be about anything, in particular, it doesn't have to be about the outside world. Remember, we are not talking (just) about picking options off a menu, even a very complex one, but about creating options in the first place. The "direct" percepts from the senses (I wish there were scare quotes even scarier than normal scare quotes, maybe ""direct"" percepts) don't constitute fingers playing us so much as they are elements that we might (or maybe should) incorporate somewhere in our created-moment-to-moment conscious field.

However short or long lived, however many there are between my ears, there is something in me that is all one consciousness, with some regions of it more malleable than others. The stiffer areas that we take to represent "raw sense data" are pretty undeniable. At the same time, and as part of that same moment of consciousness, there are other aspects that are free to play, to interpret, to want, to focus on this or that. External physical reality presses upon us so that parts of us assume a certain shape. Free will is the process by which I choose the next shape to assume. I can be influenced by a past state without being completely determined by it.

Free will exists if we broaden our notion of it to mean the qualitative, unitary creations of an all-at-once consciousness. But if we think of it as a detached observer deciding what to do about stuff it "knows", then free will is a useful fiction. The self that stands apart from the percepts is a construct, a simulation. The rest of the conscious field (the non-self) is descriptive information, stuff the simulated self "knows" or "perceives". We are subject to the User Illusion (see Nørretranders (1998)). We construct the self as part of the whole picture and attribute will to it. We are self-aware, but the self of which we are aware is a simulation. The whole has free will, but the simulated self only thinks it does.


Particle man, particle man
Doing the things a particle can
What's he like? It's not important
Particle man

Is he a dot, or is he a speck?
When he's underwater does he get wet?
Or does the water get him instead?
Nobody knows, particle man
-They Might Be Giants

How Panpsychism Might Work

The Combination Problem

I and others have argued for the inherent incompleteness of sets of physical laws as descriptions of reality - such ladders of categorization of reality will always be missing the bottom rung. Moreover, we are confronted with a phenomenon, consciousness, that does not seem to have a natural home in the world that physics describes. I have also argued that so-called levels of organization buy us exactly nothing in terms of explaining consciousness: all "higher-level" aggregations or black boxes do for us is allow us to think of masses of low-level parts more effectively and conveniently, given our limitations. No explanatory power is given or taken away by thinking of the lower levels chunked up in one way or another.

Let's imagine for a moment that the panpsychists are right, and that some kind of crumb of proto-consciousness must exist down at the lowest levels of reality, along with mass, charge, and spin. (Although it is probably better to say that this crumb of proto-consciousness underlies or instantiates those other physical properties, or that it implements them willfully.) So how does panpsychism get around the famous combination problem? Even if at the lowest levels of physics, quarks are conscious, and quark behavior is implemented by quark consciousness, each quark is still a windowless monad, blindly knocking into other particles, interacting only by causal, functional dynamics, and we are back in the world of physics.

Some critics of panpsychism seem to think that this is a show-stopper. Given the causal closure of the physical world as described by our science, any quark-consciousness is confined to the quark level, and any scaling or integration into larger entities can only happen by virtue of good old extrinsic functional dynamics. This makes panpsychism the worst of both worlds: you posit something crazy like quark-consciousness, and it doesn't even help you explain human consciousness! If the human scale consciousness comes about by virtue of good old physical interaction, it would exist even if it were implemented by some other substrate than this purported quark-consciousness, in which case the quark-consciousness is epiphenomenal, and we know what William of Occam would say about that.

The combination problem is not quite the show-stopper that it is made out to be, however. It is more of an unknown than a can't-possibly-work. Somehow consciousness scales, but we don't know Nature's scaling principles. What counts as a thing in this regard? Is a spoon conscious? A pile of sand? The air in this room? Does proximity of particles matter? These are open questions, and ones that a panpsychist does not necessarily have to answer just yet.

That said, this is an area in which even my fellow panpsychists get a bit hand-wavy. My sense is that they don't want to bite the bullet. It's one thing to come out as a panpsychist, but another to take that extra step. Let's not get too crazy here. So the bullet I chew on is this: yes, there must be (proto?) consciousness down at the fundamental levels of physical reality. In addition, consciousness, as such, must scale from the quark level to full-blown, cognitively rich, multi-modal human consciousness, in a way that is not [merely] described by the functional dynamics of causal bonking, networks of micro-parts obeying simple laws, however big and complicated those networks may be. Moreover, this big consciousness is causally efficacious.

I want to be clear about the caliber of the bullet I'm biting here. I'm going up against the oft-cited causal closure of the physical world. I speculate that there are certain kinds of systems that allow for some kind of scaling: true, really-there, inherent integration or conglomeration. These systems might be rare, but natural selection, in all its creativity, has stumbled upon and exploited this principle in brains. But yes - we should expect to see some configuration of molecules actually do something because of this consciousness that is not predicted by normal physics (although perhaps in a way that does not exactly contradict normal physics).

Particle Man

I think it would be cool to write a comic book in which the protagonist is a superhero: six foot four, barrel-chested, cape, square jaw, steely gaze. He can do anything he wants, unbound by the laws of physics, because he is an elementary particle. A big one. If you get close to him, you see that he is not made of cells, which are in turn made of molecules. He is just one big indivisible thing. He is an example of what William Seager calls a "large simple", or Galen Strawson (2009, p. 380) calls a "complex absolute unity". You've got your electrons, you've got your quarks, you've got your photons, and you've got Particle Man.

There has never been a Particle Man before, and there never will be again, so there is no existing body of laws that apply to him. Every moment of his existence, whatever he does is automatically a new law of nature, albeit a uselessly inapplicable one - the law would apply to the one moment of its coinage, and no other. There are no existing laws that define the mass of a Particle Man, so he gets to decide what his mass is at any moment. Same with his shape and size, his interactions with other stuff, his trajectory through space-time. He can collapse himself into a singularity, or he can spread himself as a fine matter-mist throughout the cosmos. He can wink in and out of existence.

Whatever he does at any time is just what Particle Man does at that time, and as such is a law of nature with all the rock-solid authority of Ohm's law. In fact, it is only whimsy on his part that he chooses to assume human form at all. It might be a pretty boring comic book, come to think of it - Particle Man would have godlike powers, far more than those of Superman (although the Green Lantern, in some interpretations, has approached this level of power).

Surely, though, Nature, which has been so tidy and parsimonious with its elementary particles and laws up to now, would not create something so extravagant as Particle Man, casting off new laws willy-nilly, every instant of every day! Possibly, but our preference for neat, tidy, elegant systems, and our certainty that everything big and tricky must be made of things that are small and simple, are not binding on Nature. Indeed, our preferences along these lines have guided us to astounding achievements over the years, but have also left us with some blind spots.

Now I do not think that Particle Man exists, at least as a whole man-sized solid thing. Rather, I suspect that a given unitary moment of consciousness consists of a Particle Man-like blob of - something. Consciousness just can't exist in a world made of pure physics as we understand it (if a world made of "pure physics" is even a coherent idea). Panpsychism must be true. But a conservative panpsychism that posits (proto) consciousness at the lowest level and keeps it there is stopped dead by the combination problem. We are forced to go out on a limb and speculate as to how consciousness - as such, not just as an "emergent" property of functional dynamics - scales up beyond the individual physical particles.

Quantum Holism And Chaos

Quantum mechanics tells us about different systems of small particles coming together into such blobs that, although born of complexes of smaller things, and destined to fall apart into subcomponents in the future, are, must be, one single thing as far as Nature is concerned. Quantum mechanics shows us real emergence, so-called strong or radical emergence in action, not just the perceived emergence of the flock "emerging" from the birds, or the liquidity "emerging" from the motion of the molecules of water. To be useful to a full-blooded panpsychism, it doesn't have to be a whole person, and it doesn't have to be very long-lived - it just needs to be enough of a new thing to be qualitatively unique and causally efficacious on some macro scale.

Chaos theory tells us that the universe is chock full of situations in which neighbors do not map to neighbors, in which tiny differences make huge differences - the oft-cited Butterfly Effect. The panpsychist's qualitative blob would not have to be very big at all to make the kind of difference we need it to make in order to push the brain around. Not only am I not a quantum physicist, but I am also not a neuroscientist. Nevertheless, it strikes me that in our chaotic universe, a neuron would be a fine place to look for microscopic changes having macroscopic effects.
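The Butterfly Effect is easy to demonstrate numerically. Here is a minimal sketch using the logistic map, a textbook chaotic system (the particular parameter and starting values are just illustrative):

```python
# The logistic map x -> r*x*(1-x) with r = 4.0 is a standard example of
# chaos: two trajectories that start one part in a billion apart
# diverge to macroscopically different values within a few dozen steps.

def logistic(x, r=4.0):
    return r * x * (1.0 - x)

a, b = 0.4, 0.4 + 1e-9   # two almost identical starting points
diffs = []
for step in range(50):
    a, b = logistic(a), logistic(b)
    diffs.append(abs(a - b))

# The gap between the trajectories grows roughly exponentially until it
# saturates at order 1: neighbors do not map to neighbors.
print(max(diffs))
```

A microscopic nudge, of the size a tiny qualitative blob might supply, is all such a system needs to end up somewhere macroscopically different.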

Holism And Its Discontents

It is now generally accepted that 65 million years ago an asteroid struck the earth, and this impact resulted in the death of the dinosaurs. I remember learning when I was young, however, that when this hypothesis was first put forward, it met a lot of resistance. Apparently the mainstream geologists took a while to come around, even in the face of almost overwhelming physical evidence of such an asteroid impact. I have since read that at the end of the 19th century, even among the Harvard faculty, there were still some old geologists who believed in the literal truth of the Christian bible. These researchers looked for evidence of Noah's flood in the fossil record, for instance.

As the biblical literalists died out or retired, their views became an embarrassment to later generations of geologists. So much so, that even in the 1960's, long after any living geologist could claim to have ever met a colleague holding these views, the world of academic geology held onto a reflexive aversion to any explanations that seemed bible-adjacent, i.e. that involved single-day cataclysms. There would be no smitings, Great Floods, or destruction raining down from the sky in any of our theories, thank you very much. This more modern-minded cohort knew the kinds of theories they thought were nutty, but none of them had any individual memory of why they thought they were nutty.

We are all educated people living in a scientific age. There is a certain way of thinking about the world that comes naturally to us, since it has been drummed into our heads from elementary school onwards. We are so used to it now that we don't appreciate what a leap it was for us at one time, both culturally and individually. Children and primitive societies are natural animists, and anthropomorphizers. Cyclones are angry. The earth is a loving mother. Even rocks possess a certain stoic wisdom.

Eventually, after thousands of years, and led by a few singular geniuses, we discovered a new way of thinking. This new reductive habit of thought consists of approaching every big complicated thing as an aggregate of small simple things that behave in consistent lawlike ways. Final causation was out, efficient causation was in. Since the time of Galileo and Newton, this way of looking at the world has been spectacularly successful within its proper domain, and has led to what is legitimately called the Enlightenment and the Scientific Revolution (the internet, vaccines, people walking on the moon, microwave ovens).

Here we are, a few hundred years later, and that revolution is still charging along, and we are all taught this way of thinking, whether we major in physics or not. We accept scientism as the default way of seeing the world. It is hard for us to imagine (or perhaps to remember culturally) just how hard-won and counterintuitive the new reductive ways of thinking are. Training ourselves to think like this was slow and difficult at one time. We have mastered it wonderfully, but it has left us with a residual knee-jerk reaction against anything that smells even faintly like the discredited old holism or anthropomorphism. Such ideas strike us as unseemly and embarrassing. Like the geologists of the 1960's, we have been left with a blind spot by our self-imposed mental training. Even if holism were staring us in the face, we would refuse to see it. Nothing like the zeal of a convert.


The Self

The Infinite Regress of the Cartesian Theater

One of the most certain truths in the world is Descartes' "I think, therefore I am". Descartes was so certain of the existence of some kind of essential self that Daniel Dennett coined the term "Cartesian theater" to describe the sense that we all have of being the audience enjoying the rich play of our experiences. The theater metaphor comes naturally to us. It sure seems as though there is a show going on, and we feel confident that there are lots of mechanical maintenance functions that our minds take care of "backstage". The show is the more or less coherent narrative of whatever is in the forefront of our attention at any given time. Moreover, we tend to believe in an enduring self, independent of our individual percepts. Sometimes this virtual "self" in our mind, the one sitting in the audience of the Cartesian theater watching our thoughts and percepts, is referred to as a homunculus. This is not necessarily to imply that most of us believe that the self or homunculus is an identifiable region of the brain like the pineal gland, just that at some level of organization, we naturally assume that there is a self that is separate from the stuff that self experiences, remembers, thinks about, etc. The Cartesian theater metaphor suggests that some process dresses up reality in qualitative costumes (or creates reality completely) and presents it to consciousness, or the self, and that the self just sits back in the audience and watches.

In the real physical world, a child learns from a very young age that everything within my skin is "me", and everything outside of it is "not me". There is a subject/object distinction. There is me, and there is the tree. When I want to move my arm, it moves, but when, by similar force of will, I want the tree to move, it stays put. When I smack myself in the arm it hurts, but when I smack the tree, it does not (or at least it does not hurt at the place on the tree where I struck it). It feels natural to carry this distinction over into the world of our own minds. When we speak of our percepts and thoughts, we still cast the situation in these terms: "I" perceive the "tree", even when the tree is one that is created entirely in the mind. I question the appropriateness of the subject/object distinction in this case, however. In some sense, it is the very percept of the tree that is the "me", or rather it is the process of creation of that percept. To separate the self from the percept is to invite infinite regress.

For there to be a Cartesian theater with a homunculus in the audience, information must come in from our sense organs, thoughts must be generated and presented in some fashion to the homunculus, who then experiences them. The homunculus, then, has the same Hard Problem relative to this presentation that we do relative to our sense organs. Any distinction we can draw between the homunculus and the percepts, any line between some receptors (functionally construed) on the homunculus and those aspects of the percepts that these receptors are sensitive to, serves to push the whole problem down one more level, but doesn't solve it. We still have a problem of how the stimuli impinging on the homunculus come together in its "mind" to form the rich qualitative field of consciousness that it has. Perhaps it has a homunculus in its mind too, watching its Cartesian theater, and so on ad infinitum. Under pain of infinite regress, then, there can be no homunculus in the audience of the Cartesian theater separate from whatever is going on onstage. The self is just another part of our world-model, a hypothesized construct.

The User Illusion

We are subject to what Tor Nørretranders has called The User Illusion. In his book of the same name (1998), he lays out the explanation of the title (pp 291-293):

The engineers who developed the first computers did not put much thought into the user interface because all the users were professionals. So the computers looked cryptic and clumsy. Alan Kay writes:

"The user interface was once the last part of the system to be designed. Now it is the first. It is recognized as being primary because, to novices and professionals alike, what is presented to one's senses is one's computer. The 'user illusion,' as my colleagues and I called it at the Xerox Palo Alto Research Center, is the simplified myth everyone builds to explain (and make guesses about) the system's actions and what should be done next."

The user illusion, then, is the picture the user has of the machine. Kay and his colleagues realized that it does not really matter whether this picture is accurate or complete, just as long as it is coherent and appropriate. It is better to have an incomplete, metaphorical picture of how the computer works than to have no picture at all.

So what matters is not explaining to the user how the computer works but the creation of a myth that is consistent and appropriate - and is based on the user, not the computer.

The computer currently recording this word presents the user with a sequence of texts organized into folders on a desktop. Lousy chapters get dragged into the trash can at bottom right. When the user wants to see if a chapter is too long, he can use the pocket calculator in the desk drawer.

But there are no folders, trash cans, or pocket calculators inside. There are just quantities of 0's and 1's in sequence. Indescribable quantities: A computer can contain many millions of 0's or 1's. But this is nothing that bothers the user; all he needs is to extract his work when he has finished it. The user can be completely indifferent to these enormous numbers of 0's and 1's. The user is interested only in what the user illusion presents: pages of a chapter, folders of completed chapters, folders of loose ends, correspondence, goofed sentences, and unorganized thoughts.

The user illusion is a metaphor, indifferent to the actual 0's and 1's; instead it is concerned with their overall function.

The claim, then, is that the user illusion is a good metaphor for consciousness. Our consciousness is our user illusion for ourselves and the world.

Consciousness is not a user illusion for the whole world or the whole of oneself. Consciousness is a user illusion for the aspect of the world that can be affected by oneself and the part of oneself that can be affected by the consciousness.

The user illusion is one's very own map of oneself and one's possibilities of intervening in the world. As the British biologist Richard Dawkins puts it, "Perhaps consciousness arises when the brain's simulation of the world becomes so complete that it must include a model of itself."

If consciousness is my user illusion of myself, it must insist that precisely this user is the user; it must reflect the user's horizons, not that which is used. Therefore the user illusion operates with a user by the name of I.

The I experiences that it is the I that acts; that it is the I that senses; that it is the I that thinks. But it is the Me that does so. I am my user illusion of myself.

Just as the computer contains loads of bits that a user is not interested in, the Me contains loads of bits the I is not interested in. The I can't be bothered to know how the heart pumps blood around the Me - not all the time, at any rate. Nor can the I be bothered to know how an association occurs in the Me: the I would much rather know what it involves.

But it is not only the I experienced as our personal identity and active subject that is an illusion. Even what we actually experience is a user illusion. The world we see, mark, feel, and experience is an illusion.

There are no colors, sounds, or smells out there in the world. They are things we experience. This does not mean that there is no world, for indeed there is: The world just is. It has no properties until it is experienced. At any rate, not properties like color, smell, and sound.

I see a panorama, a field of vision, but it is not identical with what arrives at my senses. It is a reconstruction, a simulation, a presentation of what my senses receive. An interpretation, a hypothesis.

This tracks very closely with what Daniel Dennett says. In fact, he has made the same point, in almost exactly the same way, in his work. Nørretranders does not seem to address qualia directly, it cannot be the case that all of consciousness is a user illusion (the buck has to stop somewhere), and I definitely disagree with the embedded Dawkins quote. Still, I think Nørretranders is onto something here. The self that I think of as separate from my thoughts, percepts, memories, etc. is not quite what it appears to be - it is a convenient fiction, a simulation. There is still what it is like to see red, and what it is like to remember that Paris is in France, and that sort of thing is still mysterious. Its mystery is not dissolved, as Dawkins thinks it is, and Nørretranders possibly thinks it is, but the image of the self as audience watching the show has a huge asterisk next to it, at best.

The Players Are The Audience

As William James said, the thoughts are the thinkers. The memories are the rememberers, the experiences are the experiencers. While this must be true, when I see a red apple, the thought is not of a red apple; it is of an observer seeing a red apple. The self of which we are aware when we claim to be self-aware is a simulation, constructed as part of our perceptual and cognitive apparatus, built into the percepts. The actors on the stage are the audience. I am the scene on the stage of the Cartesian theater. James also suggested that instead of saying, "I am thinking" it might be more appropriate to say, "it is thinking", using "it" in the same sense that we use it when we say "it is raining." I might add to James's suggestion that in particular, it is thinking you. The sense of this is very well summed up in a quote by Johann Gottlieb Fichte that I found on page 93 of Strawson (2009):

The self posits itself, and by virtue of this mere self-assertion it exists; and conversely, the self exists and posits its own existence by virtue of merely existing. It is at once the agent and the product of action; the active, and what the activity brings about; action and deed are one and the same, and hence the "I am" expresses an act.

Sometimes I imagine the perceiver/self as a gelatinous, pseudopod-like thing, assuming the shape of whatever thoughts it has. This is also part of the motivation behind my comic book superhero Particle Man, whose adventures I won't recount here.

This notion also explains, to some extent, the troublesome second-orderliness of consciousness that motivates HOT (higher-order thought) theories: to see red is to know that you are seeing red. In general, it seems mysterious that experiencing is inseparable from knowing that you are experiencing, that you can't see the apple without also having a sense of yourself as an experiencing self. This mystery goes away if the self is a construct created specifically to bring about exactly this effect. We call the self into being precisely to be the subject of our experiencings, to give them an anchor, a point of view, to make sense of them.


Daniel Dennett

The time has come to talk about Daniel Dennett. He is the self-proclaimed captain of the "A" team, the king of the reductive materialists (he declared David Chalmers the captain of the "B" team). His manifesto, 1991's "Consciousness Explained", is an absolute must-read for anyone interested in this field. It is extremely clearly written, persuasive, and loaded with style, a dry wit, and fascinating facts and findings relating to the study of the human mind. One simply cannot discuss philosophy of mind in any useful way without having some response to Daniel Dennett and his arguments. At the same time, one must occasionally rise above his characterization of his opponents as fearful, reactionary, silly people desperately clinging to their vanities about the human soul. It should come as no surprise at this point that I think Dennett is wrong, at least in some of his conclusions. It may come as something of a surprise, however, in this sharply divided field of inquiry, that I think that nearly all of what Dennett says in his book is right.

Things He Is Mostly Right About

Pandemonium

Dennett has no use for qualophiles like myself. This is the part I disagree with. But the vast bulk of the book is concerned not with arguments against qualia themselves, but against the idea that there is some central executive in the mind, some special module (either anatomically or functionally defined) that constitutes "my consciousness", such that sensory inputs are distinctly pre-conscious on one side of the module, and memories or motor outputs are distinctly post-conscious on the other side of it. Instead, Dennett proposes what he calls the Multiple Drafts Model, according to which there are lots of modules (or agents, or, more colorfully, demons), lots of versions or portions of versions of sensory inputs, and it never exactly comes together in any one place or at any one time in the brain to constitute "my field of consciousness right now". Dennett often describes the mind as more of a pandemonium (literally, "demons all over") than a bureaucracy. He makes many persuasive arguments against the idea of a single central executive in the mind, and powerfully challenges our intuitions about our selfhood. But then he makes an abrupt right turn, concluding that therefore, qualia do not exist in any sense whatsoever.

According to Dennett's hypothesis, among the specialized modules in the brain there is a verbalizer, a narrative spinner (some people call this module or something like it the monkey mind; I think of it as the chatterbox). The chatterbox produces words, and words are very potent or sticky tags in memory. They are not merely easy to grab hold of, they are downright magnetic. They are velcro. The output of this particular module seduces us into thinking that what it does, its narrative, is "what I was thinking" or "what I was experiencing" because when we wonder what we were experiencing or thinking, it leaps to answer. The reports of this chatterbox constitute what we think of as the "self". Dennett says we spin a self as automatically as spiders spin webs or beavers build dams. This very property makes this chatterbox powerful, and gives its narrative strong influence in guiding future action, thought and experience, but it is a mistake to therefore declare it to be the Central Executive.

Self As Center Of Narrative Gravity

Dennett likes to say that what we call the "self" is really just a "center of narrative gravity", and as such, merely a useful fiction. In the same way, an automobile engine may have a center of gravity, and that center of gravity may move around within the engine as it runs. The center of gravity of the engine is perfectly real in some sense - one could locate it as precisely as one wanted to - but in another sense it does not really exist. It performs no work. It is what I might call a may-be-seen-as kind of thing, not a really-there kind of thing. Dennett thinks that the self is the center of narrative gravity in exactly this sense.

"Direct" Perception Vs. Judgment

Dennett makes a great deal of the difficulty of distinguishing clearly between experiencing something as such-and-such, and judging it to be such-and-such. In response to an imaginary qualophile, Dennett says, "You seem to think there's a difference between thinking (judging, deciding, being of the heartfelt opinion that) something seems pink to you and something really seeming pink to you [emphasis his]. But there is no difference. There is no such phenomenon as really seeming - over and above the phenomenon of judging in one way or another that something is the case." (p. 364) In a way he is right - it is a fascinating and fruitful problem. In his book, Dennett gives many examples that serve to undermine our faith that we really do experience what we think we experience, and there are many others that are not in his book. That said, I can't help but smile at the fact that even he used the qualitatively loaded term "heartfelt" in the way he did in the quote above - seems like begging the question a bit given the argument he is making.

Dennett says to imagine that you enter a room with pop art wallpaper; specifically, a repeating pattern of portraits of Marilyn Monroe. Now, we only have even reasonably high-resolution vision in our fovea, the portion of our field of vision directly in front. The fovea is surprisingly narrow. We compensate with saccades - unnoticeably quick eye movements. Nevertheless, as Dennett says, we could not possibly actually see all the details of all the Marilyns in the room in the time it takes us to form the certain impression of being in a room with hundreds of perfectly crisp, distinct portraits of Marilyn. I'll let Dennett himself take it from here:

Now, is it possible that the brain takes one of its high-resolution foveal views of Marilyn and reproduces it, as if by photocopying, across an internal mapping of the expanse of wall? That is the only way the high-resolution details you used to identify Marilyn could "get into the background" at all, since parafoveal vision is not sharp enough to provide it by itself. I suppose it is possible in principle, but the brain almost certainly does not go to the trouble of doing that filling in! Having identified a single Marilyn, and having received no information to the effect that the other blobs are not Marilyns, it jumps to the conclusion that the rest are Marilyns, and labels the whole region "more Marilyns" without any further rendering of Marilyn at all.

Of course it does not seem that way to you. It seems to you as if you are actually seeing hundreds of identical Marilyns. And in one sense you are: there are, indeed, hundreds of identical Marilyns out there on the wall, and you're seeing them. What is not the case, however, is that there are hundreds of identical Marilyns represented in your brain. Your brain just somehow represents that there are hundreds of identical Marilyns, and no matter how vivid your impression is that you see all that detail, the detail is in the world, not in your head. And no figment [Dennett's term for the metaphorical "paint" used to depict scenes in his Cartesian Theater - figmentary pigment] gets used up in rendering the seeming, for the seeming isn't rendered at all, not even as a bit-map.

The point here is that while we may think we see the Marilyns on the wall, and we may think that we have a qualitative experience to that effect (just like our qualitative experience of seeing red), this is almost certainly not the case. Instead, what is happening is that we have inferred, or judged that there are Marilyns all over the wall, and we have a very definite, certain feeling that we actually see these Marilyns. Sometimes we think we directly experience things that are right in front of our faces, but really we just conclude that we have experienced them. Our inability to tell the difference is intended to make qualophiles like myself uneasy. I think I am being fair to Dennett to characterize his claim as follows: we think that our direct experience is mysterious, but often it can be shown pretty straightforwardly that when you think you are directly experiencing something, really you are just holding onto one end of a string the other end of which you presume to be tied to this mysterious experiencing. Given this common and easily demonstrated confusion, it is most likely that all purported "direct experience" is like this, that all we have is a handful of strings. We never directly experience anything; we just judge ourselves to have done so.

Dennett also discusses the blind spot in our visual field. There are simple experiments that demonstrate that a surprisingly large chunk of what we normally think of as our field of vision is not actually part of our field of vision at all. We simply can not see with the part of our retina that is missing because of where the optic nerve leaves the eyeball. The natural, naive question is, why don't I notice the blind spot? The equally natural, and equally naive explanation is that the brain compensates by "filling in" the blind spot, guessing or remembering what should be seen in that region of the visual field, and painting (applying more figment) that pattern or color on the stage set in the Cartesian Theater.

Dennett is quite emphatic that nothing of the sort happens. There is no Cartesian Theater, so no filling in is necessary. There is no such thing as seeing directly, there is only concluding, so once you conclude (or guess, or remember) what should be in the blind spot, you are done. There is no inner visual field, so there is no need for inner paint (figment), or inner bit maps. We do not notice the blindness because "since the brain has no precedent of getting information from that gap of the retina, it has not developed any epistemically hungry agencies demanding to be fed from that region".

There are also easily performed experiments that demonstrate change blindness: the phenomenon whereby you can be shown a photograph, and then shown an altered version of the same photograph and be unable to spot the differences. Often the differences can be pretty dramatic, far more drastic than you might think could possibly go unnoticed. Once again, you think you really see the first photo in a lot more detail than you actually do, but it turns out that instead you merely judged that you had seen it. You nailed down a few major details, decided that you had seen it, and that was good enough. Your confidence that you really see something is misplaced - you only think you see.

Dennett describes experiments in which people were fitted with goggles that turned their entire field of vision upside down. While "comically helpless" at first, soon the subjects were able to ski and bicycle through city traffic while wearing the goggles. Dennett says, "…the natural (but misguided) question to ask is this: Have they adapted by turning their experiential world back right side up, or by getting used to their experiential world being upside down?" Dennett holds that this is simply a wrong question, and in fact, the more completely adapted the subjects of the experiment were, the more they reported that the question had no good answer. The experience of the visual field is inseparable from your use of it, your cognitive interpretation of it.

Imagine a room. Do not do it by reference to a real room with which you are familiar, make up a room you have never actually seen. Do so in as much detail as you can. Take as long as you like. Got it? Now - is there crown molding around the ceiling? If so, what kind? What is the millwork of the baseboards like, if it has any? What kind of latches are there on the windows? If you are like most people, you thought you visualized the room in pretty specific detail, but when asked pointed questions about it, you are distinctly aware of making up your answers on the fly. You didn't really see the room in your mind's eye in as much detail as you thought you did.

Things He Isn't Right About

"Direct" Perception Vs. Judgment

These are very interesting examples and point to an important problem in our conception of the distinction between experiencing and judging. Often when we think we perceive a whole lot of detail directly, what is really going on is that we have cognitive access to a whole bunch of detail on demand, if we (or any of the agents, or as Dennett calls them, demons, that comprise us) ask for it, accompanied by a fuzzy sense of perceiving the detail "directly". But Dennett is wrong to jump from that to the conclusion that we never really experience anything, that it's all (just) judging.

Materialists in general like to use optical illusions as examples. You thought you saw one thing, but it actually turned out to be another. Or even, your judgment about your perception itself turned out to be wrong, and if your judgment about your "direct" perception is fallible, well, that's the whole game, right? You certainly should not make sweeping metaphysical pronouncements based on something that could just be wrong. We can be mistaken in our judgments about our perceptions, but we can not be mistaken about having perceptions at all, or about making judgments at all, which are, after all, themselves more perceptions. The circle of direct experience may be smaller than we usually think, or it may have less distinct boundaries, but we can not plausibly shrink it to a point, or out of existence altogether.

It may be impossible to draw a clear distinction between experience and judgment, but this is because judgment is itself a sort of structured experience. There is no naive experience: our interpretations are part and parcel of our perceptions. It is interesting that Dennett never clearly and simply defines "judgment". Computers do not know, judge, believe, or think anything, any more than the display over the elevator doors knows that the elevator is on the 15th floor. All they do is push electrons around. Even calling some electrons 0 and others 1 is projection on our part, a sort of anthropomorphism. It seems as though I see all the Marilyns; Dennett says no, I merely judge that I see them. He is right to force us to ask ourselves how much we really know about the difference. He is wrong to think that the answer makes either one of them less mysterious, or more amenable to a reductive, materialist explanation.

It is kind of like the difference between having a map showing a place you need to drive to, and having a list of directions to that place. You can follow the directions, turning where they say to turn, without ever forming any overall conception of where you are or where you are going. If the directions are sufficiently elaborate, they can even tell you how to get back on track if you make a wrong turn. You can simply follow them, and never "put it all together" into any bird's eye, directional sense of where you are. Could it not be the case that even when we do have a sense of where we are, say, in the middle of our home town, that sense is an illusion, and all we really have is a really good set of directions for how to get any place we might need to go? When it comes right down to it, is there any real difference between "directly" perceiving something in all its detail on one hand, and having on-demand answers to any questions you might pose about that thing on the other? Could it be the case that we think that we have an immediate, all-at-once conception or perception of something, but all we really have is an algorithmic process that is capable of answering questions about that something really quickly, a just-in-time reality generator?

If I think I have a conception of something, say, a soldering iron, could it turn out that really there is nothing but an algorithm, a cognitive module in my head with specific answers to any question I could have about the soldering iron? At any point, in any situation, the algorithmic module would produce the correct response to any question about the soldering iron in that situation. How to use it, what it feels like, its dangers, its potential misuse, its utility for scratching my name with its tip into the enamel paint on my refrigerator. Such a module would serve as a just-in-time reality generator with regard to any experience I might have involving the soldering iron. It would consist of a bundle of expectations of sensory inputs and appropriate motor outputs regarding the soldering iron. To use the computer terminology, as long as the soldering iron module presented the correct API to the rest of the mind, wouldn't the mind be "fooled" into thinking that it had a qualitative idea of the soldering iron, when all it really had was a long list of instructions mapping input to output? How do I know I have a concept of the soldering iron beyond the ability to form a whole bunch of judgments about the soldering iron, given the difficulty of distinguishing between being conscious of something and merely making judgments about it? Is it possible that after all, I simply do not have any holistic, all-at-once conception or perception of the soldering iron? And is there really any difference between the two ways of characterizing our cognitions regarding soldering irons?

No, it is not possible, and yes, there is a difference. When I see the soldering iron, I really do see it. If I look at a white wall with three black circles painted on it, I see them all before me. Chalmers once asked, what is it like to see a square? What is it like to look at the well-known Necker cube? For that matter, what is it like to see the word "cat"? There is judgment, inference, interpretation, and cognition here. There are associations, memories, connotations, and all the rest of the cognitive baggage. There is also experience. The mystery here is how they all relate. To what extent are they really the same thing? To the extent that they are the same thing, what gives rise to the intuition that they are different in the first place? How is it that some things that take place in the mind are more experience than judgment or cognition, and other things are more judgment or cognition than "pure" experience? What is the sense that I see all the Marilyns if not itself a quale?

The difficulty with cleanly distinguishing between "directly" perceiving something and merely judging it to be a certain way (while having specialized modules for answering questions about it) is not limited to visual perception, or even to perception in the usual narrow sense. Nor is it limited to perception of the outside world. The same kinds of ambiguity exist with regard to our understanding of our own minds. I believe the sun will rise tomorrow. Do I really hold this single belief, or is it just a huge bundle of expectations and algorithms, each pertaining to specific situations or types of situations that I might find myself in? Any of the unitary things we naturally posit in our minds (models, images, memories, beliefs) could have some component at least of such a bundle of algorithms, or agents. For any such thing, what is its API to the rest of the system, really? How much can we really say about how it implements that API? Maybe I just infer somehow that I have a belief that the sun will rise tomorrow, but that "belief" is not nearly the short little statement written down somewhere that it seems to be. The articulation of the belief could, as Dennett suggests of all of our articulations, be the result of some kind of consensus hammered out by lots of demons or agents. Nevertheless, the sense that I have such a belief is real, and unitary, even if the belief itself is not. Frankly, I don't know right now what a belief is, or what a judgment is. Until someone convincingly gives an account of these things, it rings a little hollow to dismiss qualia as "merely" complexes of judgments or beliefs.
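The "bundle of algorithms behind an API" picture I have been gesturing at can be made concrete with a toy sketch. Everything here is hypothetical and of my own invention - the class name, the canned rules, the query method - but it shows the shape of the worry: a module that answers any question about the soldering iron purely by lookup presents exactly the same face to the rest of the mind as a "real" holistic conception would.

```python
# A toy "just-in-time reality generator" (all names hypothetical).
# From the outside, the rest of the mind sees only the query() API;
# it cannot tell a lookup table from an all-at-once conception.

class SolderingIronModule:
    """A bundle of canned judgments standing in for a concept."""

    def __init__(self):
        # A finite table of (situation, question) -> judgment pairs.
        self._rules = {
            ("idle", "what is it?"): "a tool that melts solder",
            ("idle", "is it hot?"): "not until you plug it in",
            ("plugged in", "is it hot?"): "yes - do not touch the tip",
        }

    def query(self, situation, question):
        # Produce the "correct response to any question" on demand.
        return self._rules.get((situation, question), "no judgment available")

mind = SolderingIronModule()
print(mind.query("plugged in", "is it hot?"))
print(mind.query("idle", "can I juggle it?"))
```

Whether having such a module exhausts what it is to have a concept of the soldering iron is, of course, exactly the question at issue.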

Strangely Swimming Conscious Demons

When I walk into a room I may not consciously notice each of the fire sprinkler heads mounted on the ceiling. Do I see them? Even after a good look around, I would likely flunk if quizzed about their exact number or arrangement, even though I feel as though I have seen the whole room, in all its detail. Dennett says that this feeling is illusory. I choose to say that the sprinkler heads do not intrude, as it were, on my consciousness because insofar as I care, there is nothing about them that should surprise, interest, or concern me. I've noticed them - if I had never seen or heard of a sprinkler head before, within a very few seconds upon entering the room they would command my full attention - but I've written them off at a relatively low level of perception. At some point in my life, I've noticed them, thought about them, stared at them during dull staff meetings, convinced myself that I more or less understand them - in effect, built a sprinkler head recognition agent. When I enter and scan a room, this agent is awake, active, but quiescent. Nevertheless, it contributes in some admittedly poorly understood way (by me at least) to where I'm at, consciously.

I have an overall sense that I see and comprehend the room. If I had the mind of a dog, I might still have a sense that I see and comprehend the room, even though the sprinkler heads never registered at all on any level whatsoever. My dog mind has no sprinkler head recognition agents. Nor does it have any particular curiosity about details it does not recognize (no epistemically hungry agencies, to use Dennett's term). My human sense that I see the room and my satisfaction that I understand it are quite different from the dog-mind's sense, even though in the end we are both satisfied that we see and understand it. I see and understand insofar as I care, have ever cared, or could imagine caring about whatever it is I am looking at.

My own speculation is that the epistemically hungry agencies are conscious. Some are relatively permanent, some are constituted by constantly shifting, waxing and waning coalitions of other agents. The sprinkler head recognition agent feels quite clever, that it has made a really creative leap here - it has never seen these particular sprinkler heads, in this light, from this angle, in this context, yet it declared them to be sprinkler heads. It is always thinking about sprinkler heads, and always looking for them. It is always trying to see sprinkler heads.

So how do these "agents" stack? How do "lower" ones get incorporated into "higher" ones until they all get subsumed by the one at the top, the tip of the pyramid, the consciousness that is me? On this last question, Dennett is right. There probably is no tip of the pyramid.

When I look at my living room, I seem to have a certain sense that I see it before me in all its colorful, varied entirety. What is the connection between this "certain sense" and actually seeing it? My sense of seeing it is not an opaque ability to answer questions - I don't feed demands for information into a black box and get information back. It may well be, as Dennett says, that a pandemonium of demons (couch demon, rug demon, lots of other, more abstract demons concerned with context and associations) in some way contribute to my overall comprehension. Moreover, it may well be the case that this "overall comprehension" just is the pandemonium itself, not some master demon, or some Central Meaner. Maybe later, if asked what was going through my mind, the "I was comprehending my living room" demon may be overruled by the "I was worrying about my property taxes" demon. Maybe I was comprehending the living room, but come to think of it, I was paying special attention to the drapes. Or was I? Maybe any of the demons could make a good case that they were the whole point, the where-it-all-comes-together. From each demon's point of view, it is right. We have lots of seats of consciousness in our minds.

If all of the demons are conscious to some degree or another, if that term is to have any meaning at all, then there are some consciousnesses that never manifest themselves distinctly in any kind of a master narrative of "what was going through my mind". Perhaps some of them are evolutionary dead ends in the pandemonic Darwinian jungle that is my mind. Maybe some of them don't even nudge any of the others above the level of random noise or jitter, even though, for their possibly quite brief existence, they were conscious. There was something it was like to be them.

At one point (pp. 132-133) Dennett speaks of the impossibility of nailing down what you are conscious of and when you are conscious of it. He rightly points out that in many situations there is no good answer to the questions of exactly what you are conscious of and when:

We might classify the Multiple Drafts model, then, as first-person operationalism, for it brusquely denies the possibility in principle of consciousness of a stimulus in the absence of the subject's belief in that consciousness.

Opposition to this operationalism appeals, as usual, to possible facts beyond the ken of the operationalist's test, but now the operationalist is the subject himself, so the objection backfires: "Just because you can't tell, by your preferred ways, whether or not you were conscious of x, that doesn't mean you weren't. Maybe you were conscious of x but just can't find any evidence for it!" Does anyone, on reflection, really want to say that? Putative facts about consciousness that swim out of reach of both "outside" and "inside" observers are strange facts indeed.

Yes, yes they are, but there it is. There are, in fact, consciousnesses within my skull that swim out of reach of any demon or collection of demons that might generate utterances or typings about what "I" am or was conscious of at any particular time. This should not seem odd, frankly, even to a reductive materialist. However you define consciousness, assuming you find any use for the term whatsoever, why is it impossible, or even unlikely, that the submodules and sub-submodules that comprise my mind might themselves individually qualify as conscious? And if they do qualify as conscious, they might not all necessarily be patched into any larger consciousness, or feed into any higher level of consciousness. Of course the ones that do are probably more interesting to us, and how exactly they feed in is a subject for further speculation. And perhaps some of them spin off on their own until asked a certain way, or until the right kind of slot opens up for them to contribute their bit. Recall Dennett's compelling image of constantly shifting coalitions of demons. So it should not seem silly or bizarre that, in some sense, I was conscious of a stimulus but didn't know it. Or perhaps the "I" that reports on such things did not know it, or know it in the right way.

Dennett is right - the single continuous self is illusory, a virtual machine implemented on a parallel architecture. He is wrong, however, in thinking that this explains consciousness or dissolves its mystery. Far from it.


Beyond the Cartesian Theater: More Better Models and Metaphors

Before we can have a theory about anything, a theory that makes quantitative, falsifiable claims, we need some kind of model. A model is less rigorous than a theory, really just a way of thinking about something. Climbing down the ladder of rigor still another rung, we get to metaphors. A metaphor is just a turn of phrase. Like a model, a metaphor is a way of thinking about something, but unlike a model, which presents itself as complete and merely lacking detail, a metaphor is completely flimsy: look at it from the wrong angle, or push just a little too hard, and the whole thing collapses.

Right now we have no idea how to think about the mind, but we have to think of it in some way. We can't help but think in terms of metaphors, and if we choose the wrong ones, we end up channeling our thinking in certain directions and not in others, making assumptions without justification and overlooking whole classes of possibilities. No matter how well we know that we are speaking loosely, our mental images and metaphors have a creeping influence on our subsequent speculation. As you speak, so you will think. Moreover, the more deeply you want to go with your investigations and speculations, the more this kind of thing matters. You can have a somewhat wrong model in mind and still come up with perfectly accurate empirical predictions. When you are trying to rethink things in a fundamental way, however, when you are trying to make conceptual leaps and nail down what might be true, what must be true, and what could not possibly be true about the really big things like consciousness, the wrong metaphor, or simply the lack of the right one, can really derail you.

What we would like is a large and varied repertoire of metaphors to draw from, with some awareness of each of their limitations. We've already seen the Cartesian Theater, which Daniel Dennett (Dennett 1991) is right to dismiss as a valid metaphor for the mind. He is also right in saying that it is very difficult to exorcise it from our thinking about minds. Here I would like to discuss some models that I find fruitful, with caveats. This all gets a bit speculative, but not necessarily in the hardcore metaphysical sense, more in a meat-and-potatoes, easy-problem sense. The goal here is to come up with images and words we can use when applicable, but with our eyes open, so that eventually we may be able to come up with an actual theory that is cognitively, phenomenologically, and biologically plausible.

Computers

These days we can't help but think of the mind in terms of computers. In spite of everything I have written about consciousness, I still do it all the time, and I even think it is often useful to do so, as long as you are aware of the problems with it. The computer is simply the dominant metaphor for minds (and lots of other things as well) just as the clock and the steam engine were dominant metaphors in eras past. (A pretty good discussion of the ascendancy of the computer model of the mind can be found in P. Cisek's article in the Nov/Dec 1999 issue (Reclaiming Cognition) of the Journal of Consciousness Studies.)

In a sense, the more you actually know about how computers work, the longer the period of intellectual detox must be. You just can't help phrasing things in terms of live processor vs. dead data, a centralized point of control, search and sort algorithms and the like. I really like Dennett's pandemonium/fame model of mind as an antidote to this (more on this later). Dennett makes the point that the serial von Neumann type machine (a fancy way of talking in an idealized, abstract sense about "the computer" as we understand computers) is highly artificial even in our own minds. That is, our minds, as they evolved, did not start out thinking that way. Serial, von Neumann type thinking was a great trick, but one we probably learned relatively late in the evolutionary game - it is not the natural architecture of minds. As Dennett puts it, our mind is a serial machine simulated on a massively parallel substrate. It is this type of linear, symbolic thought that gets presented to us when we try to introspect "how we think". We went on to build computers to mimic this naive impression we have of our own thought processes, and obviously it has worked out quite well for us in lots of ways. But to imagine that we can capture the essence of thought itself that way is a beguiling mistake.

Let me say up front that I am aware that I am putting up something of a straw man argument against computers here. There are many, many different computer languages, and many, many different computing environments, and I am lumping everything together as if it were the 1950's, and "computer" means straightforward, old-school linear programming, of the type we deal with in languages like C or assembly language. I'm old-school - this is largely how I think of computers. At their core (or, these days, their cores), computers darn well are straightforward, old-school machines. In such a device, there is a linear thread of control, laid down by the human programmer. Control may pass to a subroutine, which may do all kinds of complicated things, including passing control to still other subroutines, but when it is done it issues a "return" statement of some kind, and control pops back up to the calling code. (Yes, it really is called popping. Popping the stack, as a matter of fact).
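That linear thread of control can be made vivid with a toy simulation (my own illustration, not drawn from any real machine): calling a subroutine pushes a frame onto a stack, and returning pops it, handing control straight back to the caller, every time, without exception.

```python
# A toy simulation of the call stack: subroutine calls push a frame,
# "return" pops it, and control always goes back up to the caller.

call_stack = []
trace = []

def call(name, body):
    call_stack.append(name)          # push: control passes to the subroutine
    trace.append(f"enter {name}")
    body()                           # the caller is suspended until this finishes
    trace.append(f"return from {name}")
    call_stack.pop()                 # pop the stack: control goes back up

def low_level():
    trace.append("doing low-level work")

def high_level():
    call("low_level", low_level)     # delegate, then dutifully resume

call("high_level", high_level)
print(trace)
# ['enter high_level', 'enter low_level', 'doing low-level work',
#  'return from low_level', 'return from high_level']
```

Note the perfect nesting: the low-level routine is bound to do exactly as it is told and hand control back. Nothing in the mind, I will argue below, is quite so obedient.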

I assume that I need not go into too much detail as to why the computer is such a compelling metaphor. It certainly seems as though they do a lot of the stuff that minds do; after all, we designed them to. They process information, they remember stuff, they seem to know stuff in some sense or another, and you can program them to do seemingly any cognitive task you can imagine, if you just have enough memory and CPU cycles. I remember, upon first learning to program, feeling the same confidence that the early researchers in the 1950's felt that I should be able to whip up a functioning AI.

There are, however, some problems with computers. A computer is the quintessential functional device, with each module defined in terms of its producing the right outputs given the right inputs in the right state, and all the modules being integrated in a purely causal fashion. I've already spoken about functionalism, and then there is the whole Hard Problem what-is-it-like-to-see-red thing. But there are other reasons why a computer, at least a traditionally programmed one, is not quite the model we are looking for.

How Could A Computer Be More Like A Mind?

So before we dismiss computers altogether (and I emphasize again that dismissing them for good would be nearly impossible as well as counter-productive, since we will always be drawn to the extensive vocabulary of computers when talking about minds) let's think about them for a bit. Ignoring for the moment all of the what-is-it-like-to-see-red arguments, why are computers not like minds? In particular, if we were to design a brand new computer language, or a computer environment (like an SDK - Software Development Kit) that would allow people to tinker with AI, what would it look like?

For starters, the units of processing (threads/routines/processes) would be able to connect with each other with a higher probability of producing something coherent than today's computers would. That is, a computer executing a program walks a knife edge of coherence surrounded by a sea of chaos. Drop one bit, or walk off the end of an array, and you are simply done (General Protection Fault, app quit unexpectedly). Computers are very chaotic systems (neighbors do not map to neighbors) - change one thing about a computation and you get drastically different results. This is not generally how organic things work, including brains. I think that if we were to create a development environment/language for true AI, the fundamental bricks out of which the system created algorithms would be hardier. They could knock around, invoke or connect with each other if not randomly, at least somewhat randomly, with a greater probability of creating something that would run.
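The knife-edge brittleness, and what a "hardier" alternative might look like, can be sketched in a few lines. This is my own toy contrast, not anyone's real architecture: a conventional exact-key lookup dies on a one-character perturbation, while an associative lookup (here faked with the standard library's fuzzy string matcher) degrades gracefully instead of crashing.

```python
import difflib

# Brittle vs. hardy lookup: a toy contrast of my own devising.
memory = {"sprinkler": "fire suppression", "soldering": "joining metal"}

def brittle_lookup(key):
    # One wrong character and the whole computation is simply done
    # (KeyError - the software equivalent of a General Protection Fault).
    return memory[key]

def hardy_lookup(key):
    # Degrade gracefully: connect to the closest stored key, if any.
    close = difflib.get_close_matches(key, memory, n=1, cutoff=0.6)
    return memory[close[0]] if close else None

print(hardy_lookup("sprinkler"))     # exact hit
print(hardy_lookup("sprnkler"))      # one dropped letter still connects
```

In the brittle scheme, neighbors do not map to neighbors; in the hardy one, nearby inputs land on nearby outputs, which is at least a gesture toward how organic systems behave.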

Building on this idea, it would be nice if our computer system were more non-destructively, or even constructively, distractible than computers generally are. Mental history is written by the mental winners. Thought only seems logical, rigorous and inevitable in retrospect. As thoughts actually develop, they are marked by constant distractions. Some are dismissed, some are pursued, and some fall in between, nudging or coloring other threads of thought. We are interrupt driven, and interrupts come not only from outside, but from inside as well. When we drill down or call a subroutine, we might take off in a whole new direction, and never return (or never quite return) to the original thread. Somehow, "low-level" routines can derail the "high-level" routines, but not willy-nilly. The system works, after all, and I can drive to work and bake cookies and not forget what I am about. Somehow, in minds, there isn't this distinction between low-level routines and high-level ones, such that high-level routines call the low-level ones, which are bound to do exactly as they are told and return to the caller. High-level and low-level in minds is a bit of a two-way street. This suggests that the mind is more like a fractal - patterns get applied at high and low levels, and there is no such thing as a low-level utility routine, only patterns that can be applied at any level. Or even, perhaps, patterns that apply themselves at any level. It might be a good exercise to try to envision how computation might work freed from the tyranny of the "return" statement.
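Here is one way to begin that exercise, as a toy of my own invention: replace the call stack with a pool of tasks that can spawn, preempt, or abandon one another. A "low-level" step can divert the whole computation, and the original thread may simply never be resumed - there is no caller patiently waiting for a return.

```python
from collections import deque

# Computation without a call stack: tasks spawn and preempt each other.
# (A toy scheduler, my own invention, not a real concurrency framework.)

tasks = deque()
log = []

def bake_cookies():
    log.append("baking")
    tasks.append(check_oven)         # delegate, but do NOT wait for a return

def check_oven():
    log.append("oven check")
    tasks.appendleft(smoke_alarm)    # a "low-level" event jumps the queue

def smoke_alarm():
    log.append("alarm! abandon baking")
    tasks.clear()                    # the original thread never resumes

tasks.append(bake_cookies)
while tasks:
    tasks.popleft()()                # no stack, no popping back up

print(log)  # ['baking', 'oven check', 'alarm! abandon baking']
```

Nothing here is high-level or low-level by fiat; any task can redirect the whole system, which is at least a little closer to the interrupt-driven, two-way-street picture sketched above.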

It also seems to me that for a computer to behave like a mind, it would have to possess memory in a fuller sense, not just memory of the type that computers already possess. Computers store numbers in memory in a very deliberate way, treating memory like a chest of drawers. Put the red socks in this drawer, take them out of that drawer later. In a computer that acted like a mind, however, the machine would actually maintain the vapor trails of past thoughts themselves. This sounds a little like a CPU cache, but whereas the cache is invisible to the software by design, this would not be. Past thoughts, including their flailings and dead ends, would be stored as objects that can later be held up to the light and examined. It is misleading to think of the mind as a tiny spark of active CPU that accesses opaque memory by address, plucks the datum stored there, and loads it into a register. The way a computer uses memory is like a warehouse filled with crates, with a big overhead claw that can grab any crate and carry it to a central platform. That's just not the way memory and mind work at all.
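A crude gesture in this direction is easy to sketch in Python: a decorator that records every invocation of a "thought", dead ends included, as an object that can later be examined. The names (trail, remember) are illustrative inventions, and real vapor trails would of course be far richer than an argument/result log.

```python
import functools

trail = []   # the growing sediment of past thinking

def remember(fn):
    """Record every invocation: inputs, outcome, and whether it failed."""
    @functools.wraps(fn)
    def wrapper(*args):
        record = {"thought": fn.__name__, "inputs": args}
        try:
            record["result"] = fn(*args)
        except Exception as exc:
            record["dead_end"] = repr(exc)   # flailings are remembered too
            trail.append(record)
            raise
        trail.append(record)
        return record["result"]
    return wrapper

@remember
def divide(a, b):
    return a / b

divide(6, 3)
try:
    divide(1, 0)          # a dead end that gets recorded, not erased
except ZeroDivisionError:
    pass

# Later, old episodes can be held up to the light:
dead_ends = [r for r in trail if "dead_end" in r]
```

Unlike a CPU cache, nothing here is invisible to the system itself: the trail is first-class data, available to subsequent "thoughts".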

In a mind, as new ways of thought are cobbled together with routines, with threads or submodules invoking each other, the pattern of linkages is abstracted from the particular problem at hand and stored as a whole new invocable algorithm. Somehow, what is stored must not be just the beginning state and the ending state of a particular function or transformation, but the process of the transformation itself, or what it is like to perform it. To mimic this, we would have to blur the distinction between algorithm and data, so that our system works with blobs of algorithmic data stuff. In fact, I think that there is likely no distinct line we can draw in actual minds between data, algorithm, and the execution engine (CPU) itself. In a mind, data gets applied to new situations; data gets applied to other data; algorithms get applied to each other; algorithms are treated as raw material for other algorithms. A mind gets to see an algorithm all at once, to comprehend it, which is a far different thing than merely executing it. We humans, for instance, usually have a much easier time with the Halting Problem than computers do. The hard and fast distinction between memory, data, and CPU is the most insidious effect of using the computer as a metaphor for the mind.
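Languages with first-class functions already allow a faint echo of this blurring, and a few lines of Python can show the flavor. The names here are invented for illustration; the point is that the same object serves as procedure in one moment and as data in the next, and can even be inspected rather than executed.

```python
def twice(x):
    return x * 2

def compose(f, g):
    """An algorithm built from other algorithms, treated as raw material."""
    return lambda x: f(g(x))

# `twice` used as a procedure:
eight = twice(4)

# `twice` used as data, handed to another algorithm:
quadruple = compose(twice, twice)

# An algorithm examined as a datum rather than run: here its bytecode size,
# a crude stand-in for "seeing it all at once".
footprint = len(twice.__code__.co_code)
```

Even so, this is a pale shadow of what the text describes: the execution engine itself remains rigidly separate, and the "comprehension" is only a byte count.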

Dennett's Pandemonium

In Consciousness Explained (Dennett 1991), Daniel Dennett proposed a so-called pandemonium model that partially echoes Marvin Minsky's Society Of Mind idea (Minsky 1985). It is a good one, worth quoting here:

There is no single, definitive "stream of consciousness" because there is no central headquarters, no Cartesian Theater where "it all comes together" for the perusal of the Central Meaner. Instead of such a single stream (however wide), there are multiple channels in which specialist circuits try, in parallel pandemoniums, to do their various things, creating Multiple Drafts as they go. Most of these fragmentary drafts of "narrative" play short-lived roles in the modulation of current activity but some get promoted to further functional roles, in swift succession, by the activity of a virtual machine in the brain. The seriality of this machine (its "von Neumannesque" character) is not a "hard-wired" design feature, but rather the upshot of a succession of coalitions of these specialists.

The basic specialists are part of our animal heritage. They were not developed to perform peculiarly human actions, such as reading and writing, but ducking, predator-avoiding, face-recognizing, grasping, throwing, berry-picking, and other essential tasks. They are often opportunistically enlisted in new roles, for which their native talents more or less suit them. The result is not bedlam only because the trends that are imposed on all this activity are themselves the product of design. Some of this design is innate, shared with other animals. But it is augmented, and sometimes even overwhelmed in importance, by microhabits of thought that are developed in the individual, partly idiosyncratic results of self-exploration and partly the predesigned gifts of culture. Thousands of memes, mostly borne by language, but also by wordless "images" and other data structures, take up residence in an individual brain, shaping its tendencies and thereby turning it into a mind.

I have already argued against adopting this model as the truth, lock, stock, and barrel, but I like aspects of it. I tend to agree with Dennett that thought is a lot less linear and single-threaded than we think it is, and that there are a lot of competing/cooperating specialist circuits at work. Dennett evocatively calls these "demons". This actually hearkens back to the idea of computer processes, certain types of which are called "daemons" in UNIX and Linux.

I accept that there could be lots of demons in my mind, perhaps demons that entirely make up my mind. It is certainly a great metaphor. But again, we can't let it limit us, and we must think hard about the ways the things in our minds might not be like individual, you know, demons, or anything subject to the constraints that actual biological beings might live and die under. How many demons are there? What delineates a demon? How do new ones come into existence, and do they ever die? To what extent do they compete, and what happens to the losers? To what extent do they cooperate? Some people use terminology that suggests that the mind is a "Darwinian memosphere". Does natural selection work on demons in the same way that it does on species? Can demons mate and produce offspring? Or can they simply merge together like Voltron? Do individual demons change over time, or adapt?

Dennett uses the term "coalitions" (there's that parliamentary thing again). After allying themselves to accomplish something, does the coalition fall apart into exactly the same set of individual demons that went into it, or does the Voltron/coalition demon get to stick around, added to the menagerie along with its component demons? Perhaps a demon can sort of will itself to have new powers, unlike biological beings. Maybe they reproduce in a way that is more like mitosis than sexual reproduction. To what extent do patterns of relations between demons harden and become new demons themselves?

What do demons want? To the extent that they compete, what resources are they competing for? How, if at all, do they stack, or nest, or apply themselves to one another? What are the channels of interaction between them? Do they all have mutual visibility, as if they are sitting in a huge stadium watching each other? Does any one demon, or coherent coalition, hold the floor at any given time? It may well be the case that the channels of communication are not instantaneous, and not all global, and that the longer a given signal sticks around, the more broadly it gets propagated. Stability and consistency of a signal have a way of focusing the mind. Are the signals themselves between the demons just more demons? If not, how should we think about them? Or is it really more like a jungle, in which demons happen across one another from time to time, interacting sporadically?

Phenomenological Plausibility of Demons

Why does this whole demon/memosphere idea seem even vaguely plausible in the first place from a first-person point of view? As we go about through our lives, we have a sense that our mind is not monolithic, that there are parts of it working away offline. Not only do results of these offline computations pop into our main stream of consciousness, but there is a definite sense, in me anyway, of a whole train of thought, in all of its what-it-is-like-to-see-red glory, being plugged into whatever else I was thinking about or experiencing. Such trains of thought come complete with a sense that they didn't just come into existence at the moment "I" became aware of them, but that they had been developing on their own for some time.

Have you ever been listening to an oldies station, and heard a song that you have not heard in years or decades, but had the distinct sense that the very same song was going through your head sometime in the past week? Of course you have. I have had this sense suspiciously often. Often enough, in fact, that I have a hard time believing that I actually just happened to be replaying all those songs consciously in my memory in the few days before I heard them on the radio.

Now of course this sense could be an illusion. As with deja vu, I could be misremembering, mis(re)constructing my own mental history. But let's go with this for a moment. This palpable sense of past mental history that gets retroactively grafted onto your "main" consciousness makes a lot of sense if your consciousness is made of semiautonomous demons. I think that all of my song memories are playing all the time, but "I" am not aware of them. And if song memories work this way, what other memories are on hot standby? Is there a "Dancing In The Moonlight" demon, who just sings that song all the time, forever until you die? I can't rule it out. It may be that all of our old moments of consciousness are still in there, as some kind of standing waves.

Antimemes

How do you ever get a thought in edgewise, with all these demons singing? Not to mention the ones thinking, remembering, and sensing your shoes through the soles of your feet. I suspect that you (or perhaps I should go with the scare quotes, "you") tune them out. Like the jackhammer outside your window, it's not as if the demons go away or stop, but after a short while they just don't impinge upon "you" anymore, unless it would be a good idea for them to do so. But what are the forces that determine how good or bad an idea it is for them to seize the stage (there's that Cartesian theater again)?

I think it is entirely likely that all of the demons are conscious to some extent or another, whether or not "I" am conscious of what they are conscious of. I am legion and I contain multitudes. I know that some consciousness happens, but I don't necessarily know how much more consciousness happens that "I" don't (need to) know about. At one time it took a lot of concentration for me to tie my shoes, but now I could almost do it in my sleep. I constructed a tying-shoes demon, and when I tie my shoes, somewhere in my mind, it is hard at work, concentrating like mad (although I can willfully focus my attention on the act of shoe tying and make it conscious).

Active percepts, not just past memories, are demons. You tune out the actual shapes of the trees on the side of the road as you drive to work each day, the colors of the houses on your street, etc. Your eyes pick up all these details (that is, the corresponding photons do actually strike your retinas), and somewhere there is a perceptual demon who, according to this way of thinking, is exquisitely conscious of all that stuff, but "you" aren't aware of it, unless there is a conscious effort at attention to such details.

I have a strong suspicion that a great deal of the mind's activity is inhibitory. We spend an awful lot of effort shutting down streams of information, and channeling activity, blocking and constraining. If you sense another metaphor coming on, you are right. It strikes me that, to borrow an image from the memeticists, the mind is like an organism under constant assault by viral memes (demons). We tune out the singing demons by quickly developing antibodies to them. If the "Dancing In The Moonlight" demon sings the same song in the same way for too long, we jam the signal by installing a counter-signal, a counter demon. We handicap; we compensate. It doesn't stop, but we accommodate it by adjusting for its constant presence. And of course, even though I speak of singing demons, this goes for the remembering-my-childhood-cat-Fluffy demon as well, and the demons that notice the trees along the highway. The demon and its meme-jamming anti-demon are locked in a self-canceling embrace forever, leaving the mind as an intricate balance of tensions, like a bicycle wheel.
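The demon/anti-demon dynamic can be caricatured in a few lines of Python: each repetition of the same signal strengthens a standing counter-signal, so its net salience fades, while a novel signal lands at full strength. The class and method names here are my own illustrative inventions, and the numbers are arbitrary.

```python
from collections import defaultdict

class Pandemonium:
    def __init__(self):
        # One standing anti-demon per signal, strengthening with repetition.
        self.inhibition = defaultdict(int)

    def broadcast(self, signal, strength=10):
        """Return net salience after the anti-demon does its jamming."""
        net = max(0, strength - self.inhibition[signal])
        self.inhibition[signal] += 4    # the counter-signal hardens
        return net

mind = Pandemonium()
jackhammer = [mind.broadcast("jackhammer") for _ in range(4)]
# [10, 6, 2, 0] - the demon keeps singing, but "you" stop hearing it
doorbell = mind.broadcast("doorbell")   # a novel signal lands at full strength
```

Note that the jackhammer demon never stops broadcasting; it is the accommodation, the standing counter-signal, that silences it, just as the text suggests.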

Each of the demons wants to dominate, to grab center stage. In order to maintain a thought or percept at all, you must be good at developing and maintaining antimemes, of countering all of the different assaults and distractions from the different demons. Imagine the little Dutch boy of the fable, with a million fingers in a million holes in the dyke. Or, you wait until the whole system reaches some kind of equilibrium for some amount of time. If some demon really wants to apply itself, maybe you just let it.

This idea of demons/antidemons (or memes and antimemes) respects a couple of ideas. Most importantly, it respects a sense of holism in the mind. The mind, according to this conception of it, really is one unified thing, with a balance of tensions keeping much of it more or less inert at any given time. This image helps make sense of what Ned Block calls perceptual overflow, those conscious-but-not-conscious scenarios people have devised over the years: the ticking clock you are not aware of until it stops, the pattern of the design on the carpet, the sensation of your socks against your ankles. Your "peripheral" awareness of such things is in there, and part of your overall conscious field, but neutralized by an antimeme.

Carried further, it may well be the case that our whole mind, every sensation and even every memory, is just like that: always right there, as part of your all-at-once now, but tuned out. The cacophonous demons are not off somewhere, each in its own soundproof room. They are all there, all the time, fully patched in. We actively exert ourselves to cancel them out, and this exertion is a collective exertion, performed by other demons: the mind as a self-policing pandemonium.

Darwinian Memosphere of Demons

I like the idea of active demons, even if most of them live in the shadows most of the time, rather than the mind as presiding over a mountain of static data. The mind is clearly great at parallel processing, but even that understates the situation, I think. When we are given a fact, say, that contradicts something we know, even somewhat indirectly, it is remarkable how quickly we notice. If we learn something new and surprising about cars, it is hardly plausible that we serially run through all the thoughts and memories involving cars (individual cars as well as general car knowledge) in our minds and adjust them accordingly. Facts, memories, are recalled as needed, as if by magic. It certainly seems as if the old fact jumps up, as if offended, to take on the newcomer. Old thoughts are less like dead data waiting to be accessed, searched, sorted, or applied, than like little sparks of mind themselves, capable of asserting themselves.

I suspect that there is no single burly stagehand guarding the stage, screening demons who want their moment in the spotlight. Demons can jump on and apply themselves to any detail of a new or developing stimulus (thought or percept) that catches their fancy. In this way, they get to flesh out the "focused-upon" detail more fully. However, over-eager demons get smacked down. Demons can jump on the stage, applying themselves whenever they want, but there is a cost. If they are just spamming, grabbing the spotlight when they have nothing to contribute, they either strengthen the counter-demon response and get tuned out extra in the future, or they get corrupted somehow. In order to survive intact over the long term, demons must tiptoe through the minefield of existing demons on the way to the stage without stepping on anyone else's tail or hoof.
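A toy version of this cost mechanism is easy to write down. In the Python sketch below (all names invented for illustration), any demon may lunge for the stage, but a demon whose fit to the current stimulus is poor has its future suppression increased: crying wolf strengthens the counter-demon response.

```python
class Demon:
    def __init__(self, name, interests):
        self.name = name
        self.interests = set(interests)
        self.suppression = 0          # how hard the others tune this one out

    def fit(self, stimulus):
        """Raw relevance to the stimulus, minus accumulated suppression."""
        return len(self.interests & set(stimulus)) - self.suppression

def take_stage(demons, stimulus):
    """Every demon may grab for the spotlight; bad fits pay for it later."""
    winner = max(demons, key=lambda d: d.fit(stimulus))
    for d in demons:
        if d is not winner and d.fit(stimulus) < 0:
            d.suppression += 1        # spamming hardens the anti-demon
    return winner

cat_demon = Demon("fluffy-memory", {"cat", "yellow", "childhood"})
song_demon = Demon("dancing-in-the-moonlight", {"song", "moonlight"})
song_demon.suppression = 2            # it has been spamming lately

winner = take_stage([cat_demon, song_demon], {"cat", "photo"})
```

Run repeatedly, a mechanism like this pushes chronic spammers toward permanent obscurity while leaving well-behaved specialists untouched.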

You know how sometimes you remember an event from the distant past, and you are not sure if you are actually remembering the event or remembering your subsequent remembering of it on other occasions? Your memorable recalling of it in the past has effectively jammed the original memory. Any toehold or reference tag that would have triggered the original memory will also now trigger the memory of the memory. The original has been masked. Was Fluffy yellow? I always thought of him as yellow. But Mom has a photo and he's black. Oops. If you have to bribe too many stagehands to get onstage, you may find that you have nothing left once you get there. Demons who cry wolf get ignored later (or countered more vigilantly).

I speculate that different demons have different niches in the memosphere. Some are swaggering alphas that apply themselves broadly and promiscuously to whatever processing needs doing, while some are rarely seen, and just stay in their tiny niche, with very specific criteria for activation. According to this notion, a swaggering alpha's identity may be so smeared out and indeterminate that it hardly has an identity left, just the barest shape of one, a tone or coloring it can impart. (The notions of causation or object permanence might be such demons.)

At the other end of the spectrum, the den-dwelling, seldom-seen demons get to keep their specificity in sharp detail (like specific episodic memories, or particular skills). Perhaps the alphas are more appropriately seen as eager beavers, willing to trade quality and specificity for sheer quantity and frequency, whereas the den-dwellers make the opposite call. As in nature, different demons employ different strategies and make different evolutionary compromises, until just about every conceivable niche is filled.

Just as it may weaken or corrupt a demon to apply itself overly broadly, demons may be similarly insulted by allowing other, incompatible demons to pass onto the stage. There may be, for instance, a demon that enforces or embodies what we think of as a valid chain of logical inference, and it will not tolerate the ascent of another demon that violates its criteria for a valid chain of inference. To allow such a thing would be to make it less likely that the valid-inference demon would be allowed to ascend in the future. In this way, demons collectively constitute rules or constraints on each other. A truth I am certain of, or perhaps even a symbol I know how to interpret, may be a rock-solid demon that will simply always win competitions with other demons.

What about the mind's current state makes it enticing for the next demon or coalition of demons to make a play for the stage? What sorts of current thoughts create a hospitable niche for subsequent thoughts? I suspect that the answer is far from deterministic, or rather, that it is chaotic: you never know what details or seemingly unimportant aspects of a thought or percept will grab hold of a demon's fancy and take you in a whole new direction. In particular, it is not necessarily the overall big idea or perceived direction a current thought is going in that subsequent thoughts hook into, but those aforementioned distractions, even if most of the distractions do not go anywhere interesting and wind up being dead ends. Moreover, I think that demons do not necessarily have a preference for high-level deployment as opposed to low-level filling out the detail of some thought or percept - they just like a good fit.

So: the demons that are maximally compatible with all the existing demons are allowed to take the stage. Each new moment of consciousness is new and unique, however. So it is not the case that old demons simply get to relive their glory days in the spotlight; more likely they get to inform the creation of a new demon - they get to be the primary parent, or chief architect. Each incumbent demon is like a craftsman, or a specialized muscle that shapes a new demon. Although some are more like specific memories, some are more like general facts or general strategies. Some are more algorithmic/prescriptive, and others more data/descriptive, on a sliding scale. Each has a bit of "what is it like?" and each has a bit of "what does it do?". Indeed, it is hard to separate the two aspects. When I drive down my street and see an object in my field of vision that ultimately resolves to "house", it is probably not the case that my "house" demon simply grabs the spotlight; more likely it helps spawn a new moment of consciousness, a new, yet distinctly housey demon.

We often speak of our minds containing models: models of reality, models of self, models of my cat, etc. What sense can we make of such talk if our minds are constituted by demons? Are there models at all, if each new moment of consciousness is whipped up on the fly dynamically? I feel comfortable saying yes. Any model is a black box with an interface. You ask certain questions in the right way, and the model gives you consistent answers. A model may be implemented by a static table of bits or a database, with a relatively mechanical query engine, or it may be implemented by a raucous parliament. Our "models" may not be as model-like as we suppose.

This pandemonium image blurs the distinction between immediate sensations and memory, which to my way of thinking is one of its virtues. Memory is smarter and more active than is generally supposed. Memories are not in cold storage, off in a file cabinet, but right in your mind now, pressing on your consciousness.

Finally, this image bolsters an intuition that the ability to get bored is essential for intelligence. As you are assaulted by the same thoughts, you tune them out. You cease to be conscious of what isn't novel. Could autism be somehow related to meme immune deficiency disorder, a failure to get bored?

So what are the selection criteria for letting demons on the stage? Which demons do get promoted to the inner circle? Whoa - what inner circle? Alright, yes, there is no Cartesian theater, not exactly, but even Dennett acknowledges that there is something like a consensus that forms (pretty quickly at that) about what the narrative center of gravity is (or was) at any moment in my mind. This may be an artifact of the narrative-spinner demon, the chatterbox, and may not mean as much as it seems with regard to what "I" am thinking, but there is something to the notion that I thought about Fluffy today, but had not in the week before today. There is something like a spotlight of attention on certain trains of thought, even though (as I suspect) there are lots and lots of other trains of thought going on at the same time. So with my invocation of the Cartesian theater, here is an example of my using a discredited metaphor because, dammit, it helps me say what I want to say.

The Spotlight of Attention

While I am here, I should just say that the proverbial spotlight of attention is a bad metaphor, even though I just used it. Attention is actively created, not passively observed. The spotlight metaphor wrongly implies that the thing attended to in the mind already exists, in all its detail, in the dark before the spotlight is shone upon it. In a way, the image of a spotlight of attention is a continuation of the Cartesian Theater. Where did you last see a spotlight drawing your attention? Probably in a theater. Rather than imagining that all of our thoughts, percepts, memories, etc. are all there, fully realized, but in the dark until their moment in the spotlight, it is more likely the case that we function as a sort of just-in-time mental reality generator, creating things on the fly as we "turn our attention to" them. That said, it is hard to stop using this image, just as with the Cartesian Theater itself, for the same reason. There is some sense in which "I was thinking this" or "I was not aware of that, but I am now, for purely internal reasons."

The idea of demons having to pay a cost for inappropriate activation may help improve the "spotlight of attention" metaphor. As a demon, you get to create the spotlight any time you want, making other demons conform to you, just as any loser can pull a fire alarm. Seizing attention is really a way of corralling or bullying other demons into trying to apply themselves to you, even at a cost to themselves of less-than-appropriate activation. Depending on the situation, seizing attention is like issuing an "all hands on deck" with more or less urgency. Attention, then, isn't some spotlight being shone on a particular demon, but is the collective combinations of lots of demons, perhaps with one at the center as a ringleader or catalyst. Things like pain or a threat tend to focus the attention. This may be a way of having one imperative light a fire under all of the demons, in effect shouting at them, "I don't care if this doesn't fit your criteria of applicability! Find a way to apply yourselves to this situation, however suboptimally to yourselves!"

Synthesis/Analysis Feedback Loop

There is one more wrinkle that I want to add to the pandemonium model now. On one hand, there is this idea that parts of my mental processing are performed by somewhat autonomous demons. On the other hand, there is a strong sense that there is some kind of "me" that has some kind of continuity, and that it is intimately connected to my thoughts and percepts, that there is some kind of deep holism at work. Thoughts and percepts have a sort of e pluribus unum, one-yet-many quality, to the point where it seems that in some sense, I am my thoughts and percepts. Even if it is (merely) a narrative center of gravity, there is a sense in which I am conscious of seeing a veterans' monument in the center of town, or I'm not. What does it take for some percepts or constructions of the mind to claim center stage, to command the spotlight? Or should I say, manifest the spotlight?

As I am taking in a complex percept, I have to synthesize perceptual fragments into some kind of whole. Different sense modalities get bound; I discriminate edges, light and shadow, colors, then shapes, tables, chairs, pine-scented air fresheners get recognized as such individually as well as belonging in the larger context. There is no naive perception, so along the way, I (or my demons) do all kinds of scrubbing, smoothing, guessing, extrapolating, etc. I am convinced that even pretty simple perception is more creative than it is generally given credit for.

We get a lot of messy, noisy, patchy data from our senses, and various demons (or committees of demons) take a stab at cobbling different parts of it together into larger coherent (to them) chunks, discarding outliers, making inspired guesses. Eventually, they synthesize a whole bunch of data into a single, unified percept, complete with tendrils of association and valence, framing and background knowledge: ah, a veterans' memorial. On the way to that unambiguous, stable, solid interpretation, however, there was a lot of thrashing around.

Whatever they come up with as a single interpretation, that final, unified percept, is only a first pass. This recalls Dennett's Multiple Drafts idea, although he is a bit vague about how rough drafts get edited. I imagine that as soon as anything like a draft emerges, it gets attacked, more or less. Demons try to break it back down again, along fault lines that they choose, not necessarily into the original components it was synthesized from. This becomes an iterative loop, with the same percept being built up and broken down, with possible subloops happening along the way. Stability (the "final" draft) happens when the result of the synthesis phase of the process no longer differs in successive loops - a consensus has been reached.
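The build-up/break-down loop is, structurally, a fixed-point iteration, and a deliberately simple Python sketch can make that structure explicit. Here a "draft" is just a list of labeled fragments with fixed support scores; the attack phase strips the weakest fragment, the synthesis phase rebuilds a canonical ordering, and the loop runs until successive drafts agree. Everything specific (the fragment names, the SUPPORT scores, the threshold of 3) is an invented stand-in.

```python
# Support each perceptual fragment has gathered from the demons (invented data).
SUPPORT = {"granite slab": 5, "bronze plaque": 4, "engraved names": 4,
           "parked bicycle": 1, "pigeon": 1}

def analyze(draft):
    """Demons attack the draft: strip its single weakest fragment, if any
    fragment falls below the consensus threshold."""
    weak = [f for f in draft if SUPPORT[f] < 3]
    if not weak:
        return draft
    return [f for f in draft if f != min(weak, key=SUPPORT.get)]

def synthesize(fragments):
    """Rebuild a unified percept from what survived, best-supported first."""
    return sorted(fragments, key=SUPPORT.get, reverse=True)

def settle(draft):
    """Loop until successive drafts no longer differ - the 'final' draft."""
    rounds = 0
    while True:
        new_draft = synthesize(analyze(draft))
        rounds += 1
        if new_draft == draft:
            return draft, rounds
        draft = new_draft

percept, rounds = settle(list(SUPPORT))
```

The unambiguous fragments survive untouched after a few passes; a noisier scene would simply take more rounds (and, in the text's terms, engage more demons) before stabilizing.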

For unambiguous input, there are few iterations, little demonic controversy, and the processing is more or less automatic and unconscious. It is the more ambiguous, complex cases that take longer to stabilize, that end up engaging more and more demons. This is a slightly different take than Dennett's fame in the brain, in that it's the battles that get famous.

OK, so I exceeded my mandate as laid out earlier. I said that my goal here was only to expand our vocabulary a bit and give us some colorful metaphors and figures of speech, and I ended up advancing an actual speculative hypothesis. Dennett's pandemonium is a good model, and something like it is quite likely the truth, but it needs fleshing out. I am trying to make it as plausible as I can while stepping carefully around its mysterious bits. As always, the danger with any model is that it ends up sneaking assumptions in that bind us and blind us.


Cognitive Qualia

Even people who accept the Hard Problem as real still often make a distinction between cognition on one hand and qualitative subjective consciousness on the other. Cognition, presumably, is amenable to analysis in terms of information processing, and may in principle be performed perfectly well by a computer. It encompasses Chalmers's "easy problems". Subjective consciousness, or qualia, is the answer to "what is it like to see red?". Qualia is the spooky mysterious stuff that no purely informational or functional description of the brain will ever explain.

I would like either to clarify or eliminate the distinction. What exactly do we mean by "cognition"? When we speak of cognition in a computer, is it really the same thing that we are talking about when we speak of cognition in a human being? When we speak of "cognition" and "qualia", what are the distinguishing characteristics of each, such that we can be sure that some event in our minds is definitely an example of one and definitely not the other? The line between what we experience qualitatively and what we think analytically or symbolically is very hard, if not impossible, to draw. Even with the most purely qualitative impression, there is a troublesome second-orderliness - there is no gap at all between seeing red and knowing that you are seeing red.

Philosophers often talk about intentionality, which is the property of being about something. That is, something has intentionality to the extent that it is representational, or symbolic. Among the things that are often cited as being intentional are beliefs, desires, and propositions. People who talk about intentionality do not usually talk about qualia in the same breath, and vice versa. I believe that this is a mistake.

My Phenomenal Twin

Recall that my zombie twin is an exact physical (and presumably cognitive) duplicate of me, but without any subjective phenomenal experience. It walks and talks like me, and for the same neurological reasons, but is blank inside. There is nothing it is like for it to see red. Horgan and Tienson (2002) suggest an interesting thought experiment that turns the zombie thought experiment on its head.

Imagine that I have a twin whose phenomenal experiencings (i.e. qualia) are identical to mine throughout both of our whole lives, but who is physically different, and in different circumstances (perhaps an alien life form, plugged into the Matrix, or having some kind of hallucination, or a proverbial brain in a vat). The question that screams out at me, given this scenario, but that Horgan and Tienson do not seem to ask (at least not in so many words) is this: to what extent could my phenomenal twin's cognitive life differ from my own? If the what-it-is-like to be it is, at each instant, identical to the what-it-is-like to be me, is it possible that it could have any thoughts, beliefs, or desires that were different from mine? Now, we may quibble over defining such things in terms of the external reality to which they "refer" (whatever that means), and decide on this basis that my phenomenal twin's thoughts are different than the corresponding thoughts in my mind, but this is sidestepping the really interesting question. Keeping the discussion confined to what is going on in our minds (that is, my mind and that of my phenomenal twin), is there any room at all for its cognition to be any different from mine? Charles Siewert (2011) makes similar points in his discussion of what he calls totally semantically clueless phenomenal duplicates.

Think of a cognitive task, as qualia-free as you can. Calculate, roughly, the velocity, in miles (or kilometers) per hour, of the earth as it travels through space around the sun. Okay. Now remember doing that. Besides the answer you calculated, how do you know you performed the calculation? You remember performing it. How do you know you remember performing it? Specifically, what was it like to perform it? There is an answer to that question, isn't there? You do not automatically clatter through your daily cognitive chores, with conclusions and decisions, facts and plans spewing forth from some black box while your experiential mind sees red and feels pain, and never the twain shall meet. You are aware, consciously, experientially, of your cognition. But what exactly is the qualitative nature of having an idea?

David Chalmers has asked whether you can experience a square without experiencing the individual lines which make it up. This question nicely underscores the blurriness of the distinction between qualia in the seeing red sense, and cognition in the symbolic processing sense. When you see a square, there is an immediate and unique sense of squareness in your mind which goes beyond your knowing about squares and your knowledge that the shape before you is an example of one. What is it like to see a circle? How about the famous Necker cube? When it flips for you, to what extent is that a qualitative event, and to what extent is it cognitive?

It's not an illusion, really. There is no sense when the cube flips for you that anything in your field of vision changed. Even a child can see with complete confidence that the lines on the paper did not change at all, but something in the mind did. This is in contrast to illusions in which, say, a straight line appears bent, and you are actually deceived. So the "raw experience" of black lines on white paper did not change, and there is not even a subjective sense that they did, but something changed. The thing that changed has something about it that seems easy-problemish, in that it involves a cognitive inference, an interpretation of the actual lines. Nevertheless, it is visceral, and immediately manifest, as much as the redness of red. Is it not clear that your "cognitive" interpretation of the cube (i.e. whether it sticks out down to the left or up to the right) has its own qualitative essence that outruns the simple pattern of black lines that you actually see? You might say that there is cognitive penetration of our experiences, but you could just as accurately say there is experiential penetration of our cognitive inferences. The classic duck/rabbit image is similar. You can't merely see; you always see as. What is it like to see the word "cat"? Wouldn't your what-it-is-likeness be different if you couldn't read English, or any language that used the Roman alphabet? Your cognitive parsing of your visual field is inseparable from the phenomenology of vision.

The Qualia Of Thought

What is it like to have a train of thought at all? How do you know you think? What is it like to prove a theorem? What is it like to compose a poem? In particular, how do you know you have done so? Do you see it written in your head? If so, in what font? Do you hear it spoken? If so, in whose voice? You may be able to answer the font/voice questions, but only upon reflection. When pressed, you come up with an answer, but up to that point you simply perceived the poem in some terms whose qualitative aspects do not fit into the ordinary seeing/hearing categories.

There is a school of thought that holds that qualia are exclusively sensory - that any "qualia of thought" are qualitative only by inheritance. That is, we actually "hear" our thoughts in a particular auditory voice, or see things in our minds' eye. This is a stretch, and puts the cart before the horse. I, for one, don't think in anyone's voice. Moreover, any qualia of thought are not just tagging along in the form of certain charged emotional states that accompany certain kinds of thoughts. The qualia are right there, baked into the thoughts themselves, as such. "Purely" "cognitive" "content" is itself qualitative, not just the font it is written in, or the voice it assumes when it is spoken, or the hope or the fear that we attach to it.

Anything we experience directly, whether it is the kind of thing we usually associate with sensation and emotion or with dry reasoning and remembering, is qualitative: a song, a building, a memory, or a friend. By definition, all I ever experience is qualia. Even when I recall the driest, most seemingly qualia-free fact, there is still a palpable what-it-is-like to do so. To the extent that our cognition is manifest before us in the mind in the form of something grasped all at once, whether in the form of something which is obviously perceptual or something more abstract, it is qualitative. Otherwise, how would we be as directly aware of our thoughts as we are? How do you know you are thinking if you in no way express your thought physically (writing or speaking it)? A thought in your mind is simply, ineffably, manifestly before you, as a unitary whole, the object of experience as much as a red tomato is.

That we are aware of our thoughts at all in the way we are is no less spooky and mysterious than our seeing red. If you were a philosopher who had been blind since birth, the "what is it like to see red?" argument for the existence of qualia would not have the same impact that it does on a sighted person. If you were also deaf, neither would "what is it like to hear middle C on a piano?". If you were an amnesiac in a sensory deprivation tank, would you have any reason to worry about these mysterious qualia that philosophers think about so much? You would, simply by virtue of noticing that you had a train of thought at all.

"What is it like to see red" or "what is it like to hear middle C on a piano" vividly illustrate the point of the Hard Problem to someone approaching these topics for the first time, but it is a mistake to stop at the redness of red. The redness of red is the gateway drug. Just because the existence of qualia is most starkly highlighted by giving examples that are non-structured and purely sensory, it is a mistake to think that the mystery they point to is confined to the non-structured and purely sensory. Even qualophiles often make this mistake, however. The paradigmatic examples of qualia are good for convincing people that we don't yet have a solid basis for understanding everything that goes on in our heads. It is tempting, however, to think that we are at least on our way to having a basis for understanding what is going on in our heads when we think. My point is we don't have a good basis for understanding that either.

Just as qualia are not just the alphabet in which we write our thoughts, neither are they merely the raw material that is fed into our cognitive machinery by our senses. The qualia are still there in the experience as a whole after it has been parsed, interpreted and filtered. Qualia run all the way down to the bottom of my mental processing, but all the way up to the top as well. We are not, to steal an image from David Chalmers, a cognitive machine bolted onto a qualitative base. Nor, as Daniel Dennett says (derisively), is qualitative consciousness a "magic spray" applied to the surface of otherwise "purely" cognitive thought. Each moment of consciousness is its own unique quale; new qualia are constantly being generated in our minds.

There are qualitative sensations that accompany particular, cognitively complex situations, but which are nevertheless no more reducible to "mere" information processing than seeing red is. Once, looking down from the rim of the Grand Canyon, I saw a hawk far below me but still quite high above the canyon floor, soaring in large, lazy circles. I was hit with a visceral sense of sheer volume - there is no other way to describe it. I felt the size of that canyon in three dimensions, or at least I had the distinct sense of feeling it, which for our purposes is the same thing. This was definitely something I felt, above and beyond my cognitively perceiving and comprehending intellectually the scene before me. At the same time, the feeling is one that is not a byproduct or reshuffling of sense data. After all, as a single human being I only occupy a certain small amount of space, and can have no direct sensory experience of a volume of space on the order of that of the Grand Canyon. Had I not experienced this feeling, I still would have seen the canyon and the hawk, and described both to friends back home. The feeling is ineffable - there is no way to convey it other than to get you to imagine the same scene and hope that the image in your mind engenders the same sensation in you that the actual scene did in me.

Nevertheless, the feeling that the scene engendered in me only happened because of my parsing the scene cognitively, interpreting the visual sensations that my retinas received, and understanding what I was looking at as I gazed out over the safety railing. The overall qualitative tone of a given situation depends crucially on our cognitive, symbolic interpretation of what is going on in that situation. Further, the individual elements of a scene before us have qualia of their own apart from the quale of the whole scene (e.g. there may be a red apple on a table in a room before us - it has the "red" quale, even though it is part of and contributes to the overall quale we are experiencing of the entire room at that particular moment).

There are some qualia, moreover, that are inherently inseparable from their "cognitive" interpretation, experiential phenomena that are especially resistant to attempts to divide them into pure seeing and seeing-as. In particular, as V. S. Ramachandran and Diane Rogers-Ramachandran pointed out (2009), we have stereo vision. When we look at objects near us with both eyes, we see depth. This is especially vivid when the phenomenon shows up where we don't expect it, as with View Masters, or lenticular photos (those images with the plastic ridges on them that are sometimes sold as bookmarks, or come free inside Cracker Jack boxes), or 3D movies. This effect is, to my satisfaction, unquestionably a quale. It is visceral. It is basic. You could not explain it to someone who did not experience it.

At the same time, it is obviously an example of seeing-as, part of your cognitive parsing of a scene before you. One might possibly imagine some creature seeing red without any seeing-as, unable to interpret the redness conceptually in any way, but it is impossible to imagine seeing depth in the 3D way we do without understanding depth, without thereby automatically deriving information from that. To experience depth is to understand depth, and to infer something factual about what you are looking at, to model the scene in some conceptual, cognitively rich way. Stereoscopic vision is our Promontory Summit, where the Hard and easy problems collide. It is an entire distinct sense modality, but one that is inextricably bound up in our informational processing of the world.

Naive, Or Pure Experience

What we know informs what we experience. I take it as pretty much self-evident that it is almost impossible to have a "pure" experience, stripped of any concepts we apply to that experience. Everything we experience is saturated with what we know, or think we know, what we expect, what we assume, etc. Some philosophers seem to think that there is such a thing as pure perception, a phenomenal basis of experience, untainted by anything we might call cognition. According to this line of thought, as much as our experiences are shaped by many layers of cognitive processing, there is some kernel of experience that is shared by us, and, say, a raccoon, viewing the same scene (ignoring differences in physiology, eyesight, etc.). I do not believe this is true, or at best, it is a mischaracterization of the situation. What I share with the raccoon is not an experience, but the raw bitmap comprising the pattern of stimulation of the rods and cones on our retinas. This bitmap is fed into a lot of processing machinery, and in that sense contributes to an experience, but itself is far from constituting an experience.

The experience I have, and presumably that which the raccoon has, is made of seeing-as: seeing this blob as a hydrant, that one as a cloud, this splotch as the sun, that one as an object that I could touch, and that will probably persist through time. However experiences happen in minds, they are the result of lots of feedback loops at lots of different levels, all laden with associations and learned inferences, all stuff we might call cognition. There is no such thing as pure seeing, separated out from any seeing-as.

Through an act of willful intelligence, I could decide to concentrate only on those things in a scene before me that begin with the letters M, N, and G. Alternatively, I could choose to pay special attention to those things made of metal. In the same way, through willful, intelligent effort, I can try to distill some "pure experience" from the scene, and come up with something like a raw bitmap, perhaps for the purpose of painting a picture on a canvas of the scene. But even if this effort could possibly ever be 100% successful (I question this - could you really discard your understanding of object permanence?), this is further processing, more cognition, not less. For this reason, it is not quite right to say that I am starting with my cognition-soaked experience and working to get back to the "raw" experience, because that presumes there was originally such a thing to get back to.

For all but perhaps the most basic qualia (a burn, a tickle, a pin prick), a pure experience, devoid of cognitive processing, is an abstraction, a conceit dreamed up by philosophers that has never actually been observed in the wild. It is a step in exactly the wrong direction, however, to thus conclude that knowledge and concepts can take full responsibility for experience, and that knowledge and concepts are among Chalmers's "easy" problems, solvable within reductive materialism. This step entails discarding qualia altogether, and concluding that experience is cognition all the way down. In contrast, experience is more accurately seen as qualia all the way up. Nevertheless, this is the step, more or less, that Dennett takes, and it is the step that adherents of Higher Order Thought take.

Higher Order Thought (HOT) Theories

Higher Order Thought (HOT) theories, whose most prominent proponent over the years has been David Rosenthal, go more or less like this. We have sensory impressions, as of a red apple, but these are just lower-order thoughts (LOT), and thus not qualitatively conscious. It is only when we have an additional thought about that first, lower-order thought (this additional thought being the higher-order thought or HOT) that the whole thing becomes fully, qualitatively conscious. It is the HOT that assumes the role of "applying concepts" (whatever that means) to the LOT.

The Hard Problem qualia stuff only happens in or by virtue of the HOT, while the LOT does some kind of initial perceiving or cognitive processing. The HOT theorist wants to posit qualitative consciousness as some kind of second order effect, or some kind of "reflection on" (mere) perceptual processing. I've never been entirely clear as to where the consciousness actually happens, in the LOT upon being reflected upon by the HOT, or in the HOT itself. Either way, the theory is not terribly satisfying. Once again, we have magic being snuck into the model under the guise of the never-quite-explained "aboutness" relation.

Let us think like engineers and imagine a module called LOT and another module called HOT. Somehow or other, the LOT talks to the outside world through our nervous system, so that is one big honking bidirectional communications channel. In similar fashion, the HOT must be connected to the LOT with a different bidirectional communications channel. This channel can have any bandwidth we want - the sky's the limit. Note that I have not added anything to the HOT/LOT model that wasn't already implied. HOT and LOT, by hypothesis, are distinct entities, and they surely must communicate - hence the existence of a channel between them. So we have something that looks like this: HOT ↔ LOT ↔ world. Sense impressions come into the LOT from the outside world, over its communications channel with that world. The LOT may do some processing, perhaps modeling the inputs in some stateful way, then some signaling passes between the LOT and the HOT and we get consciousness.

What is so special about the HOT, and the signals that pass over its channel to the LOT that make it uniquely capable of instantiating consciousness, while ruling out in principle, the possibility of any signals passing between the LOT and the external world and achieving the same result? Couldn't we swap out either the LOT or the HOT and replace it with something else, like a dumb tape playback of a prior run of the experiment? In this case, the other module (the one we didn't swap out) would never know the difference, as long as the module we swapped out kept up its end of the conversation over the communications channel. What is it about that conversation, the signaling over the channel, that confers consciousness upon one of the modules on either end of the channel? And why is this any less mysterious than the original Hard Problem, of how consciousness arises from "signaling" over the channel provided by our sensory system? HOT theories don't solve the Hard Problem, they just kick it up a level: the LOT is not conscious based on the bits it exchanges over the wire connecting it to the outside world, but the HOT is conscious based on the bits it exchanges with the LOT. If the HOT theorist prefers to use leading language, saying the HOT is about the LOT, or targets the LOT, or is applied to the LOT, maybe that's fine, but they need to explain what they are talking about if they attribute any real powers to this relation that can not, in principle, be reduced to bits on a wire.
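The tape-playback swap can be made concrete with a toy sketch. The module names and the "protocol" below are invented for illustration, not anything a HOT theorist has specified; the point is only that if all a module ever receives is bits on a channel, a recorded replay of those bits is, from its side of the wire, indistinguishable from the live module:

```python
# Toy model: HOT <-> LOT as modules that only ever see bits on a channel.

class LOT:
    """Live lower-order module: 'perceives' and reports over its channel."""
    def signal(self, stimulus):
        return f"percept:{stimulus}"   # the bits it puts on the wire

class TapeLOT:
    """Dumb playback of a prior run - no perceiving going on at all."""
    def __init__(self, recording):
        self.recording = list(recording)
    def signal(self, stimulus):
        return self.recording.pop(0)   # replay, ignoring the stimulus

def hot_module(lot, stimuli):
    """Higher-order module: all it ever receives is bits over the channel."""
    return [lot.signal(s) for s in stimuli]

stimuli = ["red", "middle C"]
live = hot_module(LOT(), stimuli)          # run with the real LOT
recorded = hot_module(TapeLOT(live), stimuli)  # run against the tape

# From the HOT's end of the channel, the two runs are identical.
assert live == recorded
```

Whatever consciousness-conferring power the HOT/LOT channel is supposed to have, it cannot reside in the bits themselves, since the tape reproduces those exactly.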

HOT theories are motivated by a single intuition - that of the strange second-orderliness of consciousness. We generally distinguish between qualia and knowing things, yet there seems to be no daylight between seeing red and knowing that I am seeing red. The cognitive penetration of even the rawest of "raw perception" is a real thing, and we need to think hard about it. There are some deep questions regarding the feedback loop between qualia and cognition, and the way they interact and influence each other on the fly. HOT theories, however, merely restate the question without answering it. The model they propose articulates the introspective intuition that there is a strange interdependence between what we usually think of as the experiential and what we usually think of as the cognitive, formalizing that intuition by drawing a neat block diagram. Articulating the observed intuition, however, does not answer any of the questions it presents.

Worse, HOT theories assume that we can cleanly separate out qualia from cognition, and that this "aboutness" relation between them is straightforward and unproblematic, when these are exactly the assumptions we should be questioning. The really interesting question to ask when presented with the funny interdependence between cognition and qualia is where we ever got the idea that they were completely different kinds of things in the first place.

Naive, Or Pure Cognition

Many philosophers agree that in minds, qualitative consciousness and cognition are closely related, if not two ways of seeing the same thing, but make the mistake of concluding that qualia must therefore be merely information processing, which we think we understand pretty well. "Information" is a terribly impoverished word to describe the stuff we play with in our minds, even though much of what is in our minds may be seen as information, or as carrying information. Shoe-horning mind-stuff into the terms of information theory and information processing is a homomorphism, a lossy projection. There are no easy problems in the easy vs. Hard Problem sense. The way the mind processes information has a lot more in common with the way the mind sees red than it does with the way a computer processes information.

Once again, the computer beguiles us. Of course, we built it in our own image, so it is no surprise that it ends up being an idealized version of our own intuitions of how our minds work. We understand computers down to the molecular level; there are no mysteries at all in computation. And clearly, in some sense at least, computers know things, and they represent things. I can get some software that will allow me to map my entire house on the computer, to facilitate some home improvement projects I have in mind. And lo! my computer represents my couch, and seems to understand a lot about its physical characteristics, and it does so completely mechanically, and we can scrutinize what it is doing to achieve that understanding of it all the way down to the logic gate level. We are thus confident that we know exactly what is going on when we speak of knowledge, representation, information processing, and the like. There is nothing mysterious here, at least in the mechanics of what is going on.

Just because we understand computers, and computers seem to know, think, remember, infer, etc. we should not therefore think that now we understand those things. We do not study cave paintings as clinically accurate diagrams to learn about the human and animal physiology depicted therein. We study them to learn how their ancient creators saw themselves and their world, to get inside their heads. The real insights to be gained into the mind from computers come from considering that this, this particular machine, is how we chose to idealize our own minds.

I can write "frozen peas" on a grocery list, and thereby put (mechanical) ink on (mechanical) paper. Later, when I pull out the list at the store, and it reminds me to put frozen peas in the cart, this physical artifact interacts with photons in a mechanical way. The photons then impinge upon my sensory system, and thus, in turn, my mind. So the paper and ink system represents frozen peas; it knows about them. Of course, most computers we use today are a bit more complex than the paper grocery list, but the essence is the same - there is the same level of knowledge, representation, information processing, etc. going on in each. We can say that in a sense, the list really does know about the frozen peas, but not in a way that necessarily gives us any insight at all into how we know about peas.

There is no pure cognition in the mind, at least none that we are directly aware of. Over a century ago, philosophers did not separate cognition and qualia the way they do now. It was only in the early part of the 20th century, with the ascendance of behaviorism and the advent of Information Theory and the Theory of Computation, that we Anglophone philosophers started thinking that we were beginning to get a handle on "cognition" even if this qualia stuff still presented some problems. When some thinkers felt forced to acknowledge qualia, they grudgingly pushed cognition over a bit to allow qualia some space next to it in their conception of the mind, so the two could coexist; now they wonder how the two interact. The peaceful coexistence of cognition and qualia is an uneasy truce. Qualia can not be safely quarantined in the "sensation module", feeding informational inputs into some classically cognitive machine. We must radically recast our notions of cognition to allow for the possibility that cognition is qualia is cognition.


Knowledge

Is Knowledge Something Philosophers Ought To Study?

Epistemology is one of the major branches of Western philosophy. It is the study of how we know what we know. It has been studied for about as long as we have had philosophy, which is to say for thousands of years in one form or another. My problem with epistemology, as traditionally done, is that it is infected with crypto-Platonism.

Traditional epistemologists spin theories of knowledge, guided by common colloquial usage of the word "knowledge" and by their man-in-the-street intuitions of what counts as knowledge and what does not. This would be fine if they openly declared that usage and everyday intuitions were the explanandum, such that what they were really after was a theory that explained usage and intuition, but I've never seen anyone come out and make such a declaration. It would also be fine if they openly declared (and argued for!) the claim that our usage of the word "knowledge" and our intuitions about it were true and faithful reflections of some underlying natural kind, and that using this usage and intuition as falsifiability criteria for their theories was the best way to get at the truth about this natural kind, but they never seem to make this declaration or argument either. I find it frustrating that they do not see the need to even nod in this direction.

My accusation of crypto-Platonism stems from my impression that traditional epistemologists do not, in fact, think that they are studying "mere" usage and fallible intuition, but are zeroing in on some Fact Of The Universe. We are left to infer that epistemologists believe that there is some perfect unchanging definition of knowledge but we just haven't found it yet. So they dive in and "study" this stuff called knowledge, and start coming up with "theories" of it, with various definitions and theories being ruled out because they are susceptible to counter-examples that "clearly" don't constitute true knowledge.

This approach is self-defeating in that to the extent that you elevate your initial intuitions to the status of being the final arbiter of the truth of your theory, you can never reach a counter-intuitive conclusion. The earth revolves around the sun, and not the other way around; time and space behave differently for observers in different frames of reference. If you throw away any theories that contradict your initial intuitions, you are pretty much guaranteed not to find any deep but hidden truths. You have resigned yourself to being Ptolemy, not Copernicus.

To get a sense of who I am talking about here, read a little about reliabilism, Gettier problems, or epistemology in general.

Is Knowledge Elemental Like Hydrogen, Or Like Aristotle's Fire?

Is knowledge a natural kind, like hydrogen, a really-there category of stuff in the universe? Is it the sort of thing about which one might have real predictive theories, theories that might turn out to be right or wrong, perhaps in defiance of our initial intuitions? Or is knowledge more like most of the loosey-goosey cluster concepts we use in everyday life? Or is it the case that in everyday conversation we are sloppy and broad in our use of the term, but when we zero in on a real, rigorous definition, we will find that true knowledge actually applies to a subset (or superset) of what we have been calling "knowledge" all this time?

As we investigate and speculate and theorize, even if we do find some natural kind that lies at the core of our notion of knowledge, we may end up finding that this natural kind, in itself, may or may not line up perfectly with our common usage of the term. In this case we may end up redefining the term "knowledge" in a more technical sense, and this new redefined usage may well exclude some things we might ordinarily call knowledge, and include some things we would not ordinarily call knowledge. On the other hand, if there is a natural kind down in there somewhere but it is just too disjoint with our common usage of the term "knowledge", we may just coin a new term and say that our ordinary notion of knowledge is based on this new thing. Moreover, if there simply is no natural kind that undergirds our everyday notion of "knowledge", we might just stop talking about it altogether in philosophical papers.

The situation with knowledge is a lot like the elements of Aristotelian physics. I'm not saying there is no such thing as knowledge as traditionally conceived, any more than I would say that air, water, earth, and fire don't exist. It's just that you can spend centuries calculating the exact ratios of the four "elements" in the various substances you see around you, and you won't have really explained anything in terms of something more fundamental; you won't have carved Nature at the joints. I suspect that our everyday understanding of "knowledge" is more like Aristotle's earth, and less like hydrogen.

So before we spend hundreds of pages arguing about whether or not knowledge is justified true belief, and coming up with counterexamples, we should take a step or three back and figure out what, roughly, the explanandum is. Whenever we do the philosophy of X, we need to review why we think there is any such thing as X in the first place. In this, we can only be guided by our intuitions and common usage (keeping in mind that we declare this up front, and allow intuition and usage to point the way to the phenomenon in question, and do not take them to be any kind of final authority). So we should try to nail down why we think there is any such thing as knowledge, and what makes us think it might be a natural kind, and what intuitions we have about it that we might be willing to sacrifice for the sake of retaining the name "knowledge" to apply to whatever natural kind we end up identifying at the core of our everyday usage of the term.

Do we have any other intuitions about knowledge that might help to constrain the problem? Sometimes, when talking about science, we use the term "know" very broadly, anthropomorphizing: the oceans on Earth know about the moon through its gravitational influence. In this case, to know is to be causally influenced. I think we can safely write this off, at least provisionally, as a case of speaking so metaphorically as to be not "true" knowledge, as we usually understand the term. This move can be abused, though, so we should use it sparingly and only in the most extreme cases.

Types of Knowledge

What kinds of things do we generally count as knowledge? What makes us think there is any such thing as knowledge, even as a sloppy, colloquial concept, let alone as a perfect unchanging Platonic Idea? This list is not meant to be exhaustive, nor do I mean to imply that these are fundamental categories with crisp boundaries and no overlap, but here are a few different senses in which we use the term "knowledge".

First, there is knowledge by direct acquaintance. When I see red, I know that I am seeing red. When I am in pain, I know that I am in pain. This gets back to that troublesome second orderliness of qualia. Qualia and knowledge of qualia seem pretty inseparable. This observation alone is pretty much all that constitutes Higher Order Thought (HOT) theories.

Closely related to knowledge by direct acquaintance is knowledge by brute association, like the knowledge that fire is hot. Simply reacting to something based on past experience may count as this sort of knowledge. It could be argued that a dog can know that fire is hot, or that it knows it will be smacked with a rolled up newspaper if it steals the meatloaf off the counter. Moreover, I think that you could plausibly argue that a dog can know that fire is hot, but that it can't have knowledge by direct acquaintance, e.g. "I am hot right now". Characterized as it is, functionally, in terms of stimulus/response, knowledge by brute association is a lower bar to clear than possessing the sense of self necessary to know that you are experiencing what you are experiencing. Then again, forming a brute association and acting accordingly might only count as knowledge in a very loose, broad sense. You might be able to think without words, but can you really know without words?

There is also factual, verbally encoded knowledge like Shakespeare quotes or the fact that Moscow is the capital of Russia. As I have argued before, though, even this kind of "dry cognition" is shot through with qualia of its own. Nevertheless, this is the kind of knowledge that rubs up against all kinds of questions about language and the role it plays in our mental lives, since this kind of knowledge seems inherently linguistic. It is digitally encoded, so to speak. Using the power of language, we can know far more complex things than we could ever keep straight using direct acquaintance or brute association alone.

Moving even farther along in terms of abstraction, we have definitional knowledge, which is even more linguistically encoded, and really is about the way we define the terms themselves. The paradigmatic example of this kind of knowledge is that all bachelors are unmarried. That all bachelors are unmarried is a fact, and a fact that I know, but it is not an empirical fact that I found out somehow, but rather it is inherent in how I define my terms in the first place. It is a form of a priori knowledge. I would also include 2 + 2 = 4 in this category. Of course, this opens up all kinds of possibilities about mathematical truths and the extent to which we are discovering anything "out there" when we do math, as opposed to playing with our own minds. Without going down the philosophy of mathematics rabbit hole too far, I think we can say that I am saying something about how I define 2, 4, +, and = when I assert that 2 + 2 = 4. There is no possible universe in which 2 + 2 does not equal 4. If you say there is, then you are defining your terms quite differently than I do, and it is just a matter of you using a different language, at least with regard to these terms (for instance if you decided that "+" means minus, or that "4" means -9).
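The claim that 2 + 2 = 4 is fixed by how the symbols are defined can be made vivid in a proof assistant. In Lean, for example, the statement is closed by `rfl` (reflexivity): both sides reduce to the same numeral just by unfolding the definitions of `2`, `4`, and `+`, so no empirical fact and no further argument is needed.

```lean
-- '2 + 2 = 4' holds by definitional unfolding alone: both sides
-- compute to the same natural number, so reflexivity proves it.
example : 2 + 2 = 4 := rfl
```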

We also have inferential, or implied knowledge. If all men are mortal and Socrates is a man, then Socrates is mortal. This is related to definitional knowledge, in that the truth to be known here is implied by the way the terms are defined, but instead of automatically falling out as soon as you know what the words mean, you have to put a little more work in to connect the dots. You can be a perfectly competent language user, with a good command of the terms and their meanings and still not grasp all of the implications of the premises that you know. Indeed, lots of the work people do in mathematics is that of working out obscure implications that are implicit in the premises that are well known to every freshman. Logical inference does not come for free. Knowing the premises is not the same thing as knowing the conclusion.

Once we connect those dots, however, maybe implied knowledge is just a kind of factual knowledge, since once we make an inference, it is knowledge, full stop. Moscow is the capital of Russia, Socrates is mortal. Not quite, though. There is a sense of certainty that clings to knowledge that we derive from other knowledge. Also, depending on context, we sometimes attribute knowledge of this sort to people even when they haven't connected the dots explicitly, but might perfectly well be expected to be able to do so: The power went out, so you know you won't be microwaving a frozen burrito in the immediate future, even if you are just starting to get hungry and had not quite articulated to yourself the desire for a microwaved burrito.

Finally, there is knowledge of missing knowledge: gaps that are so narrowly circumscribed that they are defined by their negative space in my mind. This is not really another category of knowledge itself, but a form of knowledge about our own minds and how knowledge fits into them. It is really meta-knowledge, like my knowledge that I don't know Richard Nixon's birthday or most of the constellations in the night sky.

Bundles Of Counterfactuals

In all of the examples of types of knowledge above, what does it mean to know? I know I am seeing red because I am seeing red. I know that fire is hot, because if I reached my hand out into a fire, it would burn me; if I put a piece of metal in a fire, it would get hot, and it would burn me. If I got on an airplane to Moscow, I would end up in the capital of Russia, and all of the assumptions and expectations I would have on the basis of that knowledge would be validated. If I act on my inferred knowledge, my expectations will likewise be validated, and if I only had the combination to the keypad lock, I could get out of this room.

If, if, if. In all but the first case (knowledge by direct acquaintance), the word "if" plays a role. When asked to explain or even describe our knowledge, we almost always immediately turn to hypotheticals. In the first case, my direct knowledge of my own conscious state, there is no hypothetical, since the "if" clause is already happening - it is the degenerate trivial case of a hypothetical, in a sense.

I know the cast resin garden Buddha is hard, and I know this with certainty. What does it mean that I know this? I have an immediate, palpable sense that if I were to touch it, if I were to drum my fingernails on it, if I were to rap it with my knuckles, it would feel hard. Some of our expectations regarding these hypotheticals are immediate and sensual, while others are complicated and a little more abstract. I know that I have a certain balance in my checking account because if I tried to buy a roller coaster for my back yard, the debit would be declined.

Is knowledge of something, then, (just) a big bundle of hypothetical expectations? Could it be that we think we have a map, and know things from above, as it were, but really we just have a very elaborate set of directions, situation-specific instructions and chains of if/then clauses that present themselves instantaneously on demand? Does what we think of as descriptive information in our minds end up resolving into a whole lot of prescriptive information with no remainder? And what, in turn, does it mean to have mastered the hypothetical? To know that if…then…?

If/then clauses have a decidedly algorithmic, prescriptive ring. One associates them with computer programs. To resolve them, you run through the cases. You compute. Could it be, in fact, that we do not actually know in the direct sense that we think we do, for instance, that the garden Buddha is hard? Could it be that we only cognitively judge ourselves to know, and have a very good system for coming up with justifications on demand? If this were true, our "knowledge" of something would really just be a warm, fuzzy confidence that we know, rather than what we normally think of as true, immediate, internalized knowledge. When does complete, just-in-time predictive power and mastery of the hypotheticals become essence? How do you know that you know, really? Even something as seemingly definitional as 2 + 2 = 4? You feel certain that you grasp the meaning, and its inherent truth, all at once, but this is an appeal to introspective intuition. As a qualophile, I'm all for appeals to introspective intuition, but qualophobes often engage in intuition-shaming.

This take on knowledge is analogous to what Daniel Dennett thinks about qualia. He claims that we don't actually directly experience in the way we think we do, but we (merely) judge ourselves to experience. We actually have a really good mechanism for answering any questions immediately about our field of "experience", and we tell ourselves cognitively that we experience "directly". Could knowledge be that way?

What Is It Like To Know?

No, and for the same reasons that Dennett is wrong about qualia. I can know that the Buddha is hard, and really sense its hypothetical hardness without actually taking the time to run through any of the imaginary scenarios of touching, drumming, rapping. Again, a smeared-out process becomes a single, unitary thing, grasped all-at-once. In our minds, the prescriptive becomes descriptive. Process becomes thing. We see the algorithm from above, without running through the if…then… cases like mice in a maze. For us, if…then… is not a matter of execution paths, but a more holistic, from-above ifthenishness. The counterfactuals are not just our way of expressing or explaining our knowledge, but are right there, baked into the knowledge itself, and into our sense of having that knowledge. There is a what-it-is-like to know the Buddha statue is hard. I know the Buddha statue is hard with the same sort of certainty that I know that it is hard when I am actually stubbing my toe on it. I am directly acquainted with my knowledge of its hardness. The functional, prescriptive construal of knowledge sounds plausible in the abstract, but fails on subjective, qualitative grounds, just as a purely functional construal of qualia does. There is such a thing as really-there (as opposed to may-be-seen-as) descriptive information, if only in our minds.

In fact, (and if you have been following along you probably saw this coming), I'll take it to its next logical step: knowledge is a quale. Like a lot of qualia, it is a complex all-at-once kind of quale. Interestingly, it is also a Lego-stackable quale, in that it constrains or modifies, or calls into being, other qualia. Knowledge applies itself on the fly as the situation calls for it, or seems to present an opening for such application, and incorporates all those implied hypothetical scenarios instantaneously in some way, so that they don't actually have to play out through time in your mind. The ways in which a piece of knowledge can construct or constrain other thoughts you might (or might not) come up with is an inherent part of the knowledge itself. Pieces of knowledge seem to insert themselves and stack and self-organize as appropriate. They are active, and interactive.

The troublesome second-orderliness of knowledge mirrors that of qualia: seeing red seems inseparable from knowing that you see red, just as knowing that Moscow is the capital of Russia seems inseparable from knowing that you know that Moscow is the capital of Russia. If knowledge-qualia constrain other qualia, and aggressively seek to apply themselves, it makes sense that they would insert themselves into any declarations of state by whatever our self-model might be, and we would know that we know, and know that we know that we know, etc. As with implied knowledge, we don't have to drag the whole derivation of the knowledge into play when we know that we know something; you can know that you know your own phone number without reciting it mentally.

Treating knowledge as a particular kind of qualia, or a way of thinking about qualia, I think, is the most promising approach in terms of zeroing in on knowledge as a natural kind. This is consistent with my general internalism about things like language, meaning, and reference. What is philosophically interesting, and really distinct about these things is not the complex of physical interactions out there, but whatever is happening in here. If you choose to define your terms in such a way as to be an externalist about such things, I can't stop you, and you may be able to come up with a self-consistent system for talking in the terms you define, but you won't have carved Nature at the joints.

The take-home message here is that qualia are not just some magic spray that coats our otherwise functional machinery, or some kind of mood that washes over our minds. Qualia are what our minds are made of, the girders and pistons as well as the paint. The big question, then, is how should we think of this holistic, process-as-thing, data-as-algorithm, all-at-once, seen-from-above stuff, that we see in our own minds but nowhere else in nature? Thinking in terms of computation will only take us so far, and at a certain point, thinking in computational terms will actually mislead us. We need a new model, a new way of thinking about this.


One man's algorithm is another man's data.

Doesn't It All Just Come Down To Information?

What Even Is Information, Anyway?

"Information" is one of the great buzz words of the last several generations. The term has been in use in the English language for centuries, but it started to be used in its present technical sense in 1948, when Claude Shannon, a brilliant communications engineer working for the phone company, published "A Mathematical Theory of Communication", ushering in the field of inquiry now known as Information Theory. He formalized the use of the term, and made it mathematically quantifiable. He thought of information as sequences of bits, or ones and zeros.

Claude Shannon was not a philosopher, he was an engineer. He mathematicized information so that he could calculate, for example, that a communications channel capable of transmitting X bits per second with an error rate of up to Y bits per 1000 could be used to transmit Z bits per second error-free (where Z is somewhat smaller than X), given some sort of transformation of the information on either end of the communications channel. He was concerned with noise on the wire. He was concerned with characterizing the "information density" in a given stream of bits so that by compressing the stream (i.e. increasing the information density) one could effectively transmit the same amount of information using fewer bits and therefore less bandwidth on the communications channel, thereby saving the phone company money.
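To give a feel for the quantities Shannon worked with, here is a minimal Python sketch (my illustration, not Shannon's own formulation) of first-order entropy, the measure behind the idea of "information density": a stream with evenly balanced symbols scores the maximum of 1 bit per symbol, while a highly redundant stream scores far lower, which is exactly what makes it compressible.

```python
import math
from collections import Counter

def entropy_bits_per_symbol(bits: str) -> float:
    """First-order Shannon entropy: H = -sum(p * log2(p)) over symbol frequencies."""
    n = len(bits)
    return -sum((c / n) * math.log2(c / n) for c in Counter(bits).values())

# An evenly balanced mix of 0s and 1s: the maximum of 1.0 bit per symbol.
print(entropy_bits_per_symbol("0110100101"))        # 1.0
# A mostly-0 stream carries less information per bit, so it compresses well.
print(entropy_bits_per_symbol("0" * 60 + "1" * 4))  # about 0.34
```

This counts only symbol frequencies; Shannon's full theory also accounts for correlations between symbols, which is where most real compression gains come from.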

Essentially, Shannon was interested in very practical, meat-and-potatoes sorts of questions. Others, however, have not been so conservative. Information theory has inspired many philosophers to make extravagant claims, and information has become one of the most popular bases in the reductionist's toolkit. That is, just about everything at one time or another has been argued to be really just information, or information processing.

Of particular interest here, of course, are minds and consciousness. Indeed, the entire cognitive science program is predicated on the notion that the brain is (just) a complicated information processor - that not only can it be seen in terms of information processing, but that seeing it in these terms captures what is interesting about the brain in its entirety. A consequence is that any similarly configured information processor of equal capacity would manifest a mind in every sense that the brain itself manifests one. These sound like large and sweeping claims, but we cannot even know whether they are (let alone whether they are true) until we nail down exactly what is meant by "information" and "information processing".

In what sense does an information processor actually process information? How does it manipulate symbols? In spite of the well-developed field of information theory, it is devilishly hard to find anyone who commits an actual definition of the term "information" to print. While qualophiles may not have answered the corresponding questions for the term "qualia", they acknowledge at least that there is work to be done along these lines. People on both sides of the Hard Problem debate, however, too easily assume that we know what we are talking about when we speak of information and information processing. Information is more difficult to pin down than is generally accepted, and there are very different things that are meant by the term depending on the context.

"Information" is a perfectly fine English word, and it has been in use for a long time. For all I know, Shakespeare may have used it. Everyone has a rough and ready, colloquial sense of what it means, and they use the word to communicate with each other every day. It also has this highly technical, bits-and-bytes-on-a-wire meaning. Mischief and confusion result from this mismatch, so if we are going to define anything in terms of information, we better be clear about what we mean, or at least we should have some distinct lines we can draw between information and not-information.

Information Is A Platonic Abstraction

There are molecules of ink on a page made of more molecules; there are perturbations in a physical electrical field on a metal wire; there are photons of light which propagate through an optic fiber. When I look inside a computer, I see voltage levels, and diodes which behave differently when subjected to different voltage levels. All of these things (or collections or patterns thereof) may be seen as information, but the key phrase there is "may be seen".

Information theory is a branch of mathematics, and bits (0s and 1s), like lines and points in Euclidean geometry, don't really exist, at least not out there in the real world. They are Platonic abstractions. We may profitably see things that are really there (like voltage levels) as information, and make generalizations, and hence predictions about those voltage levels based on our analysis, but the specific predictions we come up with will never be anything that we could not, in principle, have derived from a sufficiently detailed knowledge of the physical system alone without reference to any notion of "information".

Information is an abstraction, and abstractions, to a physicalist, must be cashed out in terms of the nuts and bolts that make up the actual physical universe. In practice, it may be very difficult for us to make useful predictions about an information processing system at the level of raw physics, but the universe itself has all it needs to clank along, one moment to the next, without our notions or theories of "information". Put differently, once God had established all the physical facts of the universe (i.e. the physical laws and initial conditions) He did not have to do any additional work to determine the facts about information processing. Everything the universe needed with regard to information was already baked in.

Information is always carried, or manifested, by something else. More pointedly, information always just is something else. By itself, information doesn't do anything. There is always something else doing the work, and that something else would do that work whether or not we think of it as informational. It is not merely the case that the information needs a substrate to instantiate it - the information just is the physical substrate, just as heat just is the mean kinetic energy of a collection of molecules. A system may be seen as informational, and we may thereby derive interesting and important conclusions, but these conclusions will themselves be may-be-seen-as conclusions, couched in terms of the abstractions of information theory.

But when people invoke the term "information" to describe some physical stuff interacting with other physical stuff, they are not usually talking about the stuff itself as such. Information is necessarily abstract. It is not the voltage levels or the ink, but the pattern of voltage levels or ink. As Rosenberg has pointed out (1998), the informational content of anything, whether ink on a page or electrical impulses on a wire, is a bare schema, or a pattern of bare differences. That is to say, the differences by virtue of which something is considered to be information are differences that are circularly defined in terms of each other. What is 0? It is not 1. What is 1? It is not 0. And this is all you ever need to know, all there is to know, about 0 and 1.

0 and 1 can be manifested, or carried, by any medium capable of assuming two distinguishable states (voltage levels on a wire, water pressures in a hydraulic system, wavelengths of light on an optic fiber). This substrate must have a nature of its own that outruns the simple criterion of distinguishability of states necessary to carry, represent, or manifest the abstract 0s and 1s of the purported information itself. One of information's distinguishing characteristics is that it is independent of its particular carrier. Information is arbitrarily transposable, or, to use a popular term, it is multiply realizable.

As I (and others) have argued, qualia are not arbitrarily transposable. Qualia are not themselves information, although they can carry information. Qualia are not a pattern of anything else, but the stuff of which patterns can be made, the substrate whose nature outruns the criterion of (mere) distinguishability. Redness is a qualitative essence and cannot survive any transformation or translation into anything but redness. Some information could turn out to be conveyed by qualia, but qualia can't ever turn out to be (just) information.

Information Represents

Understanding, then, that when we speak of "information", we are not speaking about something real in itself, but rather about some good old physical thing that may be seen as carrying or manifesting information, what makes some physical things count as information and others not? It might be tempting at this point to turn from the strictly syntactic notions of information theory to a more semantic characterization of information. We might say that information represents something.

If we go there, however, we have left Claude Shannon behind. He and the phone company don't care what bits represent, or whether they represent anything at all. We are no longer in the quantitative, technical realm of bits, bytes, formulas, and information theory, and we have entered the squishier world of connotation, context, and intuition.

What does it mean for information to represent (without circular reference to information)? What do we mean when we use the term "represent"? What is the core intuition or experience that leads us to use the term the way we do? Does the light from distant stars, striking an earthly telescope, constitute information that represents the stars? Do all effects represent their causes, simply by virtue of the fact that someone might potentially be able to infer the cause (or something about the cause) just by observing the effect? Leaving out the loaded terminology of "someone inferring", representation might be even broader, just any cause of any effect anywhere.

We might then start by saying that thing1 represents thing2 if thing1 is caused by thing2, or if thing1 varies in regular, lawlike ways as a function of variations in thing2. Some people have said with a straight face that information is "a difference that makes a difference." But this is too broad to be any use at all. Since from the time of the Big Bang, each particle in the universe has some influence on every other particle (from the non-zero gravitational influence that any two objects of non-zero mass exert upon each other, if no other), everything is caught up in the causal mesh - everything behaves just the way it does as a function of everything else (at least, everything else in its "light cone", if you want to be physically accurate). If information is anything which is caused by other things in lawlike, regular ways, then everything is information. In fact, everything is information about everything else. If everything is information about everything, then the term is nearly useless, and should be replaced, in philosophical debates, with the more honest term "stuff". And "information processing" could reasonably be replaced with the term "stuff doing stuff because of the influence of other stuff".

Representations And The Mind

Representation has been a hot topic in analytical philosophy for some time now. As with the idea that information represents, a lot of people over the last century or so have tried to pin down what makes minds special or interesting in terms of the mind's ability to represent: (The HOT is about the LOT, therefore consciousness. We are self-representing in some suitably integrated way, therefore consciousness. We are embodied representational systems, therefore consciousness…) They have often tried to characterize the sorts of systems that produce and use representations or models. (Of particular interest are systems that have a model of themselves.) There is even a school of thought called representationalism that holds, roughly, that conscious states are conscious only insofar as they are representational states. That is, in order to be conscious, a mental state must represent.

This is exactly backwards. It makes the representing primary, and the consciousness a secondary effect of the representing. But representation, construed in a reductionist/physicalist framework, is a may-be-seen-as kind of phenomenon. Lots of things in the physical world may be seen as representing other things, but there is no inherent, principled sense in which any of those things definitely does represent another thing in any way that Nature is bound to respect or behave differently because of. Consciousness is a really-there kind of phenomenon, and may-be-seen-as phenomena just won't do as explanations of really-there phenomena.

Back in the old days, I had an answering machine on my home telephone. When I didn't pick up the phone, it told whoever was calling that I wasn't home right now. It was a classic, purely causal, beer-can-falling-off-a-fence-post physical system. To what extent was it really, truly, representing me as not being home right now? How much more internal state would it have to have, "modeling" the world in some special way, perhaps processing this model in an "integrated" way, before we would say that yes, it really was representing me as not being home, in any way that was relevant to these discussions? Piling on more beer cans buys you nothing, absolutely nothing, in terms of really representing, if there is such a thing.

If you found an alien artifact, could you, by reverse engineering alone, determine with certainty that it did or did not contain a model or representation? Is there a fact of the matter? In general, it is hard to find a proponent of theories that crucially involve models and representations who explains exactly what makes one thing a model of something else. Like the concept of information, the concept of "representation" is often left frustratingly vague and abstract by the people who use it as a reductive base for their theories. Some of the intuitions that lead people to ascribe such power to representation melt away if we examine the notion a little more closely. Specifically, I'd like to take a look at the distinction between a descriptive model or representation, and a prescriptive algorithm.

Descriptive vs. Prescriptive Information

A great deal is made of the fact that information represents, but this descriptive, representative sense is only half of the informational story. There is a whole other aspect of information that plays a huge role in our lives and in our theories. Information comes in two flavors: 1) prescriptive ("pick that up") and 2) descriptive ("the museum is open today"). The opcodes that comprise a computer program at the lowest level are prescriptive information (they tell the CPU what to do during a given tick of the computer's internal clock), whereas the data upon which the program operates (whether that data comes from the computer's memory or from outside, through an input device) constitutes descriptive information. Descriptive information represents (or misrepresents) something, while prescriptive information tells you to do something. If a fragment of a computer program says, "If x is greater than 43, open the pod bay doors", the fragment itself is prescriptive, while the number being examined, the x, is descriptive data. Those opcodes are purely causal, and themselves comprise absolutely everything a computer ever does. Their prescriptive nature is as blunt as that of a baseball hitting an antique vase. They just do.

In everyday conversation, we tend to think of information as primarily descriptive: it sits there, and you hold it before you and regard it: "Oh, so Bismarck is the capital of North Dakota. How interesting." But algorithms are information too ("Go three blocks, turn left at the light, pull into the Krispy Kreme drive-through and order a dozen hot glazed doughnuts."). As far as information theory is concerned, Shannon's laws, etc. don't care at all whether the information is taken as descriptive or prescriptive by the eventual receiver of the information. Any string of 0s and 1s has the same bandwidth requirements on the wire and is quantified exactly the same way whether regarded as descriptive or prescriptive, as data or algorithm.

If you find a computer file full of binary data, and you have no way of telling what the data was used for, you cannot tell whether the file constitutes descriptive or prescriptive information. There is no fact of the matter, either, if you just consider the computer's disk itself as a physical or even an informational artifact. It's just a bunch of 1s and 0s. For you to make the prescriptive/descriptive distinction, you must know what the file was intended for, and in particular, you must know a lot about the system that was supposed to read it and make use of it. Only by taking the receiver of the information into account, and looking closely at how it processes the information, can we determine whether the file constitutes data or algorithm. Does the receiving system open the file and treat it as salary records, or does it load up the file and run it as a program? Indeed, one system could treat it as a program, and another could treat it as data, compressing it perhaps, and sending it as an attachment in an email message. The choice of whether a given piece of information is prescriptive or descriptive depends on how you look at it.
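A toy Python sketch (entirely hypothetical, not any real file format) makes the arbitrariness vivid: the very same bytes can be read as a record by one receiver and run as a program by another, and nothing in the bytes themselves settles which they "really" are.

```python
# The same bytes handed to two different receivers. Nothing in the blob
# itself marks it as data or as algorithm.
blob = bytes([2, 3, 5])

def as_data(b: bytes) -> int:
    """Receiver 1 reads the bytes as a record, say three salary figures,
    and totals them."""
    return sum(b)

def as_program(b: bytes) -> int:
    """Receiver 2 runs the bytes as opcodes for a toy machine, where
    opcode n means 'multiply the accumulator by n'."""
    acc = 1
    for opcode in b:
        acc *= opcode
    return acc

print(as_data(blob))     # 10: the receiver that treats the blob as data
print(as_program(blob))  # 30: the receiver that treats it as a program
```

The prescriptive/descriptive distinction lives entirely in the receivers, not in the blob.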

Example Using Boolean AND

Consider the AND gate. An AND gate is a very simple piece of circuitry in a computer, one of a computer's most basic logic components. It is a device that takes two bits in and produces one bit as output. In particular, it produces a 0 if either (or both) of its input bits is 0, and produces a 1 if and only if both input bits are 1. That is to say, it produces a 1 as output if and only if input1 AND input2 are 1. Note that the operation of the AND gate is symmetrical: it does not treat one input bit as different from the other: 1 AND 0 gives the same result (0) as 0 AND 1. Another way of saying this is that the AND operation obeys the commutative law. The operation of the AND gate is summarized in the following truth table:

input1    input2    input1 AND input2
  0         0               0
  0         1               0
  1         0               0
  1         1               1

But now let's arbitrarily designate input1 as the "control" bit and input2 as the "data" input. Note that when we "enable" the control input (i.e. we make it 1) the output of the whole AND gate is whatever the data input is. That is, as long as the control input is 1, the data input gets passed through the gate unchanged, and the AND gate is effectively transparent. If the data input is 0, then the AND gate produces a 0. If the data input is a 1, then the AND gate produces a 1.

When we "disable" the control input, however (i.e. we make it 0), the output of the whole AND gate is always 0, no matter what the data input is. By holding the control input at 0, we turn off the transmission of the data bit. So the control input gets to decide whether to block the data input or let it through untouched. It is the gatekeeper. But (and here is the punchline) because of the symmetry of the AND gate, our choice of which input (input1 or input2) is the "control" and which is the "data" was completely arbitrary! The decision of which input is the prescriptive input telling the gate what to do with the descriptive input is purely a matter of perspective.
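The gate's symmetry can be checked in a few lines of Python (a toy sketch of the logic, not real circuitry):

```python
def and_gate(a: int, b: int) -> int:
    """Output 1 if and only if both inputs are 1 (the truth table above)."""
    return a & b

# The gate is commutative, so designating one input "control" and the
# other "data" is purely a matter of perspective:
for data in (0, 1):
    assert and_gate(1, data) == data               # control enabled: data passes through
    assert and_gate(0, data) == 0                  # control disabled: data is blocked
    assert and_gate(data, 1) == and_gate(1, data)  # and the roles swap freely
```

Either input can be cast as the gatekeeper, and the code runs identically either way.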

The prescriptive/descriptive distinction has interesting implications for those who take issue with Jackson's black-and-white Mary thought experiment by claiming that upon being released from the black and white room, Mary does not acquire any new knowledge, but rather she gains a new ability. What's the difference? She adds to the store of information in her head. She either adds to her repertoire of descriptive information (knowledge), or she adds to her repertoire of prescriptive, algorithmic information (ability). To claim that any great arguments or counterarguments about consciousness depend on it being one way or the other presupposes a real hard and fast, really-there distinction between the two, as well as our ability to tell the difference. At least from a materialist point of view, both are lacking.

Information Pokes, Pushes, or Nudges

Strictly speaking there is no such thing as representative, descriptive information - all information is ultimately prescriptive. Insofar as information has any effect on a receiver or information processor at all (that is, insofar as it is informative), it makes the processor do something. The data in an MP3 is an algorithm that commands a machine to construct sound waves that make up the music.

Think of a given piece of the information as a physical thing, say a tiny area on the surface of a computer disk that is magnetized one way or another way, indicating a 0 or a 1. If this area is to constitute information at all, it must be causally efficacious. That is, something else must do something, or not do something, or do something differently, because of the particular way that area is magnetized. For the magnetized area on the surface of the disk to be informative at all, it must make something else do something, just as a rock I throw makes a beer can fall off a fence post. This sounds pretty prescriptive. Nothing happens by virtue of information simply being itself. At some physical level, it always comes down to the information (or more precisely, the information's physical carrier or substrate) pushing something else around, forcing a change on some other physical thing. Moreover, any physical system that forced the same kind of state change on the part of the receiver would thereby constitute the exact same information as far as that receiver was concerned.

A computer does what it does because of an algorithm, or a program in its memory. This algorithm is prescriptive information. It consists of a series of commands (opcodes), and the computer does whatever the currently loaded command tells it to do. The computer itself (or its CPU) comprises the context in which the individual commands have meaning, or rather the background dispositions which determine what each command will make the computer do. The data that the algorithm processes may be considered descriptive information, but to the extent that the computer's internal state changes on the basis of the data it is processing, hasn't the data dictated the machine's state, and thus its behavior? "If x is greater than 43, open the pod bay doors": isn't x here an opcode, whose value tells the computer to open the pod bay doors or not? The "data" is either not there for you at all, or it makes you do something. It is the cue ball: it knocks into other balls and sets them on an inevitable course of motion. All data are opcodes.
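The point that "data" pushes a machine around just as an opcode does can be sketched in a few lines. This is a toy illustration, not any real instruction set; the threshold 43 and the pod bay doors come from the example above, and everything else (the function name, the state dictionary) is invented for the sketch:

```python
# A toy machine: from its perspective, an incoming word forces a state
# change whether we would call that word an "opcode" or "data".

def run(machine_state, word):
    # Treat every incoming word the same way: it dictates the next state.
    if word == "OPEN":                 # looks like an opcode
        machine_state["doors"] = "open"
    elif isinstance(word, int):        # looks like "data"...
        if word > 43:                  # ...but it pushes the machine around
            machine_state["doors"] = "open"   # just the same
        else:
            machine_state["doors"] = "closed"
    return machine_state

state = {"doors": "closed"}
run(state, 99)   # the "data" 99 opens the pod bay doors, exactly as "OPEN" would
```

Whether we file the 99 under "data" and the "OPEN" under "instruction" makes no difference to the machine; each one forces the same kind of state change.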

The prescriptive aspect of the supposedly descriptive data in a computer is obscured by the fact that data lacks the clear, stable context in which an opcode's effects are felt: the same CPU tends to do the same thing each time it is given the same opcode, whereas the effects of different data are highly dependent on the current state of the machine. Nevertheless, after the data is read, the machine's state is different because of the specific value of the data, and the machine will behave differently as a result. The machine acts differently because of this data, just as it acts differently on the basis of different opcodes in its algorithm. There is no principled natural distinction between the information that comprises the algorithm and that which comprises the "data" on which the "algorithm" operates.

All Models Are Algorithms

There are theories of consciousness that regard consciousness as a product of the interaction of a system with an internal model within itself. What sort of additional information does an internal model provide the larger system that it could not have derived on its own (given the external stimuli), and how does this additional information confer consciousness?

It seems that if we have a system that contains an internal model, we could optimize it a bit, and integrate the model a little more tightly into the rest of the system. Then maybe we could optimize a little more, and integrate a little more, all the while without losing any functionality. How would you know, looking at such a system, if it just didn't have an internal model anymore, or it did but its model was distributed throughout in such a way that it was impossible to disentangle it from the rest of the system? In the latter case, what power did the notion of the internal model ever have? The problems with thinking that there is something special about self-models are similar to those that plague HOT theories: once you separate out some aspect or module as special to the system as a whole (whether you call that thing a self-model or a higher order thought) the specialness really comes from the communications channel between that module and the rest of the system, and we are right back where we started.

Internal Models As Black Boxes

Let us assume a conscious system that has a distinct model (either a model of itself, or a model of the world, or a model of the world including itself - whatever kind of model deemed necessary to confer consciousness). In good functionalist fashion, let us denote this in our schematic diagram of the whole system with a black box labeled "model". You ask it questions, and it gives you answers. Between the "model" box and the rest of the system is a bidirectional communication channel or interface of some kind. This kind of thing is often denoted in schematic diagrams as a fat double-ended arrow (like this: ⇔) connecting the "model" box and the box or boxes representing the rest of the system. Think of it as a cable, perhaps a very fat cable, capable of carrying as much information as you like. Let us call this interface, the cable itself and the conventions we adopt for communicating over it, the API (for Application Programming Interface, a term borrowed from computers). This API may be quite complex, perhaps astronomically so, but in principle all communication between the rest of the system and the "model" box can be characterized and specified: the kinds of queries the rest of the system asks the model and the kinds of responses the model gives, and the updates from external stimuli that get fed into the model.

People who believe in these sorts of theories generally claim that the rest of the system is conscious, not the model itself. Because, by hypothesis, all communication between the (purportedly conscious) rest of the system and the model takes place over the API, the consciousness of the rest of the system comes about by virtue of the particular sequence of signals that travel over the API. As long as the model faithfully keeps up its end of the conversation that takes place over the API, the (conscious) rest of the system does not know, can not know, and does not care, how the model is implemented. It is irrelevant to the rest of the system as a whole what language the model is written in, what kinds of data structures it uses, whether it is purely algorithmic with no data structures at all except for a single state variable, or even purely table-driven in a manner similar to Ned Block's Turing Test beater. It could well be completely canned, the computational equivalent of a prerecorded conversation played back. As far as the rest of the system is concerned, the model is a black box with an interface. Let us just think of it then, as an algorithm, a running program.

Once you separate the model from the rest of the system conceptually, you necessarily render it possible (in principle) to specify the interface (API) between the rest of the system and the model. And once you do that, there is nothing, absolutely nothing, that can happen in the rest of the system by virtue of anything happening in the model that does not manifest itself in the form of an explicit signal sent over the API. Anything that properly implements the model's side of the conversation over the API is exactly as good as anything else that does so as far as any property or process in the rest of the system is concerned. All that makes the model a model is the adherence to the specification of the API. The model is free, then, to deviate quite a bit from anything we might intuitively regard as a "model" of anything as long as it keeps up its side of the conversation, with absolutely no possible effect on the state of the rest of the system.
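The black-box argument can be made concrete with a sketch (all class and function names here are hypothetical). Two "models" honor the same API: one computes its answers, the other merely plays back a canned script. The rest of the system, which sees only the signals arriving over the API, cannot tell them apart:

```python
# Two interchangeable "models" behind one API. The rest of the system
# only ever sees the answers that come back over query().

class ComputedModel:
    def query(self, q):
        # derives its answer by actual computation
        return len(q) % 2 == 0

class CannedModel:
    def __init__(self, script):
        self.script = list(script)   # a prerecorded conversation
    def query(self, q):
        return self.script.pop(0)    # just plays back the next answer

def rest_of_system(model, questions):
    # Everything the (purportedly conscious) rest of the system learns
    # from the model arrives over this one channel.
    return [model.query(q) for q in questions]

qs = ["red", "blue"]
a = rest_of_system(ComputedModel(), qs)
b = rest_of_system(CannedModel([False, True]), qs)
# a == b: identical signals over the API, wildly different "models"
```

As far as the rest of the system is concerned, the canned playback is exactly as good a "model" as the computed one, which is the point: all that makes the model a model is adherence to the API.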

As any model-based system can be fairly characterized in this way, I have a hard time seeing what intuitive pull this class of theories has for its fans. Remember, what we are looking for is something along the lines of "blah blah blah, the model gets updated, blah blah blah, and therefore red looks red to us in exactly the way that it does." What magic signal or sequence of signals travels over that API to make the system as a whole conscious?

In information systems as traditionally conceived, there are no models, no representations, no data. It is all algorithm. As engineers, we may find it useful to draw a line with a purple crayon and call the stuff on the left side "data" and the stuff on the right side "algorithm" or "processor", but this is not a principled distinction. It is ad hoc, a may-be-seen-as distinction. Any theories of mind that depend on certain kinds of "models" or "representations" being operative then degenerate back into strict functionalism, since the models they speak of turn out to be just more algorithm, just as if they were utility subroutines.

Self Reference

If all information is, at heart, prescriptive, then what becomes of reference, or self-reference in particular? Lots of thinkers have been very interested in self-reference for the last century or so, but what is so special about it? If information is prescriptive or algorithmic, then all supposed cases of referential loops turn out to be causal loops like the earth revolving around the sun, or the short computer program "start: do some stuff; go back to start". A computer routine that is recursive is one that calls itself, like the factorial calculator. Recall that, for instance, 5 factorial (written 5!) is 5 × 4 × 3 × 2 × 1, or 120. The computer program to calculate that looks something like this:


def factorial(n):
    # Assume n is a natural number!
    if n <= 1:
        return 1
    else:
        return n * factorial(n - 1)

When called and handed a particular number as an input parameter, this routine calls itself with the next lower number, which in turn calls itself with the next lower number, until finally, when the number reaches 1, it returns a 1, and the whole thing unwinds. This routine, then, is self-referential. But as far as the computer running it is concerned, there is nothing special or mind-bending about it. It neither knows nor cares that it is calling itself rather than a long series of separate routines. At each call, it just adjusts its Program Counter register to go wherever it is told to go, pushing a return address and some state onto the stack. One hundred different routines, or one hundred calls of the same routine, it makes no difference to the computer. In this, the computer is right.
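To make it vivid that the recursion is nothing special to the machine, here is an equivalent routine, hypothetical but straightforward, that computes the same factorial with a plain loop and no self-reference at all:

```python
# The same computation as the recursive factorial above, done with a
# loop. The "self-referential" structure was never essential.

def factorial_iterative(n):
    result = 1
    for k in range(2, n + 1):   # multiply 2 * 3 * ... * n
        result *= k
    return result

# factorial_iterative(5) == 120, just like the recursive version
```

Same inputs, same outputs, same causal machinery under the hood; the referential loop was a feature of our description, not of the computer.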

The Algorithmic Intuition

Where does the intuitive appeal of philosophies like representationalism come from? Part of it, I think, is the idea that the system, the processor or algorithm, can respond dynamically to the representation, the data. We have a sense that the algorithm has a certain identity, and that to the extent that it opens the door and invites data in to manipulate its own internal state, it does so under its own control. This intuition loses some of its strength when you fold the "data" into the algorithm, however. If you take the data upon which the algorithm is presumed to operate dynamically and declare it to be just part of the whole algorithm, the algorithm doesn't seem quite so dynamic anymore.

Algorithms are deterministic. Or rather, their physical manifestations are exhaustively described by the laws of classical physics. They barrel along on steel rails of causality. If you look closely enough at them, there are no options open to them, no choices whatsoever. If I knock a beer can off a fence post with a rock, it falls to the ground. There is no way even of saying that an algorithm runs correctly or incorrectly. There is no sense in saying that an algorithm is true or false. It neither represents nor does it misrepresent. It just does. (Or rather, and importantly, whoever or whatever faithfully executes the algorithm, plus the algorithm itself, just does. The algorithm itself just sits there).

The intuition that there is a certain plasticity inherent in algorithms, that they could do other things than what they do, is a mirage. If I don't throw the rock, the beer can will stay on the fence post. While it may seem that an algorithm could behave differently given different data to operate on (if x equals 23, the pod bay doors stay closed), it would also behave differently if some of its subroutines were rewritten (if x equals 86, activate the espresso maker). When people speak of algorithms and representations and look to them for the special sauce of consciousness, or anything philosophically big and fundamental, they are projecting intuitions about the mind outward into other stuff. Outside of certain limited technical contexts, the whole idea of the algorithm is an attempt to breathe life into cold dead Shannon information, made of Newtonian physics, to make it jump up and run around, to give it some inherent motive power, while denying motive power to the "data".

The intuition that an algorithm stands aloof, and regards dead data and makes choices based on it but not dictated by it, and that data and algorithm are somehow different, is an anthropomorphism. We project our own subjective, introspective experience outward. It's not wrong or silly! It only becomes silly when, in the process, we try to bleach out any trace of the source material. We do create and use representations. We experience this every moment. We, as conscious minds, have a strong sense of having a separate identity from the simulations of reality we create and tinker with in our heads. We feel that we stand back from our models, regard them, and make decisions based on them. This sense, however, isn't quite as trustworthy as it seems (as William James said, the thoughts are the thinkers) but that is the fantasy image we project onto algorithms and data (or rather, algorithms vs. data).

If physicalists want to deny qualia as fundamental, they should examine information too. They must give up the algorithmic intuition (doing vs. representing): that certain information does stuff and has any choice about what it does, and that certain other information doesn't do anything but is done to. To a physicalist, all information is purely prescriptive, deterministically so. Which is fine, but "information" then becomes either weak or (philosophically) boring, and it becomes pretty hard to say that consciousness all comes down to information (or the processing thereof).

There are descriptions in the universe. They just aren't information, in the strict, Claude Shannon, Information Theory sense. They are qualitative, all-at-once comprehensions. That is to say, information takes on its descriptive, representative aspect only when we create it, step back and take it in all-at-once, when in our minds, it is something other than a series of behavioral dispositions and is, rather, a single thing, a partless whole. This ability of ours, as I have argued, is a unique, spooky mysterious thing minds and only minds do, like seeing red. As with the redness of red, it is hard even to talk about it in precise terms, which is all the more reason to try to talk about it, being honest with ourselves about the limitations of our usual ways of talking.

If we are honest and want to limit ourselves to the reductionistic, technical language of information processing, we may only speak of prescriptive information, otherwise we are speaking loosely, metaphorically, anthropomorphically. The descriptive aspect of information is a qualitative product of minds. Representation is real, a really-there aspect of our universe, and well worth exploring, but this exploration can not even get off the ground unless we regard representation as an aspect of consciousness.


The Reality Between Our Ears

Leaving aside questions about distinguishing between self and percept, as well as questions about qualia, I'd like to step back now and say some folky things about how minds work. I hope this will not be terribly controversial (except at the very end) but will emphasize certain aspects of how the mind deals with the world from a strictly cognitive ("easy problem") point of view. My goal here is not to say anything revolutionary, but to frame what we already know and (I hope) agree on in a certain way that will help us speak more clearly about it, and maybe help us speak more clearly about things like reference, meaning, and language as well.

It is safe to say that as the evolution of our species progressed, our control system (our brain) became more and more sophisticated, eventually developing the ability to construct what we might call, however loosely, an internal model of reality. From infancy onward, I have invested a huge amount of effort building up my own personal model of reality. As a baby, confronted with the blooming, buzzing confusion of input from my senses, I began to notice regularities. I pattern-matched, latching onto these regularities, looking for them everywhere. I learned to have expectations based on the past, and I made educated guesses about the future. I hypothesized a reality out there, and some rules by which it operates, and together this whole collection of hypotheses allows me a tremendous amount of predictive power over my environment. This process is likely bootstrapped by a basic instinct we all have to look for patterns aggressively, to create such a reality model, the way a spider has a basic instinct to spin a web. It takes a lot of work to build and maintain this model, and I add to it and modify it every day.

I want to emphasize here just how broadly I am using the term "model". I want to be extremely agnostic about how this model is implemented. Specifically, I do not want to create the impression that I think that it is some neat crystalline edifice made of linked data structures or something, all indexed and self-consistent. Like a lot of things in nature, I suspect that under the hood it is rather messy. It has gaps, and it may contain contradictions. However chaotic the model may be, it does, in some sense, work.

When I mention the Titanic (the ship, not the movie) in a conversation with you, perhaps my sense of what that is has all kinds of tendrils of connotation, association, and trivia that yours does not (and vice versa). Nevertheless, the coarse-grained relational dynamics of my model of the Titanic, vis-à-vis the rest of my reality-model, correspond closely and specifically enough to their counterparts in your reality-model that we can speak about the Titanic with no confusion between us. We depend on this correspondence so completely and so constantly as to not think about it.

All the thinking I do about a thing, like my lawn mower, is done in my head. The lawn mower itself, insofar as I think about it, is in my head. Colloquially, when I talk to you about my lawn mower, we all agree that I'm talking about my lawn mower, stored in the garage. Everything I think I know, believe, or feel about my lawn mower, however, is really in that reality-model between my ears. This seems pedantic to point out (like insisting on molecules-arranged-in-a-tablewise-manner vs. a table), but it is dangerously easy to forget. In particular, there are a couple of aspects of our reality-model that bear emphasis.

It's Often Wrong

The first is that our reality-model is very often wrong. Or rather, some aspects of the dynamics of the reality-model (the way pieces of it relate to other pieces) do not correspond to dynamics in the external world, and this might lead me to make predictions that would not come true. Every day we do our best, extrapolate, interpolate, generalize, but we always jump to conclusions, and make the best inference we can from available evidence and past experience.

Even at a pretty low level, our immediate perceptions are a best-guess, based on input from the senses. This input is notoriously crappy and gappy. Whatever it is we think we directly perceive, almost all of it is (re)created in our minds (some people evocatively call this a controlled hallucination), and only a sliver of it is actually dictated by raw data from our senses. This works out for us more often than not, but very, very often our guesses and inferences turn out to be wrong.

It's Mostly Holes

The second aspect of our reality-model we should emphasize is that however much we know, there is an incomparably vaster ocean of things we don't know. As we strive to know things, and incorporate certain types of knowledge into our reality-model, we also strive to know what we don't know. I know, for instance, that I do not know the color of the house two down from mine (without looking). I know that I do not know Richard Nixon's birthday. I do know, however, the form such knowledge would take. Its shape is constrained, if not defined, by the outline of its absence. My lack of knowledge, and my knowledge of my lack of knowledge, depends on a ton of what we might call background knowledge, which serves to give it a shape, and hard edges. As such, even this kind of missing knowledge can play a role in my cognition, and contribute to the workings of my reality-model.

Jerry Fodor made a similar point in "The Elm And The Expert". He said that he can't tell a beech tree from an elm. He could easily learn, but he has never bothered to. He knows the distinction is there to be made, however, and he knows other people can tell at a glance, so he is satisfied. He can still talk about elms and beeches, without anyone legitimately accusing him of somehow failing to refer adequately.

In mathematics, there is a structure called a Menger Sponge. Take a cube, and divide it, like a Rubik's cube, into 27 smaller cubes, and remove the small cube at the center of each face along with the one at the very center: seven of the 27 in all. Now you have a sort of cube with holes, and you have reduced your original cube's total volume by about a quarter (20/27 of it remains). Now do the same thing to each of the remaining smaller cubes. Keep going, again and again, each time removing the same fraction of the remaining structure's volume. As this process goes to infinity, you are left with something that is still clearly cubelike in its structure, but has vanishingly little actual volume. It is almost entirely void, and almost zero stuff, like cotton candy.
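The vanishing volume is easy to make concrete. In the standard Menger sponge construction, 7 of the 27 sub-cubes are removed at each step, so each iteration multiplies the remaining volume by 20/27; a one-line sketch (the function name is mine) shows how quickly the stuff disappears:

```python
# Fraction of the original cube's volume remaining after a given number
# of Menger sponge iterations. Each iteration keeps 20 of 27 sub-cubes.

def menger_volume(iterations):
    return (20 / 27) ** iterations

# After 20 iterations, well under 1% of the original volume remains,
# yet the cubelike outline of the structure is fully intact.
```

The limit of this process is a structure with zero volume but an ever more intricate shape: all outline, no stuff.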

I said before that I do not want to imply any particular structure, least of all a crisp, clean one, to our reality-model, but I think that it is something like the Menger Sponge, a fractal Swiss cheese. The gaps in our knowledge are vastly greater than the knowledge itself, and those gaps are both big and small, but the gaps can still lend structure to the model. That said, I suspect that in our cognitive architecture, the distinction between void and stuff is not so sharp. There are inferences we feel very confident about, others that we know are as provisional as can be, and a large range in between.

Saul Kripke and others have used the example of the Roman orator who went by the names Cicero and Tully. The fact that there are two names associated with this one man, especially one about whom most people don't know very much, can lead to confusion. To what extent can we refer when we talk about Cicero/Tully? Can we make sense of "What if Cicero and Tully had been different men?" when we only have the vaguest sense that in our actual world, he was some famous Roman guy?

I think we can. I do not need to know the details about Cicero to know that he existed. He was a man, and so he had friends and enemies, favorite foods and a favorite color, things he was proud and ashamed of, etc. Perhaps no one on earth today knows about all these things, but we all implicitly accept that they once were there to be known. In certain contexts, those things were the defining characteristics of Cicero, perhaps even more than his famous oratory. This stuff is all included in our blank outline of "a person", including "a person who has been dead for a couple of thousand years". Like the Menger Sponge, the person in our head is mostly gaps. If we hear of Tully, and do not know that he is the same man as Cicero, we have a similar blank outline for him. We do not have to personally have access to a bundle of properties or descriptions of him, or even a single defining characteristic to have this boilerplate template in our heads, or to talk about him.

So if we have two different blank outlines, or almost blank outlines, or perhaps patchy, partially filled-in outlines, of Cicero and Tully, and then someone informs us that they are the same person, what do we do? We just have to deploy our well-practiced skill of correcting our reality-model, and merge these two ghosts into one. Sometimes this reconciliation entails throwing out some of the inferences we had made to flesh out one or the other of the object outlines. Epistemology seems to have quite a lot of variations on this theme, as when we discover that London is Londres, Superman is Clark Kent, or Hesperus is Phosphorus. We mistakenly think we have two things, we create empty (or nearly empty) placeholders for each of them in our reality-model, tentatively fill them in as best we can, then discover we should merge the two. This is just a special case of the more general problem of wrongness in the model.

We Take Our Model With A Grain Of Salt

And this brings me to my third point of emphasis about our reality-model, alongside the fact that it is often flat-out wrong and that it is defined more by its holes than its substance: as proprietors of our reality-model since infancy, we are completely used to dealing with these first two aspects. On a daily basis, we cope with vagueness, incompleteness, misremembered "facts", and outright lies. We know, in our bones, that our model is provisional, and we have an ingrained sense of humility about this. Tinkering with it and correcting it continually is just part of the cost of doing business, improving the model where we can, when it seems worth the effort.

When we communicate with language, we know that we are doing so on the basis of these fallible reality models. I know that mine is gappy and wrong, and I know that yours is gappy and wrong, and that our gaps and wrongnesses don't match up. But I do more or less assume that the broad strokes will line up, and allow us to communicate successfully. Most of our communication has these caveats built in as background assumptions, and these caveats, this humility or agnosticism about the fine details, lend our language a certain vagueness. There is no problem of vagueness in reference, because vagueness is our stock in trade.

Internalism Isn't Right, Exactly, Except It Kind Of Is

This all seems to be nudging us in an internalist direction when it comes to meaning (the idea that whenever we talk or think, we are, when it comes right down to it, talking and thinking about our own internal data structures). As I've said before, it isn't so much the case that internalism is right and externalism is wrong, so much as a question of: why would you want to talk that way? Is there some advantage in terms of truth, economy, or insight in characterizing meaning internalistically or externalistically? The externalist about representation and meaning says that my thoughts about the lawn mower are, in some fundamental sense, really, directly, about the lawn mower itself.

In contrast, the internalist says, well, that's certainly how we speak in everyday conversation, but if you want to get picky about it, my thoughts are really about each other, but some of them have the actual lawn mower in the garage as a causal antecedent, with the chain of causation involving my sense organs. As with the person who talks about molecules arranged in a tablewise manner, the internalist may be, strictly speaking, correct, but what a pedantic and cumbersome way of talking about the situation!

As philosophers, we get to define our terms any way we want. This is one of those times when we can either respect our pre-theoretical intuitions or carve Nature at the joints, as the slightly grisly cliche says. The internalist claims that our colloquial ways of talking and our everyday intuitions are just shorthand for what's really going on, which is a little more indirect and complicated than our intuitions and usage give it credit for. The internalist thinks that we are missing something important by being naive realists about meaning and reference, that there are (or at least might be) important distinctions that we would do well to remember as we explore. The basis of the claim that we should frame things in this clunky way is that there is something unique and/or mysterious about the way our thoughts interact that gives rise to, or constitutes, this phenomenon of aboutness. Whatever this is, it is analogous to, but importantly not the same as, stuff we already understand pretty well, like computation, information processing, and physics. In order to zero in on the stuff we need to figure out, we should not muddle it together with this analogous but different stuff.

If reference is of interest to a philosopher, it has to do with the way some thoughts relate to other thoughts. "Reference" and "intentionality" and "meaning" entail some unique and interesting mental happenings, above and beyond the redness of red. Like seeing red, these mental phenomena are actual, fundamental facts of the universe, and are worth exploring. This is an important part of the puzzle of the mind, the part that will allow us to put what it's like to see red together with what it means to think in the same big picture.

In contrast, the externalist is implicitly making the positive claim that whatever goes on in our heads is in no important way different from whatever goes on between the external world and our senses, and that we can (and should!) lump it all together and call the whole mess "meaning" or "representation". This claim is at best premature and a stretch, and, I believe, flat-out wrong (because of qualia and all the other stuff I've been saying). Even if you don't follow me all the way with that line of argument, we can be more precise, if a bit at odds with colloquial usage, if we construe meaning and representation in terms of the reality models between our ears, and not in terms of invisible magic meaning rays zapping throughout the universe.



Reference: Picking Out

Philosophy of language has been quite an active field for the past century or so, and understandably there is considerable overlap between it and philosophy of mind. It is hard to talk about words, sentences, and their meanings without running up against questions about concepts, and how they are created and manipulated in the mind. Likewise, it is hard to ask how the mind works without running into questions about how it manipulates symbols and how the symbols it manipulates may affect its working in turn. We may not think entirely in words, but there seems to be a strong connection between the way we think and the way we articulate.

Philosophy of language chased its tail for a while during much of the 20th century, as it was taken over by science weebs. There was a great collective effort to "naturalize" the notions of reference, meaning, and a bunch of others, which effectively means an effort to explain them in reductive materialist terms. This effort strikes me as doomed, because it is a flail in the direction of admitting that there is something mysterious about semantics, while trying to ignore the elephant in the room, namely consciousness.

Moreover, most formal investigations into semantics and meaning are infected with a naive realism about meaning, bordering on Platonism. There often seems to be a pretheoretic assumption that a given term has a True Meaning that we may only perceive partially, with our use of the term muddled by our imperfect sensory apparatus, or limited cognitive abilities, and our incomplete scientific knowledge. It is assumed that, armed with a correct philosophy of reference, we would be in a position to determine what any given term really means.

There is no "really means". It makes no sense to speak of the meaning of a term unless you know who is doing the meaning and why. What does the user of the term know or believe about the term? What about the term is important to the user and the user's audience? What is the user trying to accomplish by using the term right now? What sorts of habits and objectives are baked into the speaker's entire notion of how and why they might coin and use terms in the first place? What are the preconditions that would have to hold in order for the user and the audience to be satisfied that the term was being used with only a tolerable amount of ambiguity? Note in particular that these preconditions might differ considerably from those that you might insist upon before you granted that the term was being used unambiguously. Nor should we allow the limitations of a given language-using community's scientific knowledge to impugn their use of the term, and their idea of what the term means.

People are sloppy with their terminology. Depending on context and audience, they use terms with varying degrees of precision. Some contexts call for more precision, and so people coin new terms. Technical fields are full of specialized jargon for this reason. Then again, sometimes circles of specialists will opt instead to appropriate a common term but use it in a more restricted sense than most people do sitting around the dinner table. There is something about the phenomenon of language, though, that beguiles investigators into thinking that all language could and should be made infinitely precise. There are urgent and interesting things about language and minds, but on the way to considering those things it seems that few make it past the rocks, lured by the siren call of theories of Platonic infinite precision.

Extension

Terms are about things. "Water" refers to, is about, water. "Cat" is about a cat, or cats in general. So far, so good. The stuff out there in the world that a term "picks out", the actual cat(s) or the actual water, is called the extension of the term.

There are aspects of meaning that are not done justice by simply pointing out the extension of a term, however. Often there is, implicit in a term, not just what the term actually refers to, but how it refers to it as well. One of the most well known examples is that of renates and cordates. Renates are creatures that have kidneys, and cordates are those with hearts. As it turns out, everything that has a kidney has a heart, and vice versa. So "renate" and "cordate" both have the same extension; they both refer to exactly the same set of actual animals. Nevertheless, it should be intuitively clear that the terms do not have exactly the same meaning. One can imagine a creature that is a renate but not a cordate, or a cordate without being a renate. The terms "renate" and "cordate" have perfectly distinct meanings, and it seems like an accident of nature that they happen to coextend.

Intension and Possible Worlds

If extension is the actual stuff that a term picks out, intension is how the term picks it out. Intension is the questions a term asks the world before it decides that some aspect or part of the world is denoted by that term or not. (If all this anthropomorphizing terms themselves seems a little suspect to you, rest assured, I couldn't agree more. Bear with me.)

To capture and formalize the idea of intension, philosophers have come up with possible worlds scenarios. Renates and cordates are the same creatures in our world, but there are possible worlds in which some renates are not cordates. To put it in mathematical terms, intension is a function from possible worlds to extensions. That is, to nail down a term's intension, you let your imagination range over all possible worlds, and for each possible world, you determine what the extension of the term would be in that world. When you are done, you have the original (infinite) set of all possible worlds, and for each one, the extension of the term in that world. The resulting (infinite) set of pairings completely captures the term's intension, which comes much closer to the term's meaning than simply specifying its extension in our world. Got that?
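For readers who like their formalisms concrete, the picture can be sketched in a few lines of Python, with a deliberately tiny (and entirely invented) set of possible worlds standing in for the infinite real thing:

```python
# A toy model of extension and intension. The worlds and creatures here
# are invented for illustration; the real construction ranges over
# infinitely many possible worlds, not two.

# Each world maps creatures to their anatomical properties.
actual_world = {
    "dog": {"kidney", "heart"},
    "cat": {"kidney", "heart"},
}
odd_world = {
    "dog":   {"kidney", "heart"},
    "blorg": {"kidney"},  # a renate that is not a cordate
}
worlds = {"actual": actual_world, "odd": odd_world}

def extension(term, world):
    """The set of creatures a term picks out in a given world."""
    organ = {"renate": "kidney", "cordate": "heart"}[term]
    return {c for c, organs in world.items() if organ in organs}

def intension(term):
    """A mapping from each possible world to the term's extension there."""
    return {name: extension(term, w) for name, w in worlds.items()}

# In the actual world the two terms coextend...
assert extension("renate", actual_world) == extension("cordate", actual_world)
# ...but their intensions differ, because they diverge in some world.
assert intension("renate") != intension("cordate")
```

The point of the sketch is just the asymmetry in the last two lines: sameness of extension in one world is compatible with difference of intension across worlds.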

This talk of possible worlds has always struck me as a clunky and extravagant way to talk about why we use the terms we use the way we do. Surely when ordinary language users use a term like "renate", infinite sets of possible worlds do not actually play any role in their mental processes. I detect a whiff of Platonism - the faith that reference is something real (albeit non-physically, or metaphysically, real), something we could have theories of, theories that could be objectively right or wrong independent of our mental processes. Be that as it may, if infinite sets of possible worlds seem a bit unwieldy, hold on - it gets worse.

Moreover, talk of possible worlds often seems to assume that picking out the extension of a given term on a particular possible world is unambiguous. It is always admitted that on some worlds, a term just might not have an extension, but on the ones in which it does, there are generally seen to be no real problems picking it out, and there are no real problems telling which are the worlds in which the term has an extension in the first place. All that matters is the final answer: that crisp, neat mapping of possible worlds to extensions that defines the intension.

If possible worlds are interesting fodder for speculation at all, it is because of the ambiguous cases. Are terms defined absolutely, because of some inherent essence of the thing described? Or are terms (and concepts, for that matter) defined relationally, in terms of their functional interactions with other things? Was John Muir right when he said, "When we try to pick out anything by itself, we find it hitched to everything else in the Universe"? To the extent that we admit that our idea of what a thing is depends on its relations to other things (perhaps even, transitively, all other things) any change a possible world exhibits from our own puts the burden of proof on someone who claims that a term is directly transferable from our world to that possible world. Could there really be Pepsi worthy of the name in a world with no Coke? Most people would say "probably", but it gets tricky depending on context.

Who Is "Albert Einstein"?

In how many possible worlds is there an extension of the term "Albert Einstein"? What if there were a world just like our own, but the man we credit with discovering special and general relativity, and who adorns countless dorm room walls, was named Albrecht Eisenstein? What if there were a man named Albert Einstein who was raised exactly as our Einstein, in exactly the same family, with exactly the same genetics, but who made his living as a piano tuner, never entering the world of science at all? What if Albert Einstein discovered relativity, but was a blond Englishman? What if, in addition, his name was Edwin Chillingsworth? In how many of these worlds (and any of the others that we could come up with for hours and hours) can we definitely pick out the extension of the term "Albert Einstein"?

It depends on the kind of conversation we are having. Sometimes, even with proper names (the paradigmatic examples of what are called "rigid designators"), we are speaking more abstractly, sometimes less. Moreover, when we speak abstractly, or figuratively, we do not always carry out our abstraction along the same axes, abstracting away the same kinds of details as we might at other times, in different conversations.

Let us imagine, for example, that there is an alliance of advanced civilizations that calls itself the United Federation of Planets. This Federation never makes overt contact with a newly developing civilization until that civilization is on the verge of inventing warp drive, which would allow the civilization to explore the cosmos. In the midst of clandestinely monitoring an emerging civilization, a Federation captain might have a conversation with his First Mate in which he asked, "Have they had their Albert Einstein?" This might be a slightly awkward way to phrase the question, but nevertheless it would be reasonably unambiguous, and the First Mate could answer "yes" or "no", perhaps following up with some detail as to the exact state of the civilization's scientific development. Obviously, the captain was speaking somewhat abstractly. He did not mean to ask if the civilization had produced a wild-haired, slightly comical man born in 1879 in Ulm, Germany. If the planet being watched was populated with gelatinous green blobs that communicated through their highly developed sense of smell, and had no ears or eyes, the First Mate could still perfectly truthfully answer "yes" to the captain's question. The captain is interested in certain of Einstein's characteristics, but not others.

On the other hand, what if Mileva Einstein (Einstein's first wife) found herself sucked out of our universe through a wormhole and ended up on the bridge of our Federation starship. Once it was clear that she had no hope of ever returning to her own world, she might ask, "Does this world have an Albert Einstein?" She would not take yes for an answer if the Albert Einstein being referred to was a gelatinous green blob that had discovered relativity. She might very well, however, take yes for an answer if the Albert Einstein were our piano tuner. She is also speaking abstractly, but she is abstracting along different lines than the Federation captain is. Both abstractions are perfectly valid, in their respective contexts.

So it is not enough to think that we may speak of something either abstractly or specifically. It is not even enough to see that we may speak more or less abstractly, along a continuum. In different contexts a term may be abstracted along different lines, in a continuum, holding different properties as essential. That is, we cannot even talk about speaking abstractly, even allowing it to be a matter of degree rather than admitting discrete states, unless we know who is doing the abstracting and what their interests are, what they consider essential properties of whatever it is they are talking about, and what they assume their audience will consider essential properties.

Ours is the only world we are forced to deal with, and it quickly becomes clear if someone is flat out using a term to refer to something that other people would not use the term to refer to. But as soon as we enter the realm of possible worlds, we open the door to legitimate disagreements, for a given world, as to what constitutes the extension of a given term. Once we start hypothesizing in this way, it is often by no means obvious whether the extension of a term exists, or exactly what its extension is in a given world. There may be no way, even in principle, of answering these questions absolutely, depending on the context of the usage, and depending on who the speakers and listeners are, and what their interests in communicating are. It is these sorts of inherent ambiguities that possible worlds scenarios should get us talking about, but which most possible worlds thought experiments ignore. One of the best known such thought experiments is Hilary Putnam's Twin Earth.

Putnam's Twin Earth

In his widely cited paper "The Meaning of 'Meaning'" (1975), Hilary Putnam argues against the sort of internalist characterization of meaning that I argue for. Putnam's most memorable example is a possible world scenario involving a hypothetical Twin Earth. Twin Earth is just like our Earth, perhaps even including a twin me and a twin you, with one exception: on Twin Earth, the substance that they call "water", while drinkable, odorless, transparent, and in all other "superficial" ways identical to our water, is really not made of H2O. It is instead made of some other chemical compound, which Putnam abbreviates as XYZ. The question that presents itself immediately, of course, is whether or not XYZ is really water.

Putnam flatly asserts that it is not. If water is H2O, then the extension of the term water is the set of all quantities of H2O, anywhere in the universe that they occur, and nothing else. Anyone who uses the term water in such a way that it has a different extension is simply wrong, or is essentially speaking a different language than English as it is spoken on Earth. Putnam's main point is that, as he put it, "meaning ain't all in the head". My twin and I may be in identical mental states as we use the term water, but we mean different things by virtue of the fact that our respective uses of the term water have different extensions. For Putnam, the meaning of a term depends crucially on its extension.

Putnam also says that before about 1750, no one knew that water was H2O, even though it really was. If it turned out that some, but not all, "water" on Earth was really XYZ, it would thus turn out that people who had referred to quantities of XYZ as water (the pre-1750 people) were wrong all along. Putnam claims that the usage of pre-1750 speakers of the term "water" to denote XYZ would be retroactively invalidated by future scientific discoveries, even though they lived and died in a community of speakers, listeners, and readers who used the term with unanimous and unambiguous (to them) agreement as to its meaning. I find this claim downright bizarre.

Usage is right. Usage wins. All language is folk language. All language is slang.

Water is a cluster concept - a collage of properties, memories, associations, nuances, connotations, descriptions, expectations, and "scripts" or algorithms for dealing with particular types of watery situations. All of the elements of this collage tend to be correlated in our world, so we draw a line around them with a purple crayon and slap a label on them, water, and go about our lives. We don't have to consider the relative importance of the different elements of the collage (in terms of being defining characteristics of the collage) until some clever philosopher contrives a fanciful thought experiment, and asks us to consider the collage if one of its elements were removed or changed.

In Putnam's thought experiment, the element that is swapped out is the fact of water's microphysical constitution, a fact that most of us learned in high school but which has little impact on our day-to-day lives. I suspect that many of our concepts are loose aggregates in this way, and that because their separate components or properties tend to be correlated in our experience, we assume that the entire cluster is much more tightly integrated than it necessarily is.
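If it helps, the cluster-concept idea can be caricatured in code. The properties, weights, and threshold below are all invented; the point is only that membership in the cluster is graded, and that how much weight microphysics gets is our decision, not nature's:

```python
# A rough sketch of a "cluster concept": a term as a weighted collage of
# properties rather than a strict definition. All weights are invented.

WATER_CLUSTER = {
    "drinkable":   1.0,
    "odorless":    1.0,
    "transparent": 1.0,
    "is_H2O":      2.0,  # how much microphysics matters is our choice
}

def resembles_water(candidate, threshold=0.6):
    """Does a candidate match enough of the cluster to count as water?"""
    total = sum(WATER_CLUSTER.values())
    score = sum(w for prop, w in WATER_CLUSTER.items() if candidate.get(prop))
    return score / total >= threshold

earth_water = {"drinkable": True, "odorless": True,
               "transparent": True, "is_H2O": True}
twin_xyz    = {"drinkable": True, "odorless": True,
               "transparent": True, "is_H2O": False}

assert resembles_water(earth_water)
# Whether XYZ counts depends entirely on the weights and threshold we pick:
assert resembles_water(twin_xyz)                     # with these, yes
assert not resembles_water(twin_xyz, threshold=0.9)  # with stricter ones, no
```

Notice that nothing in the sketch tells us the "right" weights or threshold; that is exactly the question Putnam's thought experiment forces, and exactly the question I claim has no answer apart from our purposes in using the term.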

How many things could turn out to be different about water before you really felt that you could no longer call it "water"? Do you know how heavy water is? What if it were a hair heavier than you thought or a hair lighter? What if it had some magnetic properties you had somehow managed to avoid hearing about until right now? What if you just read that in certain fields, generated in high-energy physics laboratories, water turned orange and viscous like maple syrup? These things might surprise you, but they would hang like Christmas tree ornaments on the core concept "water".

Other, more abstract concepts are more tightly integrated in our minds. For instance, there are no superficial properties of the concept "three". There is not a thing you know about the mathematical concept of three that you could change without inarguably wrecking the whole thing. If you change a whisker on three, it just can't possibly be three anymore. Water might glow in the dark (but only in the southern hemisphere during a lunar eclipse) and possibly still be water, but a number that is exactly like three but not prime just isn't three.

Because we on Earth have only ever been exposed to water as H2O, we have not had to consider the possibility, but perhaps we have a "big tent" concept of water. Maybe water is multiply realizable, like the term building. Buildings, after all, get to be buildings by virtue of their use, their functional characteristics, but can actually be constructed out of a great many things. We think of water as being H2O, because that is the only kind we have run up against, but maybe water made out of XYZ would not faze us.

On the other hand, we have strong intuitions that what something is made of, even if we can't see it and have no direct evidence of it without sophisticated equipment, has a lot of authority in deciding what it really is. So maybe XYZ isn't water after all, and the microphysical constitution element of the collage trumps all the others. I don't know, and neither does Hilary Putnam. The question is a sociological one, not a philosophical one. We could send a colony to Twin Earth, give them full knowledge of the chemical difference between Earth water (H2O) and Twin Earth water (XYZ), and let them go for a generation or two, and check back to see if they call both substances water or if they have come up with another term for the XYZ kind of water. Maybe they all use the term water for both kinds of stuff, but every now and then an annoying pedant among them corrects people, the way some people tend to compulsively point out split infinitives. Maybe both H2O and XYZ get to be called water in everyday conversation, but the scientific journals use some long Latin names for the chemical formulae on those rare occasions when they need to differentiate between the two. Whichever way they go, there's your answer.

We cleave our concepts along lines that are important to us. Microphysical constitution is important to us, so it gets a relatively high ranking. We have found it useful or satisfying in some way to let this criterion determine the extension of water. We have been told a very plausible physical story about the world around us, one involving atoms and molecules, and we believe it (for good reason). So when we make distinctions among the things in our world, we tend to give credence to distinctions rooted in this story. The point is that any authority or importance microphysical constitution has in determining whether something is water or not derives from our goals, rules and conveniences, and not from any immutable natural laws or any Platonic Meaning Of "Water".

Water, as it exists out there in the world, is H2O. Water is made of H2O; water, the substance, just is H2O. But this scientific fact about the world does not mean that the essential meaning of our term "water", or our concept of water, is H2O. Whatever role the water concept plays in our cognitive economy, whatever associations and relations it may have with other concepts, may be pretty unaffected by the facts about water's microphysical constitution. In terms of how I use the concept of water in my everyday life, and how I use the term "water", the fact that it is made of H2O may well be a rather obscure piece of trivia. To assume that the reductive taxonomies of the hard sciences map precisely to our cognitive structures is scientism, pure and simple: "Now that we've figured out the science, we can finally refer correctly!". Meaning, as we create it in our minds, might not work like that.

Saul Kripke

Saul Kripke, in a series of lectures collected in "Naming and Necessity" (1972) notes that at some point scientists figured out that whales are not fish, and that is really the right way to talk about it. They did not change the standard usage of the words "whale" and "fish"; they corrected the standard usage. Moreover, most reasonable people at the time would quickly acknowledge this, upon being told of the biological details involved. This is because, as Kripke says, an interest in natural kinds was built into the original enterprise of classification. When people coin and use terms, they like to think that they are thereby distinguishing fundamental types. Distinctions made in terms of our current best story about what it means to be a fundamental type are ones we like to formalize in our language. Right now, for most of us, that story is the one about microphysics.

Kripke defends exactly the sort of Platonic understanding of meaning that I argue against here. His main target is what he called the Frege/Russell understanding of meaning, which he characterizes as identifying a term with a bundle of descriptive properties. I said above that water is a cluster concept. Kripke says that Frege and Russell would agree, and they would identify "water" with the cluster. That is, to Frege and Russell, the term "water" is just a shorthand for that cluster of properties. A consequence of this, according to Kripke, is that if some of the properties in the cluster turn out to be invalid, the whole term must be thrown out. Kripke's take on Frege/Russell semantics is that the cluster does not have one of those clauses that lawyers stick into contracts saying "even if some clause herein is found to be invalid, the rest of the contract is still in full effect."

One of Kripke's examples involves gold. One of the properties of gold is that it is a yellow metal. According to Frege/Russell semantics (as characterized by Kripke), this is a definitional property of gold: it is one of the things that makes gold gold. What if, due to some highly implausible optical illusion, it turned out that gold was blue, and had been blue all along, but we had only thought it was yellow? Kripke rightly points out that we almost certainly would not say that since gold had been defined (among other things) to be a yellow metal, this new discovery means that gold does not exist, and we have some new blue metal in its place. Rather, we would just say that it turns out we were wrong, and gold is blue, not yellow.

Kripke says that when we link a term to a cluster of properties, we are not identifying the term with the cluster. Rather, we are fixing a reference with the cluster. When we coined the term "gold", we referred right through the superficial properties by which we identified gold, to the actual thing or stuff that (as it were) lay behind those superficial properties. Any of the superficial properties could thus turn out not to be actual properties of the stuff at all, and that would not affect our reference. Stretching the point a bit (but not too much - he produces some pretty compelling examples), Kripke suggests that all of the properties in the cluster could be not real properties of the referent, and the reference would still hold. We may use the cluster of properties to identify the thing referred to, but it is implicitly understood by all users of the term that the properties themselves are somewhat provisional, that the important thing is whatever it is that we (for the moment, anyway) believe possesses the properties. The properties are not the thing itself, but just a way of pointing out the thing.

This is a good example of the Platonism I spoke of earlier. The properties are the shadows on the cave wall, pointing in the direction of the reality that lies behind, or beyond the (mere) superficial cluster of properties. Kripke confronts head-on my claim that the coiners and users of a term ought to have the final say in deciding what counts as being picked out by that term. He illustrates his point using the common example of Hesperus and Phosphorus.

Hesperus And Phosphorus

"Hesperus" and "Phosphorus" are the terms the ancient Greeks used to denote the evening star and the morning star, respectively. Although the ancient Greeks (before Pythagoras, anyway) did not know it, both were actually the single object we now call the planet Venus. Kripke says that Hesperus and Phosphorus just are Venus, and always were from the moment the terms were coined. There may be worlds in which Venus does not exist, but there is no possible world in which Hesperus and Phosphorus are different objects from each other, or anything but the planet Venus.

Now I can imagine a possible world in which there are two distinct objects in the sky. Let us call them (with apologies to Dr. Seuss) Thing 1 and Thing 2. I bet I could arrange this world in such a way that if we were to teleport the ancient Greeks to that world, they would accept that Thing 1 is Hesperus and Thing 2 is Phosphorus. We should think long and hard before we say that the Greeks are simply wrong to call them that. They coined the terms, after all, to make distinctions that were important to them in their lives. They lived and died happily in their use of those terms. They used them with perfect (as far as their purposes were concerned) unanimity and specificity as to their meaning. I think that this gives them a fair amount of authority in deciding what the terms mean, and if they decide that Thing 1 is Hesperus and Thing 2 is Phosphorus, you had better make a very good case that they are wrong.

It is not enough to point out that the ancient Greeks' scientific knowledge was wrong or incomplete. That is not what is at issue here. They would probably have changed their terminology if they had figured out that Hesperus and Phosphorus were both Venus. But for now I am interested in the ones that never did know that, and their use of their terms that they invented to make sense of their world as they experienced it and thought about it. They used the terms, and the terms had meaning for them. How did this meaning work?

Kripke says that the Greeks had ways of identifying Hesperus in the sky, and ways of identifying Phosphorus. But these clusters of properties, these ways of identifying them, are not what Hesperus and Phosphorus were, even to them. By coining the terms, the Greeks were fixing a reference to Venus, even though they did not know it at the time. In effect, they referred right through the properties by which they identified Hesperus and Phosphorus, to the actual thing behind them, namely the planet Venus.

Kripke's arguments have some intuitive appeal. But rather than argue about whether the Greeks were really using "Hesperus" as shorthand for a bundle of observed regularities in the sensory input they received from their environment, or they were really fixing a reference to Venus, I'd like to take a step back and ask: on what basis could either claim be right or wrong? By virtue of what, exactly, can Kripke say that the Greeks were fixing a reference rather than identifying a cluster of properties?

When the Greeks used the term "Hesperus", did they thereby instantly pick out something several light-minutes away, and if so, does this process of picking out violate relativity theory by traveling faster than light? Could we verify or disprove Kripke's claims by building a device to detect the invisible meaning rays that connect a user of the term "Hesperus" to Venus? Of course not. Reference is not an actual physical process that happens in the real world. So if reference is not a process of physical causation, what is it? It is nothing. Nothing, that is, except some (admittedly mysterious) stuff happening in the mind. If you hear me use the term "water" (more physical causation, involving vocal cords vibrating, waves of pressure moving through countless air molecules, pushing on an eardrum, etc.) then I induce some stuff to happen in your mind. Some of this mental stuff may include certain "raw feels", expectations, equivalence relations and tests, and who knows what all else. But it is mental stuff, in the mind only.

The only real questions about semantics concern what minds do under the influence of terms, both internally and externally generated. Put another way, once God created all the physical facts of the universe, as well as the facts about consciousness (or, depending on your outlook, including the facts about consciousness), there was no more work for Him to do to create all the facts about reference. Except insofar as it reflects something about how minds work, reference is an explanatorily useless concept. Moreover, I see no reason to think that it constitutes any kind of phenomenon in need of explanation beyond straightforward physical causation (except, again, insofar as it is a product of conscious minds, in which case it is very much in need of explanation, as are all conscious phenomena). So if reference is not a physical phenomenon, and does not even supervene on physical phenomena (reference travels faster than light, after all), and reference is explanatorily useless and does not itself constitute an explanandum worthy of the name, how is it that anyone could have a theory of reference that they claimed was "right", and that other theories were "wrong"?

What does Kripke himself cite as the final authority to back up his claims about fixing references? He produces some good examples (like the blue gold described above) that incline us to think that his claim about "fixing a reference" accords with our intuitions about the way reference ought to work. Is this enough to convince us that reference really does work that way, though? Ultimately, Kripke seems to think that his particular Platonic notion of reference goes through because we want it to. Perhaps it isn't so much the case that Kripke thinks that this Platonism is objectively true of the universe, but rather that it holds true because all language users are Platonists at heart. As Kripke puts it, a desire to classify things into categories of natural kinds was built into the original enterprise of language use. We all go about our lives knowing that whatever clusters of properties we use to identify things are somewhat ad hoc, and subject to revision if we come across evidence that the underlying reality is different than what we thought it was.

When phrased this way, assuming I haven't misunderstood and/or misrepresented Kripke, his arguments are not so different from mine. This reference-fixing, the Platonism, is not an actual feature of the universe; it is a fact about how our minds work, and our needs and desires with regard to language construction. We want to classify the world in certain ways, so we build that imperative into our notions of reference. The final authority for deciding that water is really H2O, then, is our goals and intentions in using language in the first place, and that's why the Greeks were really referring to Venus even though they didn't know it.

Unfortunately, I think this charitable reading does misrepresent Kripke. While he does talk about our desire to classify things in a certain way, it is pretty clear from the absolute way in which he phrases his claims that he thinks of reference as a really-there, actual fact of the universe sort of thing, in a robustly externalist way. It is necessary that Hesperus is Venus, and it is necessary that water is H2O, and the Greeks would be wrong to call my Thing 1 Hesperus, not because of caveats and codicils they had written into their original charter establishing the goals and rules of their particular linguistic enterprise, but simply because they would be absolutely, objectively wrong, and that's that.

Modes Of Presentation

Sometimes the notion of modes of presentation is invoked to solve problems like the Hesperus/Phosphorus situation. The idea is that while Lois Lane knows that Superman can fly, it would surprise her to discover that Clark Kent can fly. But Clark Kent and Superman are one and the same person (that is, the term "Superman" and the term "Clark Kent" have the same extension), so in some sense the claim that Superman can fly and Clark Kent can fly should convey exactly the same information. They both make the same claim about the same individual. To resolve the apparent conflict, it is argued that any given claim must be understood under the proper mode of presentation. Superman and Clark Kent may in fact be the same collection of molecules, but facts about them are subject to their mode of presentation.

I find talk of modes of presentation very fishy. As far as I can tell, "modes of presentation" is just a way of covering for incomplete or incorrect information. Lois Lane knows that Superman can fly but would be surprised to find that Clark Kent can fly because she walks around with an erroneous model of reality in her head in which Superman and Clark Kent are two distinct individuals. She has drawn incorrect inferences about the world. She has, in fact, been deliberately and systematically deceived by the individual who is both Superman and Clark Kent. Hundreds of issues of the comic over the years have been devoted to the elaborate machinations he employs in order to lie to Lois.

In the same way, sometimes you read about Pierre, who has read that London is a beautiful city, one he would like to visit one day, but who once had to take a business trip to an awful, drab and smoggy place called Londres. We are told that Pierre has been exposed to the same city in two different modes of presentation. I prefer to say that Pierre's model of the world is simply wrong. He thinks there are two cities, when in fact there is only one. Once he has this one incorrect "fact" in his reality model, he fleshes out his placeholder templates for these two cities with a whole lot of provisional details, or knowledge of the fact that the details are missing, and he bases his expectations, desires, beliefs, etc. on this incorrect model of the world. Maybe someday he will correct the mismatch between his internal model and external reality, maybe not. Either way, there is nothing deeply mysterious about any of this.

Any problems in thinking about these situations stem directly from the intuition of the invisible magic meaning rays that connect our thoughts and references with the outside world - the idea that reference is exclusively or even primarily some kind of instantaneous connection between something in our thoughts (or Lois Lane's thoughts or Pierre's thoughts) and the outside world. I do not know exactly what reference is or how it works, but if it is to have a precise meaning at all in the sense of being philosophically interesting or useful, it must be defined as a relationship of some kind between thoughts. Lois Lane's term "Superman" refers to a Superman data structure (or, if you prefer, "concept") in Lois's mind. There is nothing problematic in saying that for Lois, the claim that Superman can fly and the claim that Clark Kent can fly convey very different information because for Lois, the data structure "Superman" is simply a different one than the data structure "Clark Kent". She formed both by drawing inferences from lots of perceptual experiences she had. The data structures then contribute to her expectations of the kinds of perceptual experiences she is going to have in the future.
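The point about two distinct data structures can be made concrete with a toy sketch. Everything here is invented for illustration (the field names, the "surprise" test); the only claim it encodes is the one above: one individual in the world causes two separate concepts in Lois's head, and the two claims update different structures.

```python
# One individual out in the world...
the_individual = object()

# ...caused two distinct concepts (data structures) in Lois's mind.
caused_by = {"Superman": the_individual, "Clark Kent": the_individual}

lois_concepts = {
    "Superman":   {"can_fly": True, "wears_glasses": False},
    "Clark Kent": {"can_fly": None, "wears_glasses": True},  # None = unknown to Lois
}

# A claim is surprising to Lois when it conflicts with (or is absent
# from) the data structure the term refers to *in her head*.
def surprising_to_lois(term, attribute, value):
    return lois_concepts[term][attribute] != value

print(surprising_to_lois("Superman", "can_fly", True))    # False: no surprise
print(surprising_to_lois("Clark Kent", "can_fly", True))  # True: a surprise
```

Both terms trace back to the same individual via `caused_by`, yet the claims carry different information for Lois, with no modes of presentation required.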

The Contents Of Our Thoughts

An idea closely related to invisible meaning rays and Platonic Meaning is that of the content of our thoughts. Many writers use the term with confidence that it has meaning, then go on to spend a lot of effort trying to analyze it and figure out what the content of our thoughts is, or whether content is narrow (dependent on one's internal state) or broad (dependent on one's state plus the state of the world). It always seems to go without saying that there is some fact of the matter. The content of a thought is a lot like the extension of a word. It is whatever the thought is "about". I find the term at best to be a strong pretheoretic nudge in a particular direction, and at worst grossly misleading.

I may have a Honda Civic, i.e. a vehicle. If I put a cake in the Civic, then the cake constitutes the contents of the vehicle. I could have put the cake in a different vehicle, in which case that other vehicle would have had the same contents that the Civic now has. Or I could have put some old newspapers in the Civic, in which case the same vehicle would have different contents. The vehicle is blank, empty, until I put some contents into it. These are the sorts of images and relationships we drag into play as soon as we invoke the highly loaded term "content". I have thoughts, that is all. As far as I can tell, I have no separate "contents" of those thoughts.

Putting Meaning Back In The Head

Extension is the stuff in the universe that a term "picks out". Of course, terms do no such thing. With apologies to the National Rifle Association, terms don't pick things out, people do. Extension seems like a reassuringly concrete idea: the extension of the concept of water is a set of actual molecules out there in the actual world. But extension is not so clear-cut. Putnam allows that determining extension requires an equivalence relation. We cannot specify all the occurrences of water on Earth without having a way of saying "all the stuff that is equivalent to this stuff here in this glass". This equivalence relation, the criteria we use to decide if something is water or not in various real or imaginary scenarios, is the intension. Extension is supposedly concrete, while intension is rather more abstract (remember that intension is a function that maps possible worlds to extensions on those worlds).

But you can't get to the extension without going through the intension. Thus extension is itself something of an abstraction: we can never, in practice, enumerate all the molecules of water in the universe, so we can never actually pick out the extension of the concept of water. We are always at a certain remove from anything's extension; all we really have at our immediate disposal is intension. All we can really do is talk about the general kinds of things we would consider water. What we really are talking about when we use the phrase "the extension of water" is a bunch of tests we can apply to different situations, ways of applying some equivalence relation. Importantly, we apply those tests, we pick out the water. By itself, a term just sits there.
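The relationship can be rendered as a small sketch. The worlds, samples, and formulas below are invented stand-ins; the point is structural: the intension is an equivalence test we apply, and "the extension" is nothing over and above the result of running that test on a world's contents.

```python
# Toy worlds: each is just a list of samples with a chemical makeup.
worlds = {
    "actual": [{"name": "glass_contents", "formula": "H2O"},
               {"name": "puddle", "formula": "H2O"},
               {"name": "rock", "formula": "SiO2"}],
    "twin_earth": [{"name": "glass_contents", "formula": "XYZ"},
                   {"name": "rock", "formula": "SiO2"}],
}

# The intension: the equivalence test itself, a function we apply.
def water_intension(sample):
    return sample["formula"] == "H2O"

# The "extension of water" on a world is reached only *through* the
# intension - by running the test on everything in that world.
def extension(intension, world):
    return [s["name"] for s in worlds[world] if intension(s)]

print(extension(water_intension, "actual"))      # ['glass_contents', 'puddle']
print(extension(water_intension, "twin_earth"))  # []
```

Note that `extension` never hands us the stuff directly; it is defined entirely in terms of applying the test, which is the text's point about intension being all we have at our immediate disposal.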

How do we know about water's microphysical constitution, anyway? Most of us simply read it in a book or were told it in school and accept it. Some of us ran tests with instruments. Originally, sometime after 1750, someone ran such tests, and inferred the microphysical constitution of water from the results of those tests. But the results themselves, the raw data, are functional properties of water, facts about how water behaves in different circumstances. These sorts of properties are no different in kind than the results of the "tests" I run when I smell water, dip my hand in it, taste it, etc. The fact that in one case the instrumentation involved was built by people, and in the other case the instrumentation consists of devices I was born with (tongue, fingers, eyes, etc.) does not make any difference in terms of the type of property of water we are talking about. For Putnam's Twin Earth thought experiment to go through, there must be at least some "superficial properties" of H2O and XYZ that differ. Otherwise, how would any scientist ever have told the difference? At some point, if you feed H2O into a mass spectrometer you get one result, and if you feed XYZ in, you get a different result. Different raw data equals different "superficial properties", just as much as if H2O and XYZ tasted different.

I suppose someone could still insist, for the sake of the argument, on hypothesizing a substance that behaved exactly like H2O as far as current science was able to determine, but which really was not H2O. I could take the standard cop-out that people sometimes take with thought experiments and demand details. I guarantee that no one could possibly specify such a situation at any satisfying level of granularity.

But the standard cop-out would lose a larger and more important point. It is in principle, literally nonsensical to speak of something that behaved exactly like H2O, and wasn't really H2O. As I and lots of others have pointed out, science doesn't really claim, at heart, to tell us what is really going on out there in the world. It only specifies a bare schema, a circularly defined pattern of functional dynamics, but it is silent about what is doing all that functioning. To act exactly like an electron is to be an electron. There is no such thing, by definition, in principle, as something that acts exactly like an electron but really isn't an electron. By the same token, there is no way something could behave exactly like H2O but somehow not be H2O.

When it comes right down to it, our relationship to the outside world is entirely functional. That is, we know everything we know about the world because of the world's dispositional properties, its behavior. Water is as water does. There simply is no essence of water that does not manifest itself functionally, at least none we could ever know, even in principle. Any time we speak of reference with regard to something out there, we are talking about reference to a bundle of functional dispositions. This is functionalism turned on its head: it is not the mind that must be understood in functionalist terms, but the world. It is incoherent to speculate that XYZ and H2O do not differ in at least some "superficial" properties. The microphysical constitution that Putnam regards as the sole determinant of true wateriness is a story that we inferred from various superficial properties.

Now I happen to like that story. It is remarkably powerful and parsimonious in its ability to link all kinds of phenomena in the world, confer cognitive power upon us, organize our mental economy efficiently, and ultimately, help us invent microwave ovens and rocket ships and all sorts of other things. But it is not the only imaginable story.

We should steer clear of the assumption that the pre-1750 people used their rough and ready conception of water only provisionally, and that they were waiting for science to tell them about water's microstructure so they could be more precise. Pre-1750 people, whether or not they had ever heard of Aristotle, were basically Aristotelians. They already knew the elemental constituents of water - namely water. Water was simply one of the basic kinds of stuff their world was made of, and most people didn't question whether or not water might be made of anything still more basic. Their understanding of science was wrong, but their ability to refer was working just fine.

What if there were a prescientific tribe of people somewhere that had two words for water? "Water" referred to the water from the river, which brought life and was good and blessed by the gods, but "shwater" referred to the evil water from the spring that was cursed. No amount of explaining that water was chemically identical to shwater would make them change. Microphysical constitution is just an unimportant property to them compared to the essential goodness or evil of the water/shwater. The goodness or evil determines what the substance "really is". Perhaps they are not prescientific, and they understand about chemistry and H2O, but still hold their religious beliefs, with full acceptance that there is no empirical basis for them. They have chosen a different property, a different element of the collage, to define the essential nature of water/shwater.

Lore has it that the Eskimos have 100 words for snow (the actual number seems to vary a lot depending on where you read this old chestnut). Let us imagine that one of their 100 words is spelled and pronounced exactly like our word "snow". This is like the situation with the pre-1750 people calling both XYZ and H2O "water", only with us playing the part of the pre-1750 people, riding roughshod over what to others (the Eskimos in this case) are important distinctions. We aren't right and the Eskimos aren't right. We all just make the distinctions that are important for us to make, and we don't waste time coining a lot of extra terms to allow us to split hairs we don't have to split. A term is only as precise - can be only as precise - as is necessary to make the discriminations of interest to the community of users of the term. I (or my culture) define terms in the interests of setting up my linguistic palette to get the maximum cognitive or communicative bang for the buck. There is no right or wrong answer as to the narrowness or breadth of my definition of the term "water".

If you ask me as an English speaker if XYZ counts as water, I may think for a moment or two then give you my opinion, which I made up just then. I may then give you arguments for my opinion, that you may or may not accept. My opinion may or may not be in accord with that of the majority of the rest of my linguistic community. It may or may not even be in accord with the dictionary definition of the term "water". But my answer is still just something I made up. Of course, that is what all language ever is - at some point, someone just makes stuff up, and other people adopt it in their speech. If, on the other hand, you ask me as a philosopher if XYZ really counts as water, I'm afraid I would have to ask you to rephrase the question, because as stated it is too loaded with presuppositions to admit a yes/no answer.

There is some stuff out there in the world (water), and our interactions with it have led us to attribute some "superficial" properties to it. We also have a story in our minds, an explanatory framework that we have found to be very useful (our current physical theories about atoms and molecules and such). Some of this stuff's superficial properties have led us to infer that it fits comfortably into a particular place within this explanatory framework. The success of a particular scientific theory or another does not absolutely (and retroactively!) determine meaning. Whenever we have a collage of data (superficial properties), we infer a story to bind it all together. The story is the purple crayon we use to demarcate the collage. It is this story that we cling to as the determinant of meaning, the crucial defining characteristic of each of our concepts. It determines the equivalence relation, the intension, that in turn determines our tests for inclusion in or exclusion from the extension. This story, and thus meaning itself, is in the head.

The main point here is that the story about molecules and such, the explanatory framework, is entirely in our heads (although there is a strong likelihood that there are things out there whose dynamics map nicely to this framework). We cannot say what anything "really is" beyond where it fits into our explanatory frameworks based on its observed "superficial" properties, which is to say, based on certain sensory experiences we have had. Speculation to the contrary is the kind of pursuit that gives philosophy a bad name.

So what is going on in our minds when we use the term "water", either saying it, hearing it, or thinking it? That is the million dollar question. A very interesting question, yes, but a question about what is going on in here, in the mind, and not a question about any notion of "meaning" beyond that. I have characterized the concept of water as a cluster, a collage, but I have said that it involves equivalence relations or tests we apply to situations, and that it is delimited by a story that we infer from experience. Obviously this all needs a lot of clarification. Do I even have one single thing in my mind that I can call my concept of water? Does it, strictly speaking, have a fixed identity that persists over time? If so, how much of it can you change before you must call it a different concept altogether? Do concepts subsume other concepts? What part do qualia, the what-its-likeness of water's wetness, its (lack of) taste, etc., play in all of this? How much relative weight does Kripke's project of language use (that of dividing things into categories of natural kinds) have? These are the truly interesting questions about the limits of the meaning of the term "water", but these are all straightforwardly questions about minds. There is a lot of stuff going on in our heads and it will take considerable work to sort it all out.

One thing we can speak of with confidence, however, is the relationship between all this mysterious stuff happening in our heads and the outside world. We do not directly perceive matter. There is a long, twisty causal chain that links certain events that happen in the physical world with percepts and concepts in the mind. Or perhaps more suggestively, our concepts and percepts are constrained or influenced by these events. Until we understand the concepts in our heads better, the details of the influence of the external events upon them will remain murky, but the input channel itself is pure good old fashioned physical causation.

Of central importance to any discussion of language and meaning is the notion of intentionality. Intentionality is the property of being about something else; it is sometimes informally defined as "aboutness". Beliefs, desires, and propositions all have intentionality, rocks and teacups do not. Intentionality is real, it exists as a feature of the universe. There are some things that really are, inherently, about other things. All such things, however, are exclusively in minds. In a purely objective, extrinsic, materialistic world, everything that happens does so strictly according to the laws of physical causation, like so many beer cans perched on fence posts hit with rocks. No matter how many beer cans you have, and no matter how they may be connected (with dental floss, perhaps) there is no inherent sense in which some set of them "come together" to be about another set of them. They just do what they do because they must, each of them blind to all of the others, with no subset of them "representing" other subsets (or anything else for that matter) except insofar as we choose to see them that way with our conscious minds. Sometimes it is convenient for us to speak and think as if things out there were really about other things (road signs about gas stations, for example), but this is a may-be-seen-as kind of thing, a way of talking about what is, at heart, lots of complex physical interaction. Left on their own, the mechanics of the physical road sign, and its interactions with photons of light, up to the point at which those photons interact with your nervous system, are well understood without recourse to any notions of "reference".

So we have 1) molecules of stuff somewhere out there in the world in our rivers and streams. These molecules cause physical events to occur, which cause still other events, etc. until some event(s) in this chain ultimately impinge in some way upon 2) some mysterious things happening in our heads; and finally we have 3) our observable linguistic behavior, which presumably is caused or influenced by 2). We have a long way to go before we understand 2) and the exact relationship between it and 1) and 3), but once we do understand these things, there will be nothing left to explain about language and meaning.

It is sometimes said that meaning is merely mediated by causal connections between the outside world and our minds. I, however, would say that meaning just is those causal connections, plus some mysterious stuff happening entirely within the mind. Any talk of meaning beyond this has no explanatory or predictive power at all. There simply are no facts about the universe, either extrinsic, third-person "scientific" facts, or subjective phenomenal what-its-like-to-see-red-type facts, that are explained by assuming invisible magic meaning rays connecting our thoughts to trees, cars, and the Milky Way galaxy. The causal chain between physical events that happen in the world and the concepts we form in our minds may get very complex, but it is still just billiard balls knocking together. There is no other kind of connection between the stuff out there and our concepts in here. The problem with the term "extension" is that it strongly inclines us to believe that there is. It presumes a sort of spooky mystical connection between the collection of molecules of H2O in the universe and our internal concept of water. There is no such connection.

When someone points, they are telling you to do something - look over there. A reference is a pointer, and as such, it is prescriptive, not descriptive. It commands. Even this, though, gives it too much credit. It doesn't actually do anything - it just sits there. It is a lot like an algorithm in this sense, and in fact is a degenerate case of an algorithm. As such, unto itself, it is neither true nor false, it neither represents nor misrepresents, it just does its physical interaction as do all physical things. If intentionality is to be a really-there thing at all, it is a spooky, mysterious, in-the-mind-only kind of thing, like the redness of red. Like redness, it really exists, but in order to account for it properly we will have to overcome our unease at its spooky mysteriousness.


Reference: Turning Out

Two Dimensional Semantics

Two Dimensional Semantics is getting some attention these days. Chalmers has been writing about it, as have other people. The motivation for 2D semantics is the opinion that intension alone, characterized in the possible-worlds sense, does not quite capture meaning. Specifically, there are terms whose intension is the same (i.e. the terms pick out the same extension in all possible worlds), but that seem as though they have different meanings anyway. I'll hand the mic over to Chalmers here:

According to Kripke, there are many statements that are knowable only empirically, but which are true in all possible worlds. For example, it is an empirical discovery that Hesperus is Phosphorus, but there is no possible world in which Hesperus is not Phosphorus (or vice versa), as both Hesperus and Phosphorus are identical to the planet Venus in all possible worlds. If so, then "Hesperus" and "Phosphorus" have the same intension (one that picks out the planet Venus in all possible worlds), even though the two terms are cognitively distinct. The same goes for pairs of terms such as "water" and "H2O": it is an empirical discovery that water is H2O, but according to Kripke, both "water" and "H2O" have the same intension (picking out H2O in all possible worlds).

So Kripke's claim (as paraphrased by Chalmers) is that because we now know that they are both just Venus, Hesperus and Phosphorus both must pick out Venus in all possible worlds, and so have the same intension (same extension in all possible worlds = same intension). Yet most people would agree that "Hesperus" does not quite mean exactly the same thing as "Phosphorus". To accommodate this in our theory of semantics, the following reasoning is invoked. Because of the way our actual world turned out, Hesperus is Phosphorus is Venus, and this must hold true across all possible hypothetical worlds. But if we imagine for a moment that our actual world had turned out differently, and in our actual world Hesperus was a different object than Phosphorus, and then we let our imagination range across all possible worlds, we might come up with a different intension for each world so considered.

So essentially we set up a grid: first, along one axis (say, the vertical axis), we lay out all possible worlds, and imagine that for each of them, that is the way our actual, real world might have turned out. Then for each of those (i.e. for each horizontal row on the grid), we do the old-school possible worlds exercise, considering each possible world as hypothetical (along the second axis, the horizontal one) given that the possible world on the first axis is being considered as actual.
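The grid just described can be sketched mechanically. The two worlds and their local chemistries below are invented for illustration; the structure is the point: rows fix what "water" turned out to be (the world considered as actual), columns evaluate the term hypothetically (the world considered as counterfactual).

```python
# Two toy possible worlds, and what fills the "water role" in each.
worlds = ["w_h2o", "w_xyz"]
local_stuff = {"w_h2o": "H2O", "w_xyz": "XYZ"}

grid = {}
for actual in worlds:                 # vertical axis: considered as actual
    fixed = local_stuff[actual]       # "water" rigidly bound to this stuff
    for counterfactual in worlds:     # horizontal axis: considered as hypothetical
        stuff = local_stuff[counterfactual]
        # The cell holds the extension of "water" there: the local
        # stuff, but only if it matches what the actual-row fixed.
        grid[(actual, counterfactual)] = stuff if stuff == fixed else None

print(grid[("w_h2o", "w_xyz")])  # None: XYZ is not water if the H2O-world is actual
print(grid[("w_xyz", "w_xyz")])  # 'XYZ': XYZ is water if the XYZ-world is actual
```

Even with only two worlds the bookkeeping doubles; with the full infinity of possible worlds on both axes, the "clunkiness squared" complaint below becomes vivid.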

2D semantics is motivated by the Platonic impulse: the certainty that what something "turned out" to be in our actual world somehow fixes its meaning absolutely for all time and in all contexts. Thus, in order even to toy with the idea that things might have "turned out" differently in our world, we have to add a whole new dimension to our already infinite array of possible worlds. So instead of simply(!) considering infinite possible worlds, you consider infinite possible worlds for each possible world, with the possible world on the vertical axis imagined as the way the actual world "turned out". If possible worlds scenarios are clunky, then 2D semantics is clunkiness squared.

Does anybody imagine that when a little kid learns a new term, say "Mommy", that kid constructs a two-dimensional array in her head and fills in all the spaces in that array with the appropriate intensions and extensions of "Mommy" in all possible worlds as demanded by two-dimensional semantics? Of course not - no one thinks this. So if two-dimensional semantics is not a theory of what actual language users do when they acquire and use terms in the real world, what is it a theory of, exactly? If two-dimensional semantics is the answer, what was the question? The same question could be asked of many theories of semantics.

Turning Out

The whole point of needing a second axis (i.e. the second dimension) in 2D semantics is that in our world, renates all turned out to be cordates. Hesperus and Phosphorus both turned out to be Venus, and water turned out to be H2O. We may imagine possible worlds in which things could have "turned out" differently. This phrasing is misleading in that it draws a sharp distinction between a "superficial" acquaintance with the concept of water on one hand, and what water "turned out to be" on the other. Water has not turned out to be anything. We could still find out all kinds of things about water that would surprise us. I could be in the Matrix with a cable jacked into the back of my neck, in a "real" world in which physics is completely different, and in which there is nothing remotely resembling water. Perhaps in prescientific times, people's conception of water underwent revisions along the way, before people figured out about atoms and molecules.

While sometimes we discover big important things about stuff we thought we already understood pretty well, the process of turning out is unfolding all the time, and is never finished. We never resolve symbols "all the way down". A possible exception to this might be things that are defined as part of a self-contained system in which everything is circularly defined explicitly in terms of other things within the system, as in mathematics. But even then, we may still discover new truths and untruths within the system that reflect back on our original basic terms. In real life, concepts do not float free, then one day "turn out". They are always turning out; they never stop turning out.

We have a set of empirically-derived properties of water on one hand (odorlessness, transparency, etc.) and another set of empirically-derived properties on the other (inferred microphysical constitution), and these two sets of properties have always seemed to coextend in our world. When we let them float free of each other in our imagination, we have to decide for the first time which set gets to keep the tag "water", like a judge deciding which of a divorcing couple gets to keep the house. Because there are two sets of properties, we need two axes in our grid, hence two-dimensional semantics. There could be any number of sets of empirically-derived properties of water, however, so the number two is arbitrary. We actually would need as many axes in our infinite grid of possible worlds as we can come up with logically independent sets of empirically-derived properties.

Imagine a stone-age people who had a word, "poog", that meant, to them, "tool or weapon". As time went on, and the civilization advanced, the same term, "poog", might come to mean more specifically "pointed stick used as a weapon". Later still, it might mean "spear made of ash". Would it be right, then, to characterize the situation by saying that "poog" turned out to mean a spear made of ash, and that it really had meant a spear made of ash all along? That the stone-agers who called a rock a poog turned out to be wrong? Would anything interesting be revealed about what meaning is or how it works by hypothesizing a Twin Earth in which the inhabitants used the word "poog" to refer to spears made of birch?

A pre-1750 person, say Isaac Newton, had a significantly different model of reality in his head, but he had experiences and memories similar to mine, and he fit his experiences and memories into his model. In both our cases, "water" is defined, at least in part, relationally - in terms of where it fits in the reality-model relative to lots of the model's other elements. But my concept of water has certain associations within my reality-model that Newton's did not have, associations that further constrain the concept. There are fewer possible universes that contain stuff I would agree was water than there are for Newton (assuming that I buy into the idea that water is and must be only H2O).

I prefer my model of reality to Newton's. I like the neatness, the power, the integrity, etc. of my scientific picture of the world. But in terms of what is going on when we refer, water has not "turned out to be" anything. Newton and I have different reality models, with different constraints upon how we categorize the stuff we find in the universe. Based on our different models, our concepts of water have different satisfaction criteria.

This is not oops-my-brains-just-fell-out relativism. I like science. I believe in science. Atoms are real. Newton was ignorant. But it is a strange form of scientific hubris to build Newton's ignorance of our science into a theory of reference, or to reify the distinction between "prescientific" notions of water, Hesperus, or anything else on the one hand and the way things "turned out to be", or the way they "really are", on the other, and to imagine that this alleged distinction tells us anything interesting about meaning. Just because a cathedral is made of stones, it does not follow that my concept of a cathedral is made of my concept of stones, and just because water is made of H2O, it does not follow that my concept of water is made of my concept of H2O.

Symbol Resolution

The mathematical notion of symbol evaluation is partially to blame for the bias philosophers have for this idea of "turning out". In algebra, you can have a variable, x, that everyone can see is a variable. It can be manipulated as a variable, but at some point, you may resolve it, by substituting a number, like 43, for it. There is an unambiguous, explicit delineation between the variable before it was resolved, and the value it has afterwards. There is a universally understood sense in which x is unresolved, and exactly what aspects of it obey certain mathematical rules anyway, and what aspects of it are left unspecified.

As we generate and parse natural language, things are almost never that neat. Symbol evaluation in natural language is not an either/or kind of thing, as it can be in mathematics. For most of the terms we use in daily life, there are various degrees of specificity of resolution, and we resolve terms or inhibit their resolution to the appropriate degree, and in the appropriate order according to all kinds of rules of context as we string terms together in our thoughts or utterances. Modern semantic theory posits a very sharp distinction between a term's intension and its extension. The trouble is, rigidity of designation, to use the philosophical term, is a sliding scale. Parsing and generating language is less like symbol resolution as traditionally conceived than it is like tuning a complicated musical instrument.

Early vs. Late Binding

In certain contexts in computer science, the term "binding" is used to describe symbol resolution: a variable expression is "bound" to a particular value, and thus ceases to be a variable. Furthermore, there is an idea of "early binding" and "late binding" of variable expressions. The idea is that you can have a variable, and you can resolve it right away (early binding), then feed it into other calculations, or you can let it exist as a variable in those calculations, then resolve it to a specific value at the end (late binding). Sometimes you can get very different results depending on when you do your variable bindings.

Some of the sense of this can be illustrated with the slightly awkward sentence, "By the year 2050, the president of the USA will be a woman." The likely intent here corresponds to late binding of the term "the president of the USA". We let that term float in the abstract as we evaluate the sentence, knowing that it will not be resolved until 2050. Or we could bind it early: as I write this, the president of the USA is Joe Biden, so the term "the president of the USA" resolves immediately to "Joe Biden", and the sentence then states that by the year 2050, Joe Biden will be a woman, a considerably less likely claim. Different terms seem to call for earlier or later binding, more or less specific resolution depending on context (which, of course, is made of other terms, which need to be resolved as well).
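The president example maps directly onto how early and late binding behave in code. This is a sketch, not anyone's official semantics: the office and the names are just the example from the text, and the dictionary stands in for the state of the world.

```python
# Who holds the office as the sentence is uttered.
office = {"president of the USA": "Joe Biden"}

# Early binding: resolve "the president of the USA" to a value now.
# The claim becomes a claim about Joe Biden specifically.
early = office["president of the USA"]

# Late binding: keep the term as an unresolved lookup, evaluated
# only when the claim is finally assessed (in 2050, say).
late = lambda: office["president of the USA"]

# The world moves on before the claim is assessed.
office["president of the USA"] = "someone elected later"

print(early)   # 'Joe Biden': bound at utterance time
print(late())  # 'someone elected later': bound at evaluation time
```

The early-bound reading yields the unlikely claim that Joe Biden will be a woman by 2050; the late-bound reading leaves the term floating until the world supplies a value, which is the intended reading of the sentence.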

A great deal of the jargon associated with philosophy of semantics can be recast in terms of early vs. late binding. To me, this is often clearer. When Kripke speaks of fixing a reference as opposed to identifying a term with a cluster of properties, he is talking about early binding as opposed to late binding. When the Greeks coined the term "Hesperus", they bound it early (if unknowingly) to the actual thing, Venus (at least, that's what Kripke thinks). Kripke attributes to Frege and Russell the counterclaim that it is OK to bind terms late, and that the Greeks let the properties float free of any binding, so there could be a possible world in which Hesperus is something other than Venus. If the "superficial properties" are the x, and Venus is the 43, Kripke says that as soon as the Greeks said x, they immediately meant 43 even if they didn't know it. Frege and Russell, on the other hand, say that it is fine to let x stand in its own right, and we could perfectly meaningfully find out later that x is 43, or 23, or 101.

Gareth Evans' example about Julius also boils down to early vs. late binding. The idea here is that we allow the term "Julius" to refer to whoever invented the zipper (if anyone did) in whichever particular possible world we are considering. Semantic hijinks ensue from considering how, and to what extent, "Julius" refers to an actual person in any given world. Here we see that by hypothesis, "Julius" floats free of any binding (i.e. it is late-bound). "Julius" is defined by a descriptive criterion only, and is not bound to a particular individual until we touch down in a particular world, at which point the variable gets bound to the actual person who invented the zipper in that world. Once again, though, the example is somewhat contrived. It is set up to mimic mathematics rather than real life. "Julius" is a bistate term: either unbound or bound. In its unbound state, it is strangely specific about how to bind it, and there is a clear, unambiguous distinction between its bound and unbound state. It seems designed to be as close to an algebraic x as English prose can get.

Another example is one that William Lycan cites in his introductory book "Philosophy of Language" (2008): "I wish that her husband weren't her husband." The first instance of the term "her husband" is early bound, and picks out an actual guy, but the second instance is late bound (or rather, not bound at all within the sentence, but still waiting to be bound by the time the sentence ends). In its late bound state, the term is allowed to persist as an abstract specification, as binding criteria for some future binding to an actual person.

This distinction between early and late binding is really what motivated 2D semantics. In ordinary 1D semantics, with only a single infinite array of possible worlds to consider, you bind your terms early, according to what they mean in our actual world. This early binding corresponds to what is sometimes called a term's secondary intension. So water's secondary intension is H2O, for example. Then, once that meaning is fixed, you let your imagination range over all possible worlds, picking out the extension on those worlds (i.e. the H2O on each world).

This, at least, is how Kripke characterized it in his objection to 1D semantics that Chalmers paraphrased above. But in 2D semantics, you allow for some late binding as you consider possible worlds. In the first part of the 2D semantics exercise, when you are considering each possible world as actual, you let some more abstract version of the term float over all possible worlds, and do your binding in each imaginary possible world. Then, with the meaning thus fixed, you let your imagination range over all possible worlds. This is sometimes called the primary intension of a term. While water's secondary intension is H2O in all possible worlds in the 1D semantics case (we bound it early, in our actual world), water's primary intension is H2O in our world, but XYZ in Putnam's Twin Earth (we bind the abstract specification - the watery stuff - to the actual extension late: after we've switched our attention to the hypothetical XYZ world, i.e. considered it as "actual").
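To make the two readings concrete, here is a toy sketch in Python (my own illustration, not any formal machinery from the 2D semantics literature). The "watery stuff" description is the late-bound variable; the secondary intension binds it once, in the actual world, while the primary intension re-binds it in each world considered as actual.

```python
# Each world records what the clear, drinkable "watery stuff" is made of there.
worlds = {
    "Earth": "H2O",
    "Twin Earth": "XYZ",
}

def watery_stuff(world):
    """Late-bound description: whatever fills the water role in `world`."""
    return worlds[world]

# Secondary intension: bind "water" early, in the actual world (Earth),
# then hold that referent fixed across all counterfactual worlds.
actual = watery_stuff("Earth")
secondary = {w: actual for w in worlds}

# Primary intension: bind late, in each world considered as actual.
primary = {w: watery_stuff(w) for w in worlds}

print(secondary)  # water is H2O in every world
print(primary)    # water is H2O on Earth, XYZ on Twin Earth
```

The only difference between the two dictionary comprehensions is when `watery_stuff` gets called: once, up front (early binding), or per world (late binding).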

It is assumed that there is no ambiguity in deciding what aspects of a given term should be allowed to float free across possible worlds to be bound by the contingencies of each one, and what aspects are constant across all worlds, both considered as actual and considered as counterfactual. That is, which aspects of water are to be considered part of the abstract characterization (e.g. its odorlessness), and which aspects are the actual essence that the "superficial" properties "turn out" to be (e.g. water's microphysical constitution). It is also assumed that a term has exactly two completely discrete states: the abstract characterization (unbound) and the actual extension (bound). That is, you have the variable, the x (the watery stuff in the environment) and the value it resolves to (H2O or XYZ). There is some serious fetishization of mathematics going on here among philosophers that causes them to shoehorn reference into the binary symbol resolution model (mathism?). The collection of "superficial" properties of water (clear, odorless, liquid, etc.) is the x, and it was an unresolved variable for eons, as we humans ignorantly used the term "water" not knowing what it really was. Then our scientists figured it out, and now we know that water "turned out" to be H2O! We found the answer - x is 43!

But early and late are relative terms. Moreover, the whole notion of binding, no matter how early or late, is really the same thing as symbol resolution, and subject to the same problems. How narrowly do we construe or intend terms? How figuratively are we speaking or interpreting a term at a given moment? What aspects of a concept do we consider fair game to abstract away and what aspects do we hold constant as we do our figurative construing? In the Twin Earth thought experiment, it was taken as a given that water's "superficial properties" were to be held constant, and its microphysical constitution could be abstracted away as we considered different scenarios. But in real life, the narrowness or broadness of construal of a term, and the aspects of a concept we choose to hold constant and the aspects we feel free to abstract away, and exactly when we bind our terms to specific extensions ("resolve" a more abstract characterization of a term to a more specific extension) can vary wildly, often along a continuum, and are highly context-dependent, even within a single sentence.

Haters Gonna Hate: Some Tautologies

To illustrate this point, I'd like to close out with a few tautologies. A tautology is an expression of the form x = x. Since x is always equal to x, regardless of what x actually is, tautologies (in theory) convey no information about x or anything else. A fancy way philosophers have of saying this is that tautologies have no "semantic content", and thus (in theory) have no meaning. But as with so many aspects of language, theory and reality do not always line up. Let me indulge here in a bit of fiction.

Jimmy and Frankie grew up together in the same working class neighborhood. In their pre-teens they stole hubcaps together, then later whole cars. Soon enough they hooked up with the mob and worked together. Some years go by, and their bosses become aware that Jimmy is skimming a little off the top each month. As a test of loyalty, they send Frankie after him. Frankie has no trouble cornering his old friend, and in the ensuing confrontation, Jimmy pleads, "Frankie, it's me, Jimmy. I've always been there for you, Frankie, more times than I can count. This can't be the end, Frankie. Not like this. I know I screwed up, I screwed up bad. And you know I'll make it up, Frankie, you know I will. Come on, Frankie, please!" Frankie says nothing for a moment, just looks at Jimmy with his expressionless unblinking eyes. Then he quietly says, "Business is business, Jimmy."

Or how about this conversation:

"Every time I think about the holocaust, it shocks me all over again. You'd think that after hearing and reading about it all these years, I'd be jaded, or numbed, but no. I still can't get my head around the enormity of it, the reality of it."

"Hey, what happened, happened."

"What do you mean? It wasn't just something that happened. Real people did it! A government staffed by human beings coolly presided over the deaths of millions!"

"People are people."

"How can you say that? Killing six million Jews is not normal human behavior!"

"Well, you know, Jews are Jews after all."

"You jerk! What kind of a Nazi are you, anyway?!"

Then there is always the trendy "It is what it is." Along the same lines, there is the saying that by the time you are thirty, you must accept that no one is your mother, not even your mother; or the oft-ignored advice to a king, "If you want things to stay the same, sire, things are going to have to change"; or Alfred North Whitehead's famous slogan of process philosophy, "Things aren't things" (Whitehead never said this. I just made it up. Sorry.)

For poor Jimmy, the supposedly information-free tautology is literally a matter of life and death. The point here is that these are not particularly special cases. People talk like this all the time. They convey lots of information in ways that a logician would say are impossible. The uses of the terms in these tautologies are perfectly valid, and must be accounted for by any theory of meaning. In these tautologies, the same term is interpreted narrowly or broadly, bound earlier or later, considered abstractly or specifically in different ways and to different degrees depending on its use in different places within the same sentence. The meanings of the terms in question are determined on a case by case basis, on sliding scales. Dictionaries seduce us into thinking that there is a discrete number of meanings any term can take on. To be sure, there are some stakes in the ground, but between these stakes there is often a continuum of meaning, and people slide up and down that continuum so effortlessly that they almost do not notice it.

In modern usage, the word "quick" means fast. When Shakespeare referred to the quick and the dead, he meant "alive". It may well be that in Elizabethan times, that was a common sense of the word "quick", one that has fallen out of favor. But to our ears, it is a poetic turn of phrase, a case of Shakespeare speaking figuratively. This figurative sense of the word "quick" plays off of its more restricted sense, and makes sense to us. It is just a broadening of the term. How broadly or narrowly we use terms is in constant flux, and highly context dependent. There is no distinct line we cross when we use a term to mean one thing, but take liberties with its breadth, and when we use a different sense of the term.

The other day on the highway I saw a flatbed truck carrying an enormous underground water tank. Obviously the tank was not underground. You probably never thought of the term "underground" as referring to a type before; you probably always thought that it must mean literally, under the ground. We very often, perhaps almost all the time, do not speak literally. Am I speaking figuratively, metaphorically, then, when I mention an underground water tank when it wasn't underground at all? Well kind of, I guess, but no, not exactly.

Most people are perfectly comfortable using a term figuratively in one breath, and literally in the next, to varying degrees depending on all kinds of variables. Ambiguity lurks everywhere. Determined and ingenious people can tie themselves into knots, finding ambiguity just about anywhere they look hard enough. No one seems to have a problem with this except philosophers, a fact that does not speak well of philosophers.


Future Directions

I would like to explore the distinction or lack thereof between qualitative subjective consciousness and cognition. A great deal of thought in the 20th century was devoted to questions such as: What is knowledge? What is meaning? What is a symbol? What does it mean to refer? and a whole bunch of other language and cognition-related questions. I think we should ask these questions again, but this time in a way that takes the Hard Problem seriously, and does not shy away from the possibility that the answers may depend on, or be given only in terms of, qualitative consciousness. We must abandon the archaic Platonism that lives on in epistemology.

I believe that if we think hard enough, we will find that there is no way to speak precisely about such notions as represent, know, believe, true, false, meaning, information, etc. outside of the context of minds. These concepts just will not make any sense until we have solved the problem of consciousness. Specifically, we cannot speak of a computer or any of its parts as representing anything, unless we are speaking loosely and metaphorically, anthropomorphizing the computer.

This is where epistemology and ontology meet head-on. What is the actual stuff out there in the universe that constitutes what we know and how we know it? The facts we know, the beliefs we hold, and the cognition we instantiate when we think, are qualitative, and as such we have no idea what they are made of and how that stuff works. We have to figure out how the qualitative aspect plays into the information processing aspect of our cognition, and we have to figure out how we can get qualia to stop being amorphous blobs of seeing red and feeling pain, and start to stack like Lego blocks.

And we will have to entertain some wacky metaphysics. Some form of panpsychism must be true. I suspect that the nature of time is involved somehow. Hard science is a bit agnostic about what time is and how it works, and I believe we have first-person evidence that time and phenomenal consciousness play together pretty closely. As Horgan and Tienson (2002) put it, experience is not of instants; experience is temporally thick.

I'd also like to think a bit about will and perception and the relation between the two - the phenomenon of attention will be key here. And let's not forget memory, a much more central mystery than it is generally given credit for. In short, ultimately I'd like to explore the relations (perhaps in some cases being an identity relation) between the following:


What Bullet Have We Bitten, Exactly?

We'll get to that, but first I want to go through a brief summary, laying out some of the main points I've made, paying special attention to the bullets with toothmarks. I'm being blunt here, and I'm not trying to sound reasonable. There is no supporting argument, no sugar coating. It's going to sound a bit kooky. Hell, even I think it sounds kooky.

Basic Metaphysics

Taking Qualia Seriously

I am a qualophile in the mold of David Chalmers: I take qualia seriously in exactly the way Daniel Dennett says we should not. I am not, however, a dualist. I think that there is only one kind of stuff in the universe, but physics, as currently practiced, is incapable - even in principle - of describing that stuff completely. There are reasons for thinking this that have nothing to do with consciousness or qualia. All of this makes me a panpsychist (or, if you prefer, a neutral monist), something like Bertrand Russell. There is something qualitative that stands as part of the fundamental furniture of the universe, along with mass, charge, and spin. This qualitative essence is what instantiates or manifests the extrinsic, functional behaviors that our laws of physics describe so well.

I do not think that positing a causally efficacious conscious basis of physical reality means we have to violate known physical laws. Quantum mechanics already tells us that at the lowest levels, we can't know how things behave. We can only characterize their behavior in aggregate over time. Equivalently, for a single experiment we can only give a probability. We already live in a non-deterministic universe. This, I think, gives us wiggle room to allow the basic stuff of the universe to do what it wants, within constraints. Stuart Hameroff has speculated that some kind of quantum superposition is maintained inside the tubulin microtubules in the neurons in our brains. This may or may not be true, but I am committed to the speculation that at some point, brain scientists will find some crucial mechanism that depends on some kind of "indeterminate" quantum effect ("wonder tissue", in Dennett's derisive terminology, or "pixie dust in the synapses" according to Patricia Churchland.)

This, then, is the main bullet I want to bite, the one that even most of my fellow panpsychists are afraid to openly gnaw on. I want to make it clear that if you believe in causally efficacious qualia (as I do), then you must bite the bullet of violating the apparent causal closure of the physical universe, or claiming that your qualia can push physical stuff around without violating known laws. That said, we might not have to bite a big bullet. The brain could be a pretty chaotic system, in which a tiny nudge at the right place and time could have large scale effects which play out according to classical rules.

Moreover, placing consciousness down at the quark level does not help explain human scale consciousness if the only way to scale up is with extrinsic causal dynamics. The billiard balls may be conscious, but if the only way they interact or scale is the same bonking they would do if they weren't conscious, the bonking alone can't explain the redness of red. No, consciousness is at the bottom layers of reality, and it scales up as such, existing and doing macroscopic stuff that the causal bonking would not do on its own. This does not mean that the Standard Model or Core Theory of physics is wrong, just that it leaves some details out. Brains are special systems that have evolved to exploit these details.

Holism

In the same way that I think that we should take qualia seriously, and believe that the fact of qualia has metaphysical implications, I believe that the unity of a percept is deeply strange to our usual way of thinking about how the universe is put together. My consciousness, as I experience it, must be a Fundamental Thing, and not just made of smaller Fundamental Things. Once again, quantum mechanics probably comes into play, since it allows for what William Seager calls large simples: things with potentially complicated behavior that are inherent wholes, and not merely aggregates of smaller things. I'm talking about things like entangled states, mixtures, and Bose-Einstein condensates here.

Each of our unitary, qualitative thoughts and percepts must be manifested physically as something objectively unitary itself, and that thing has causal latitude. Specifically, large simples' behavior does not supervene on that of their parts, since they don't have any. They may not get to violate existing physical laws, but they may have more elbow room to act than something that was a mere aggregate of smaller things. This solves panpsychism's oft-cited combination problem by fiat, as it were. Each large simple is ontologically unique, which leaves us with a pretty extravagant picture of the universe, but c'est la vie. We prefer parsimony in our laws of nature, but nature does not owe us anything in this regard. The promise of this kind of large-scale unity is, perhaps, more important to me than the more often cited indeterminacy of quantum mechanics.

Time

Like qualia and the unity of our percepts, our direct perception of time also tells us something important. The sense of temporal duration is perhaps more compelling when considering auditory percepts than visual ones. Consider the experience of even a short piece of music, for example. Frankly, I'm not sure how this plays into the larger picture, but there is some funky way in which consciousness is smeared out over time to various extents for various subjects, like William James' saddleback "specious present".

This does not mean that there is any such thing as backward causation (what could that term even mean, really?), or that somehow consciousness can see the future or reach into the past. Nevertheless, there is some way in which consciousness (or moments of consciousness) can span time, and I wonder what the limits of this span are. The notion that there is a durationless point called "the present" is an abstraction foisted upon us by calculus, among other disciplines. In real life, time does not come in points or infinitesimal slices.

Structure and Relation, Phenomenologized

Once the reductionist has broken the world down, he has a hard time putting it back together again, as the saying goes. In a universe made of almost unimaginably blind, stupid, amnesiac tiny billiard balls bonking this way and that, in which there are no efficacious levels but the very bottom-most one, things like "structure" and "relation" are only ideas in our heads. Similarly with notions like "algorithm", or "if…then…". Unlike a Universal Turing Machine, we can step outside an algorithm, and see it from above, as it were. We see algorithms, processes, and sequences all-at-once, as a thing. We don't have to execute the code to think about it, and comprehend it. This intrigues me. I think it ties in with the metaphysics of time, and the way we can have a unitary percept that spans time. Minds are strangely good at turning processes into things.

Sensory qualia (the redness of red, the taste of salt) are just the tip of the iceberg. Everything we are aware of in our minds is qualitative, and just as mysterious as the redness of red. All of the "cognition", even the driest, most factual knowledge, is made of the same stuff in our minds as the redness of red. Our minds are not cognitive machines painted with a qualitative layer, nor are they cognitive machines bolted onto a qualitative base. They are qualitative through and through. We should not be led astray by the fact that we have invented machines that are "purely cognitive" that seem to emulate some of the functions of minds. It is a mystery that salt seems salty to us, but it is no less of a mystery that anything at all seems like anything to us. Reductive materialists fail to appreciate just how little comprehension you can build with those billiard balls, even when you have a lot of them.

While what-it-is-like-to-see-red is fine as the gateway-drug quale, it is misleading to use it as the paradigmatic quale going forward. Qualia are ineffable, but it is a mistake to think they are unstructured. They can be quite complex and structured. What is it like to see a square? To prove a theorem? My seeing a power plant by a river on an overcast day is a quale. 2 + 2 = 4 is a quale. All thought is qualitative. Our cognition and our phenomenal consciousness are made of the same stuff, two sides of the same coin. Structure and relation, in our minds, are themselves qualia as much as the redness of red. The easy problems are hard too.

My Favorite Model: Pandemonium

I keep coming back to some variation of this basic idea. Our minds are hives, or Darwinian memescapes, populated by what Daniel Dennett calls demons. As William James said, the thoughts are the thinkers. There is no sharp line you can draw between CPU and memory. We don't apply thoughts, they apply themselves. These demons are not just memories, although they can be that too. They do things, they are active, and whatever they do, whether they compete or cooperate, a lot of them are active at the same time. As Dennett says, what we take as our linear, computer-like mind is really something of a simulation, implemented on a massively parallel substrate.

Individual demons are punished for overactivation, most likely by simply getting tuned out by the other demons. There is a risk/reward trade-off as they decide if, when, how assertively, and how specifically they self-deploy. Unlike the pandemonium model as Dennett describes it, however, I suspect that the demons are qualitative. There is a what-it-is-like for all of them, but whatever it is that we think of as ourselves is not necessarily patched into each of them. We each contain multitudes. The unified, continuous self, as we normally think of ourselves as being, is a useful fiction, a sort of virtual avatar, a me‑model at the center of my world‑model. Each demon may be considered a subject in terms of its being smeared out over a specious present, a moment of time.

As the demons do their work, they engage in lots of feedback loops, a lot of iteration, on the way to forming anything we might describe as a stable thought or percept. As percepts are built up, at the same time they are being broken down then built up again with the pieces. Thoughts form in our heads, with this riot of demons trying this, then that, before settling on some kind of stable percept or concept. I suspect that quantum superposition is involved somehow, allowing for exploring a combinatorially explosive web of potential paths.

This memescape/ecosystem Darwinian analogy has limits and leaves a bunch of questions unanswered. I don't know how demons cooperate or coalesce. Do they form some kind of union, then stick together from then on, or do they separate, but maintain some tendril of connection? Or do they reproduce, giving rise to a whole new demon, who then may maintain connections with its parents? Do demons really persist over time, or do they constantly regenerate themselves? Do they at least partially define themselves as deltas from other demons, or coalitions of demons? In general, we need to nail down the individuation criteria for demons. I also need to explain more about the qualitative nature of the demons, how the qualia (not just the redness of red, but the perception of process, the perception of parts and wholes simultaneously, and all the rest of it) play into the more purely cognitive pandemonium model. Somehow the demons, and what they do, are their phenomenology.

Special Points of Emphasis

Panpsychism is far from a majority position (not to mention the speculations about quantum mechanics). Even putting all that aside, however, there are some things that either don't generally get the emphasis they deserve, or that most people don't think about at all, or think about differently than I do. I think they all have a part to play in the final picture.


References

Block, N. (1995). 'The Mind as the Software of the Brain', in D. Osherson, L. Gleitman, S. Kosslyn, E. Smith and S. Sternberg (eds.) An Invitation to Cognitive Science, MIT Press

Chalmers, D. J. (1996). The Conscious Mind, Oxford University Press

Chalmers, D. J. 'Two-Dimensional Semantics'

Cisek, P. (2009). 'Reclaiming Cognition' Journal of Consciousness Studies, Nov/Dec 2009

Dainton, B. (2000). Stream of Consciousness, Routledge

Dennett, D. (1991). Consciousness Explained, Little, Brown

Horgan, T., and Tienson, J. (2002). 'The Intentionality of Phenomenology and the Phenomenology of Intentionality', in D. Chalmers, ed., Philosophy of Mind, Classical and Contemporary Readings, Oxford University Press

Jackson, F. (1986). 'What Mary Didn't Know', Journal of Philosophy, 83: 291-95

James, W. (1952). The Principles of Psychology, Encyclopaedia Britannica Inc.

Kelly, S. (2003). "Time And Experience" (lecture, MIT, 9 May 2003)

Kripke, S. (1972). Naming and Necessity, Harvard University Press

Lewis, C. S. (1955). Surprised By Joy, Harcourt

Lycan, W. (2008). Philosophy of Language, Routledge

Minsky, M. (1985). The Society of Mind, Simon and Schuster

Nagel, T. (1974). 'What Is It Like To Be A Bat?', Philosophical Review, 83:435-50

Nørretranders, T. (1998). The User Illusion: Cutting Consciousness Down To Size, Viking

O'Hara, K., and Scutt, T. (1996). 'There Is No Hard Problem of Consciousness', Journal of Consciousness Studies, 3 (4), pp. 290-302.

Penrose, R. (1989). The Emperor's New Mind, Oxford University Press

Price, H. (1997). Time's Arrow & Archimedes' Point: New Directions for the Physics of Time, Oxford University Press

Putnam, H. (1975). 'The Meaning of "Meaning"' in K. Gunderson, ed., Language, Mind, and Knowledge, University of Minnesota Press

Ramachandran, V. S., and Rogers-Ramachandran, D. (2009). 'Two Eyes, Two Views', Scientific American Mind, Sept-Oct 2009, pp. 22-24.

Rosenberg, G. (1998). On the Intrinsic Nature of the Physical, (presented at Tucson III: Toward a Science of Consciousness, Tucson Arizona, 29 April, 1998)

Rosenberg, G. (2004). A Place For Consciousness: Probing the Deep Structure of the Natural World, Oxford University Press

Russell, B. (1954). The Analysis of Matter, Dover Publications

Seager, W. (1995). 'Consciousness, Information and Panpsychism', Journal of Consciousness Studies, 2 (3), pp. 272-288.

Siewert, C. (2011). 'Phenomenal Thought' in T. Bayne & M. Montague, eds., Cognitive Phenomenology, Oxford

Shannon, C. (1948). 'A Mathematical Theory of Communication', The Bell System Technical Journal, 27, pp. 379-423, 623-656.

Silberstein, M. (2001). 'Converging On Emergence', Journal of Consciousness Studies, 8 (9-10), pp. 61-98.

Strawson, G. (1997). 'The Self', Journal of Consciousness Studies, 4 (4-5), pp. 405-428.

Strawson, G. (2009). Selves, Oxford University Press

Thompson, D. (1990). 'The Phenomenology of Internal Time-Consciousness'

Williams, D. C. (1951). 'The Myth of Passage' Journal of Philosophy, 48, pp. 457-472. Reprinted in R. Gale (ed.) The Philosophy of Time 1967, pp. 98-116.


About The Author

I was born in 1965 and grew up in New England. I now live in a suburb north of Boston, Massachusetts, in the USA. I worked for decades as a computer programmer, and once wrote a surprisingly accessible and entertaining book about Boolean algebra, the logic used inside computer chips.

I have a bachelor's degree in Computer Science, and no further degree. As an undergraduate, I became interested in artificial intelligence. Like many student programmers, with the hubris often found in undergraduates, I thought that I should be able to program a computer to think. I no longer think that I will ever write such a program. Now I'd settle for a good essay about how intelligence works or, barring that, an essay about exactly why we will never know. Some time ago I decided that the problem of consciousness is a deeper and more interesting problem than that of functionally realizing artificial intelligence. Further, I now suspect that we will have to understand consciousness before we have a realistic shot at AI in the first place.

This book started as a series of essays that I wrote over a long time, and I have attempted to knit them together into a cohesive single work. I have chosen to compose it in raw HTML and make it available online to anyone with a browser, so I guess that makes it a self-published e-book.