Not only do I not have a knock-down theory of consciousness, I don't even have a clear definition of it. Different people mean different things by the term. In any kind of philosophical inquiry, you can always cheat by defining your problem away. For instance, I could define consciousness to be the property of being a piece of gorgonzola cheese. No mystery, no problem. But this would be silly. As we inquire into some aspect of the world or ourselves, on one hand we must respect the sense in which people commonly use the words they use to talk about these aspects. On the other hand, we have some latitude to define the terms we use in such a way that they get to the heart of what is mysterious or interesting about whatever it is we are asking about. I subscribe to a characterization of consciousness that I believe gets to the meat of the matter. In my opinion, the meat of the matter is the Hard Problem.
In his book The Conscious Mind (1996), David Chalmers popularized the distinction between the "easy problems" of cognition (the abilities to reason, remember, evaluate, report on internal states, etc.), which might be understood in the next century or two, and the "hard problem" of subjective consciousness. The hard problem is hard because it just does not seem amenable to the sort of analysis that modern science knows how to do.
The hard problem refers to the fact that you will never be able to tell me a story about information processing, or about biochemistry, or about anything based on physics as currently construed, which will come close to explaining why green looks green to me, or why middle C sounds like middle C. These basic ineffable sensations are called qualia (singular quale) in the literature of philosophy of mind. Subjective consciousness itself is sometimes characterized, at its most basic, as what it is like to be you, or to have some sensation or another.
We are taught that the entire universe and everything in it is made up of atoms and molecules and photons and things like that, all interacting according to the laws of physics. The claim of the Hard Problem as I understand it is that a) the redness of red as it appears to me is an absolute, objective1 fact of the universe, and b) that no account of atoms and molecules interacting, no matter the complexity of their interactions, will predict or explain the redness of red as it appears to me. So in the redness of red we are faced with an absolute, true fact of the universe, but one that no theory of physics, or information, or computation will ever be able to explain. That consciousness in this sense is real, and that it is utterly unexplainable in any terms familiar to science, is at its heart an intuition, and one not everyone shares.
Clashes of intuition at this level tend, as Chalmers says, to degenerate into table pounding on both sides. Nevertheless, people have come up with clever thought experiments to help sceptics arrive at the intuition that the hard problem exists. One of the most famous, and for me one of the most compelling, is that invented by Frank Jackson (1986) in his essay, "What Mary Didn't Know".
Imagine Mary, a supergenius particle physicist/neuroscientist, in a future world in which our understanding of physics is complete and perfect. She understands and has mapped out every single neural pathway, electro-chemical reaction and quantum wiggle in her own brain. But Mary has been raised in an entirely black and white environment. She has never seen anything red, for instance. She knows exactly how the physics of photons of red light works, and she can predict exactly how she would react behaviorally if she did see something red, but she has never actually experienced it directly. If you have ever debugged a computer program in C, for example, using a debugger, in which you single-step through your code line by line, you may get a sense of the way in which Mary understands her own predicted reaction to seeing a red apple. She can "walk through the code" perfectly, but she has never experienced red.
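The debugger analogy can be made concrete with a toy sketch. Everything below is invented for illustration (the function, the wavelength ranges as stand-ins for "the complete physics"): Mary can trace every line of a program like this and predict its output exactly, yet nothing in the trace is the experience of red.

```python
# A purely illustrative sketch of Mary's predicament. The function and its
# wavelength thresholds are toy stand-ins for Mary's complete physical
# knowledge of her own perceptual machinery.

def perceive(wavelength_nm: float) -> str:
    """Map a wavelength to the behavioral report Mary can predict."""
    if 620 <= wavelength_nm <= 750:
        return "That is red."
    if 450 <= wavelength_nm <= 495:
        return "That is blue."
    return "I don't recognize that color."

# Mary can single-step through every line of this function and predict
# its output for any input she likes...
print(perceive(680.0))  # prints: That is red.
# ...but no amount of walking through the code is the same as seeing red.
```

The point of the sketch is only this: a complete line-by-line account of the computation is available to Mary, and the quale still is not in it.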
Now imagine that Mary gets let out of her black and white room. She sees a red apple. For all her abstract knowledge, perfect and complete as it was, something entirely new happens in her head when she sees that apple. A lot of people argue about whether this new experience constitutes new knowledge or a new ability, but this is just an argument about words (that is, about how you define the words "knowledge" and "ability"), and is much less interesting than whatever is happening to Mary.
The point here is that if you think of the brain as a big information processor, even being as generous as your wildest dreams will let you in terms of its sheer processing capacity, future physics, etc. you still leave something out. The information processor does not see red. It counts pixel values on its visual grid, it accesses memory locations, it does data smoothing and runs comparisons, but it does not have subjective experience. Perhaps when thought of in a certain way, from the point of view of a certain level of abstraction (projected onto the system by the observer), the information processor may be seen as seeing red, but there is no reason to believe - none in the world - that it really is seeing red, objectively, the way I (and presumably you) do.
Another illustrative example comes from Thomas Nagel's (1974) essay, "What Is It Like To Be A Bat?". Bats employ a sonar-like echolocation trick to locate bugs in the air. The claim is that there is nothing you could possibly ever know about how a bat's brain, ears, and vocal system work that would let you know what it is like to sense a moth 20 feet away; kind of like hearing, but not really, kind of like touching with a long arm, but not really. Similarly, I have read that bees see colors that we can not see. What do those colors look like? We could know everything about bee brains and bee eyes, how the bees react to those colors and why, how the ability to see those extra colors evolved, etc. and we would still never know personally what those colors look like. If all mental activity is information processing, how is it that we could have all the explicit, articulable information about bee perception but still not know something about it? Couldn't we, with our far superior brains, crunch through the bee color perception algorithm? Couldn't we "walk through the code"? Most people would agree that such an exercise would not deliver a sense of what bee colors actually looked like to the bee.
The arguments about the inability of information processing or physical theories to explain subjective consciousness apply to the human brain itself. Just as the silicon, flipping bits, will never see red, we have no principled reason to derive the fact of our seeing red from the bit flipping in our own neurons.
This point is illustrated by another thought experiment, the notion of a zombie. A zombie, in this context, is basically a person who has no phenomenal consciousness, that is, who experiences no qualia, but whose brain and cognitive machinery otherwise works just fine. A zombie has the same neural connections that you do, acts and talks like a normal person, but is "blank inside". A zombie brain essentially is a human brain, but considered only as an information processor. Note that a zombie would claim to see red, and seem to fall in love, and would in fact do all the things with its brain that we do with ours, producing all the same reactions, except that it would not be like anything to be the zombie.
The zombie thought experiment is extremely controversial. There are some people who think that the whole notion of zombies is incoherent. If something talks, thinks (if by "thinking" we mean only the sort of processing that could be modelled on a computer, the pure information processing manifested in us by our neural firings), and acts like a conscious person, then that entity is conscious, full stop. To speculate about the conceivability of something that talks, thinks (in the limited way mentioned above) and acts like a person but is not conscious is like speculating on the conceivability of married bachelors. There is nothing extra about consciousness besides the functional mechanisms of information processing, and any claims to the contrary are just spooky mumbo-jumbo, the products of sloppy thinking. To them, it is as if I hypothesized an atom-for-atom copy of a water fountain, one that behaved exactly like the original water fountain, but just wasn't, you know, a water fountain.
I find the thought experiment compelling, in that I find zombies logically conceivable. Given our current understanding of brains, it makes sense to speak of a brain that worked exactly as mine does now, producing the same output responses to the same input stimuli, and employing the same neural mechanisms, but which skipped the phenomenal conscious part. The zombie thought experiment is intended to stimulate the same intuition that the Mary experiment does: we do not have, within current science, any principled, theoretical way (other than brute correlation) to get from a complete description of how the parts of the brain function to the fact of subjective consciousness and the existence of qualia. A failure of prediction of this sort is a sign that your science is incomplete at best, and quite possibly seriously flawed. With regard to the Hard Problem, this failure of entailment from the facts about brain processing to the facts about consciousness has been called the explanatory gap.
While it is often hard to draw a distinct line between qualia and cognitive, functional information processing (a fact I believe is underexplored), there is something going on when I see red that is in principle unexplainable by any theory of mentation that allows for minds being implemented by computers. The redness of red as I experience it is real, and can not be inferred from information processing alone. It stands as an extra fact about the universe that demands explanation. To define consciousness as the functional information processing is to define away the real mystery of consciousness, to sweep it under the carpet.
Frankly, I bet that zombies, in the strictest sense, are impossible. My hunch is that if you could copy me, molecule for molecule, what you would wind up with would be conscious, but for reasons that aren't even approached by our present-day science. Thus it is an indictment of current science that zombies are consistent with everything we know, even though they may someday turn out to be impossible in practice.
What if, every time I turned on my kitchen light switch, the neighbor's dog barked? Let us say that I tried this a hundred times, and each time, the dog barked, even when I got up in the middle of the night and turned on the light. Imagine further that I hired an electrician to follow the wiring, and he found nothing out of place. Let's say I went over to the neighbor's house and examined the spot where the dog was tied up, and even put the dog's leash and collar on myself and lay down in the dog's spot and had my sister-in-law turn on the light and felt no effect - except that the dog still barked, uncollared, standing next to me. In this situation, I would wonder what the connection was, and I would look for answers.
It would be an inadequate explanation of this mystery to claim that the dog barking just is the kitchen light switch being turned on. In response to this inadequate explanation, I could propose a zombie version of the scenario: I could ask my opponent to imagine a world in which I turned on the kitchen light switch, and the dog didn't bark, and only the kitchen light turned on. Unless you accept the original inadequate explanation that the dog barking just is the kitchen light switch being turned on at face value, there is nothing inconceivable or logically or metaphysically wrong with this zombie scenario.
One temptation might be to brush away qualia as "psychological" effects. This term carries with it the implication that the phenomena in question are relatively complex, high-level effects, and thus amenable to analysis, as my feelings about my mother might be amenable to Freudian analysis. Maybe qualia are as they seem, simple and basic, and maybe they are complex things that only seem simple and basic to the conscious experiencer. Either way, tarring them with the psychological brush doesn't make them go away, or provide any clues as to how they, or consciousness, arise.
Another possible response to the problem of consciousness might be, "who cares?" If my zombie twin or a suitably programmed computer could write poetry that stirred the soul, or compose operas, or carry on lively cocktail party chatter as well as anyone else could; if, in fact, there were no externally observable differences between my zombie twin and me, why not just use Occam's razor and forget the whole consciousness business? For all practical purposes, the universe runs quite well without any mention of it. This, however, is an intellectual abdication - the stick-your-head-in-the-sand approach.
Science does not progress by sweeping things under the rug which do not fit conveniently into the established order. In fact, in any scientific era, the science of the day seems complete and perfect, except for one or two minor anomalies. It is these little anomalies that end up bringing down the entire edifice. Further, every time there is a true scientific revolution, not only are the existing theories overturned in favor of new ones, but inevitably the old methods and criteria for what constitutes a good theory are revised as well, often radically. People who resist the Hard Problem because it has no meaning within the bounds of third person, objective scientific exploration are making a dogma of their methodology. They are generals fighting the last war.
I do not usually put a lot of stock in sociology of science, nor do I like to emphasize the cultural aspects of scientific endeavor, but what science is, its proper aims and methods, is a lot less monolithic than most people believe. We must be open minded as we consider the kinds of methods we might have to use to explore whatever facts about the world Nature sees fit to present us with. After each scientific revolution, we find ourselves perfectly able to frame the questions that have just been answered - questions we might not even have known how to ask beforehand. The fact that we don't know how to properly frame certain questions now within our science is not an argument that the questions themselves are wrong - quite the contrary. It is the questions that we aren't sure even how to ask that should interest us the most. To turn away from such questions is cowardly and lazy.
People who do not accept the Hard Problem (for now I'll call them reductive physicalists) generally like to characterize the belief that consciousness can not be reductively explained within present-day science as mystical mushy-headed wishful thinking. Sometimes they sneer openly ("Away, into the dust-bin of History!"), other times they are more polite (and patronizing: "Come on in - the water's fine! Don't be afraid to give up your quaint superstitions and your foolish vanities"). But nearly all of them at some point or another in their writings betray a certainty that anyone who believes that there is something deeply mysterious about consciousness is McCoy to their Spock: irrational, scared and desperate to hold onto the transcendent specialness of human beings, logic and science be damned. The reductive physicalists want to be the Grinch, standing on the side of Mt. Crumpit, with an ear cocked toward Whoville, hearing the Whos cry, "Boo hoo hoo, he stole our souls!".
There are, of course, people who really do want to cling to the belief in their souls at any cost to reason. But to imagine that all people who accept the Hard Problem are motivated by this desire is to indulge in kicking a straw man around, and an invitation to complacency and dogmatism. As an undergraduate atheist computer programmer, I was a die-hard reductive physicalist. I wanted nothing more than to prove once and for all that minds really were just computers, and let humanity put that in its collective pipe and smoke it. I wanted to be the Grinch. I actually wondered (with some satisfaction) what sort of spin the Catholic church would try to put on an example of true artificial intelligence. Would some people get depressed? Would some commit suicide? Or would people, by and large, be mature enough to take it in stride and think it was a fascinating advance? Ultimately I was dragged kicking and screaming to the view that the mind can not be reduced to mere information processing2.
Sometimes reductive physicalists compare belief that the Hard Problem is hard to vitalism of centuries past. This was the belief that there was some mysterious elan vital, a life force that animated living things beyond the mere mechanisms of locomotion, eating, reproduction, etc. The more we found out about how life worked at a molecular level, however, the less anyone believed in an elan vital. Belief in vitalism was ultimately exposed as a failure to appreciate how beautifully complex the mechanisms of life were. Once one understood the mechanisms, however, there was nothing left to explain. Similarly, argue the reductive physicalists, once we understand enough of the cognitive mechanisms of the brain, the Hard Problem will melt away into the details.
The problem is that subjective consciousness (or qualia) is not something we drag into the picture to explain something or other that we observe, as elan vital was invoked to explain what we observe about life, or to use another example reductive physicalists like, as the luminiferous ether was invoked to explain light waves in the 19th century. Consciousness is the raw data, the observed thing that needs explaining. It is the light, not the ether.
Some people argue that what I call subjective consciousness is some kind of illusion. As attempts to dismiss consciousness go, this one does not stand up to much scrutiny. What is an illusion? It is something that seems one way but is really another. My claims rest on the observation that that red really seems red to me. The counter claim that this is an illusion boils down to, "red doesn't really seem red, it only seems that it seems red." But seeming, like multiplying by 1, is idempotent - inserting more "seeming" clauses into my claim does not change it one bit. Whether red seems red, or seems that it seems that it seems that it seems … red, the Hard Problem stands before us. The Hard Problem consists of the fact that anything seems like anything at all. If subjective consciousness is an illusion, then who or what exactly is the victim of that illusion, and how can it be such a victim without the Hard Problem being a problem for it? There is a fundamental bootstrapping problem. There simply is no basis for anything to seem like anything to anything, or anything with which to build any seeming, in a world made of utterly blind, stupid, amnesiac particles.
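The idempotence point can be put in toy computational terms. The sketch below is only an illustration (representing a claim as a string is, of course, a caricature): wrapping a claim in further layers of "seeming", like multiplying a number by 1, leaves it exactly where it started.

```python
# A minimal, purely illustrative sketch of the idempotence of "seeming".
# Representing a claim as a string is a toy; the point is only that
# applying the operation again and again changes nothing.

def seems(claim: str) -> str:
    """Wrapping a claim in 'it seems that...' leaves its content unchanged."""
    return claim  # idempotent: seems(seems(x)) == seems(x)

claim = "red seems red to me"
assert seems(claim) == claim
assert seems(seems(seems(claim))) == claim  # extra layers change nothing
```

However many "seeming" operators the illusionist stacks up, the claim that something seems some way survives intact, and with it the Hard Problem.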
It is sometimes said that taking the Hard Problem seriously represents a failure of imagination: the fact that I could not imagine traditional science (neurobiology, information theory, physics) explaining what it is like to see red says a lot more about my powers of imagination than it does about the actual limitations of traditional science. In the same way, it is argued, a vitalist's inability to imagine life being nothing more than molecular processes simply proved to be a failure on the vitalist's part to appreciate just how complex and tiny the molecular processes are. The vitalist's scepticism, however, ultimately came down to a matter of scale and complexity - the vitalists did not properly appreciate that the components of life could be quite that small or that complex. Claiming that more scale and complexity will turn ones and zeros (or their effective equivalents) into red simply makes no sense.
The fundamental components of the world to a reductive physicalist are completely blind to one another, and completely stupid, and have no memory whatsoever. They are basic particles, and they just careen in one direction, then another. Even when they attract, repel, or collide with each other, they don't really "see" or "know" about each other - they just careen. They don't know why, or what it is that is influencing them to careen in this particular direction at this particular speed. It sounds funny even to say it this way, but I think some people do not really sense in their guts just how blind, just how stupid, just how little memory the fundamental particles must have to a committed reductive physicalist. To get anything not blind and not stupid out of them, you must attribute a lot of power to the notion of "levels of organization". You can't get that particular blood from that particular stone, however. The blind and stupid stay blind and stupid, and utterly oblivious to any "levels of organization" no matter how many you put in a room or how they are arranged.
It strikes me, in fact, that the reductive physicalist claim is an extravagant and unsupported one, a point which is often overlooked simply because reductionistic physicalism has been the reigning orthodoxy for several centuries now. The reductive physicalists claim that if you get enough unconscious stuff together in a big pile, and arrange the pile in a certain special way (a complex enough way, perhaps, or a pile that conforms to a certain functional schematic, or maybe Druidic runes), then poof! subjective consciousness will appear. They claim that this must be the case, because centuries of scientific advances have shown us that the reductive physicalist approach is the perfect framework for understanding the universe, so it simply must be the case that it is adequate to explain consciousness too. Although they (ahem) can't give us the exact details just now. I find this alchemical hypothesis at least as bizarre, spooky and mystical as anything I've ever heard. It is a wild leap of faith on their part, and the onus is on them to show us the money. It is foul play to try to shift the burden of proof back on their critics, claiming (O ye of little faith!) that scepticism of the reductive physicalist position betrays some kind of shameful failure of imagination.
Moreover, it is not a failure of imagination that leads me to take the Hard Problem seriously. On the contrary, it is because I can imagine a day not too far off (fifty years? One hundred?) on which we solve Chalmers's easy problems. On that day, cognitive science and neurobiology complete their intended programs and actually map every single event in the human brain, every information flow at any level of organization you please, every secretion of every neurotransmitter. On this day, it will be possible for us (like Mary in her black and white room) to detail everything that happens between photons striking my retina and my uttering, "What a beautiful sunset!". The cognitive scientists and neurobiologists will collect their Nobel prizes and go home satisfied, and nothing in their description of the brain will give the slightest hint of what it is like to see red, or why anything seems like anything at all. Yes, it is true that I can not imagine that day in detail, in the sense that I do not have that final theory at my finger tips down to the last synapse (otherwise I would be the one collecting the Nobel prize right now), and there's the rub, the reductive physicalists would say. If I could see that theory in detail, they would argue, it would be clear why red seems like red.
For nearly a century, mentioning consciousness was a career killer in the field of academic philosophy. In the last generation or so, however, the question of consciousness has been coming up with greater and greater urgency, and it is attracting pretty level-headed, math/science type people, not mystics, not new-agers, not religious wishful thinkers. I think this is so precisely for the reasons that I mentioned above: as science progresses, and closes in on its stated goals regarding our brains, its limitations stand out in ever sharper relief. The physical sciences, as their boundaries of inquiry are currently construed, deal only in functional behavior, externally measurable effects. There are perfectly valid questions about Nature (what is it like to see red?) that are completely outside the bounds of natural science as currently practiced. That is, it is conceivable that we could have a complete and perfect understanding of physics and all the other "hard" sciences, and have never framed, let alone answered, those questions. My ability to imagine this state of affairs may be incorrect in some way, but it certainly does not represent a failure of imagination on my part.
Physics and physicalism are not so much wrong (except in their claims of exclusivity) as they are incomplete. This is just the way science works. Newton invented a formal basis for physics, and for a long time it seemed dead accurate. But along comes Einstein, and it turns out that while Newton's physics was perfectly consistent and accurate within its domain, it was incomplete - it turns out to be merely a special case of a more general set of laws. Then a decade later, Einstein comes out with General Relativity, and shows that his own earlier work, while perfectly applicable within its proper domain, is really just a special case of still more general laws (hence "general" vs. "special" relativity). Science works by adding more layers to the outside of the onion. Old theories are not so often disproved by new ones as they are generalized and subsumed by them.
My seeing of red is not a philosophy; it is not a way of thinking about or interpreting some theory or idea; it is not an abstraction; it is not an inference I have drawn or some metaphysical gloss I have put over reality. It is a brute fact about the universe, a fact of Nature. It is really, really there. It is explanandum, not explanation. As such, it is incumbent upon our natural science to explain it. If my seeing of red is not amenable to the currently accepted methods of natural science, then so much the worse for those currently accepted methods. Those who deny the existence of qualitative consciousness remind me of the church officials who refused to look through Galileo's telescope because they did not want their neat and tidy theological world upset by what they might see.
So where do we go from here? Loopy as it sounds, consciousness, or something that scales up to consciousness in certain kinds of systems, must be built in at the ground floor, as part of the fundamental furniture of the universe. Someday, after we have pinned it down a bit, it will stand right up there with mass, charge, and spin. This view is traditionally called panpsychism, but some people prefer pan-protopsychism to emphasize that it is not consciousness as we know it that stands as a fundamental building block of the universe, but some tiny crumb or spark that aggregates into full-blown human consciousness under certain conditions or in certain types of systems. Also, "panpsychism" carries, for some people, medieval, vitalist connotations; most contemporary panpsychists want to dissociate themselves from the belief that "rocks think". No one knows (yet) the principles according to which proto-consciousness aggregates into full-blown human consciousness, or what is so special about brains that they support this aggregation. In the range of potential answers to these questions there is room for many different versions of panpsychism, some more conservative (for lack of a better term) than others. It may well be that consciousness scales up only under very particular circumstances, not normally found in nature, but which natural selection has stumbled upon and exploited as it "engineered" brains.
1 It is not really a contradiction to say that my subjective experience is an objective fact of the universe.
2 It has been suggested that philosophy would benefit if the word "mere" and its synonyms were banished from discourse.