One man's algorithm is another man's data.


Representations and the Mind: Descriptive vs. Prescriptive Information

A lot of people over the last century or so have tried to pin down what makes minds special or interesting in terms of the mind's ability to represent. They have often tried to characterize the sorts of systems (rarely minds, mind you, always "systems") that produce and use representations or models. Of particular interest are systems that have a model of themselves. There is even a school of thought called Representationalism that holds, roughly, that conscious states are conscious only insofar as they are representational states. That is, in order to be conscious, a mental state must represent.

This is exactly backwards. It makes the representing primary, and the consciousness a secondary effect of the representing. But representation, construed in a reductionist/physicalist framework, is a great example of a may-be-seen-as kind of phenomenon. Lots of things in the physical world may be seen as representing other things, but there is no inherent, principled sense in which any of those things absolutely, definitely, really does represent another thing, at least no sense that Nature is bound to respect or behave differently because of. As I and others have argued, consciousness is a really-there kind of phenomenon, and may-be-seen-as phenomena just won't do as explanations of really-there phenomena.

I happen to have an old answering machine on my home telephone. When I don't pick up the phone, it tells whoever is calling that I'm not home right now. It is a classic, purely causal, beer-can-falling-off-a-fence-post physical system. To what extent is it really, truly, representing me as not being home right now? How much more internal state would it have to have, "modeling" the world in some special way, perhaps processing this model in an "integrated" way, before we would say that yes, it really was representing me as not being home, in any way that was relevant to these discussions? Piling on more beer cans buys you nothing, absolutely nothing, in terms of really representing, if there is such a thing.

If you found an alien artifact, could you, by reverse engineering alone, determine with certainty that it did or did not contain a model or representation? Are you sure there is a fact of the matter? Due to minute (but non-zero) gravitational forces, the state of my car's spare tire varies in ways that correspond to the states of the Hoover Dam. Is my car's spare tire a model of the Hoover Dam, and if so, what special powers or properties does this fact confer upon the Hoover Dam/spare tire system, and why? In general, it is very hard to find a proponent of theories that crucially involve models and representations who explains exactly what makes one thing a model of something else. Like the concept of information, the concept of "representation" is often left frustratingly vague and abstract by the people who use it as a reductive base for their theories. I believe that some of the intuitions that lead people to ascribe such power to representation melt away if we examine the notion a little more closely. Specifically, I'd like to take a look at the distinction between a descriptive model or representation, and a prescriptive algorithm.

Information comes in two flavors: 1) prescriptive ("Pick that up") and 2) descriptive ("The museum is open today"). The opcodes that comprise a computer program at the lowest level are prescriptive information (they tell the CPU what to do during a given tick of the computer's internal clock), whereas the data upon which the program operates (whether that data resides in the computer's memory or comes from outside, through an input device) constitutes descriptive information. Descriptive information represents (or misrepresents) something, while prescriptive information tells you to do something. If a fragment of a computer program says, "If x is greater than 43, open the pod bay doors", the fragment itself is prescriptive, while the number being examined, the x, is descriptive data.
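
To make the distinction concrete, here is a minimal sketch in C (the routine names and the printed messages are my own inventions, standing in for the fragment above): the body of check compiles into prescriptive opcodes, while the value of x is the descriptive datum those opcodes examine.

#include <stdio.h>

/* Hypothetical routine: the effect the prescriptive fragment commands. */
void open_pod_bay_doors(void)
{
    printf("Opening the pod bay doors.\n");
}

/* The body of this function is prescriptive information (it becomes
   opcodes commanding the CPU); x is the descriptive datum examined. */
void check(int x)
{
    if (x > 43)
        open_pod_bay_doors();
}

int main(void)
{
    check(44);    /* the datum is 44, so the doors open */
    return 0;
}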

Most people think of information as primarily descriptive: it sits there, and you hold it before you and regard it: "Oh, so Bismarck is the capital of North Dakota. How interesting." But algorithms are information too ("Go three blocks, turn left at the light, pull into the Krispy Kreme drive-through and order a dozen hot glazed doughnuts."). Information can be prescriptive as well as descriptive; it can tell you what to do as well as inform you of something. Information theory is indifferent to the distinction: Shannon's laws and the rest don't care at all whether the information is taken as descriptive or prescriptive by its eventual receiver. Any string of 0s and 1s has the same bandwidth requirements on the wire and is quantified exactly the same way whether regarded as descriptive or prescriptive, as data or algorithm.

If you find a computer file full of binary data, and you have no way of telling what the data was used for, you cannot tell whether the file constitutes descriptive or prescriptive information. There is no fact of the matter, either, if you just consider the computer's disk itself as a physical or even an informational artifact. It's just a bunch of 0s and 1s. For you to make the prescriptive/descriptive distinction, you must know what the disk was intended for, and in particular, you must know a lot about the system that was supposed to read it and make use of it. Only by taking the receiver of the information into account, and looking closely at how it processes the information, can we determine whether the file on the disk constitutes data or algorithm. Does the receiving system open the file and treat it as salary records, or does it load up the file and run it as a program? Indeed, one system could treat it as a program, and another could treat it as data, compressing it perhaps, and sending it in an email message. The choice of whether a given piece of information is prescriptive or descriptive depends on how you look at it; the distinction between description and prescription, in terms of information and information processing, is a may-be-seen-as kind of distinction.
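
Here is a minimal sketch of the point, using an invented four-opcode instruction set: one receiver executes the very same array of bytes as a program, while another merely tallies them as data.

#include <stdio.h>

enum { PUSH = 0, ADD = 1, PRINT = 2, HALT = 3 };

/* Receiver 1 treats the bytes as an algorithm and obeys them. */
void run_as_program(const unsigned char *b)
{
    int stack[16], sp = 0, pc = 0;
    for (;;) {
        switch (b[pc++]) {
        case PUSH:  stack[sp++] = b[pc++];        break;
        case ADD:   sp--; stack[sp-1] += stack[sp]; break;
        case PRINT: printf("%d\n", stack[sp-1]);  break;
        case HALT:  return;
        }
    }
}

/* Receiver 2 treats the very same bytes as inert data to be tallied. */
int sum_as_data(const unsigned char *b, int n)
{
    int total = 0;
    for (int i = 0; i < n; i++)
        total += b[i];
    return total;
}

int main(void)
{
    unsigned char file[] = { PUSH, 2, PUSH, 40, ADD, PRINT, HALT };
    run_as_program(file);                            /* prints 42: an algorithm */
    printf("%d\n", sum_as_data(file, sizeof file));  /* prints 48: just data    */
    return 0;
}

Nothing about the bytes themselves settles which treatment is the right one.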

Consider the AND gate. An AND gate is a very simple piece of circuitry in a computer, one of a computer's most basic logic components, in fact. It is a device that takes two bits in and produces one bit as output. In particular, it produces a 0 if either (or both) of its input bits is 0, and produces a 1 if and only if both input bits are 1. That is to say, it produces a 1 as output if and only if input1 AND input2 are 1. Note that the operation of the AND gate is symmetrical: it does not treat one input bit differently from the other; 1 AND 0 gives the same result (0) as 0 AND 1. Another way of saying this is that the AND operation obeys the commutative law. The operation of the AND gate is summarized in the following truth table:

input1    input2    input1 AND input2
  0         0               0
  0         1               0
  1         0               0
  1         1               1

But now let's arbitrarily designate input1 as the "control" bit and input2 as the "data" input. Note that when we "enable" the control input (i.e. we make it 1) the output of the whole AND gate is whatever the data input is. That is, as long as the control input is 1, the data input gets passed through the gate unchanged, and the AND gate is effectively transparent. If the data input is 0, then the AND gate produces a 0. If the data input is a 1, then the AND gate produces a 1.

When we "disable" the control input, however (i.e. we make it 0), the output of the whole AND gate is always 0, no matter what the data input is. By holding the control input at 0, we turn off the transmission of the data bit. So the control input gets to decide whether to block the data input or let it through untouched. It is the gatekeeper. But (and here is the punchline) because of the symmetry of the AND gate, our choice of which input (input1 or input2) is the "control" and which is the "data" was completely arbitrary! The decision of which input is the prescriptive input telling the gate what to do with the descriptive input is purely a matter of perspective.
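
A short sketch in C makes the arbitrariness plain: the gate is just the & operator, and commutativity guarantees that swapping the "control" and "data" labels can never change the output.

#include <assert.h>

/* The gate is just the & operator; nothing in it knows which operand
   is "control" and which is "data". */
int and_gate(int control, int data)
{
    return control & data;
}

int main(void)
{
    /* Commutativity: swapping the labels never changes the output. */
    for (int a = 0; a <= 1; a++)
        for (int b = 0; b <= 1; b++)
            assert(and_gate(a, b) == and_gate(b, a));
    return 0;
}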

Strictly speaking, then, there is no such thing as descriptive information: all information is ultimately prescriptive. Insofar as information has any effect on a receiver or information processor at all (which is to say, insofar as it is informative), it is making the processor do something. The data in a music MP3 constitutes an algorithm that instructs or commands a suitably configured machine to construct the sound waves that make up the music.

Think of a given piece of information as a physical thing, say a tiny area on the surface of a computer disk that is magnetized one way or another, indicating a 0 or a 1. If this area is to constitute information at all, it must be causally efficacious. That is, something else must do something, or not do something, or do something differently, because of the particular way that area is magnetized. For the magnetized area on the surface of the disk to be informative at all, it must make something else do something, just as a rock I throw makes a beer can fall off a fence post. This sounds pretty prescriptive. Nothing happens by virtue of the information simply being itself. At some physical level, it always comes down to the information (or more precisely, the information's physical carrier or substrate) pushing something else around, forcing a change on some other physical thing. Moreover, any physical system that forced the same kind of change on the receiver would thereby constitute the exact same information as far as that receiver was concerned.

A computer does what it does because of an algorithm, or a program in its memory. This algorithm is prescriptive information. It consists of a series of commands, and the computer does whatever the currently loaded command tells it to do. The computer itself (or its CPU) comprises the context in which the individual commands have meaning, or rather the background dispositions which determine what each command will make the computer do.

The data that the computer processes may be considered descriptive information. But to the extent that the computer's internal state changes on the basis of the data it is processing, hasn't the data dictated the machine's state, and thus its behavior? "If x is greater than 43, open the pod bay doors": isn't x here an opcode, whose value tells the computer to open the pod bay doors or not? The "data" is either not there for you at all, or it makes you do something. It is the cue ball: it knocks into other balls and sets them on an inevitable course of motion. All data are opcodes.
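
The point can be made literal with a dispatch table, a common idiom in interpreters (the routines here are, again, hypothetical): the "data" byte indexes the table and thereby selects what the machine does next, exactly as an opcode would.

#include <stdio.h>

void keep_doors_closed(void)  { printf("Doors stay closed.\n"); }
void open_pod_bay_doors(void) { printf("Opening the pod bay doors.\n"); }

/* The "data" value indexes this table and selects what the machine
   does next, exactly as an opcode selects what the CPU does. */
void (*dispatch[2])(void) = { keep_doors_closed, open_pod_bay_doors };

int main(void)
{
    int x = 44;            /* the supposedly descriptive datum...   */
    dispatch[x > 43]();    /* ...functioning here as an instruction */
    return 0;
}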

The prescriptive aspect of the supposedly descriptive data in a computer is obscured by the fact that the data lacks a clear, stable context in which its effects are felt, whereas the same CPU tends to do the same thing each time it is given the same opcode. The effects of different data are highly dependent on the current state of the machine. Nevertheless, after the data is read, the machine's state is different because of the specific value of the data, and the machine will behave differently as a result. The machine acts differently because of this data, just as it acts differently on the basis of different opcodes in its algorithm. There is no principled natural distinction that can be drawn between the information that comprises the algorithm and that which comprises the "data" on which the "algorithm" operates.

Internal Models of the Self

There are theories of consciousness that regard consciousness as a product of the interaction of a system with an internal model within itself. What sort of additional information does an internal model provide the larger system that it could not have derived on its own (given the external stimuli), and how does this additional information confer consciousness? It seems that if we have a system that contains an internal model, we could optimize it a bit, and integrate the model a little more tightly into the rest of the system. Then maybe we could optimize a little more, and integrate a little more, all the while without losing any functionality. How would you know, looking at such a system, whether it just didn't have an internal model anymore, or whether it did, but the model was distributed throughout in such a way that it was impossible to disentangle it from the rest of the system? In the latter case, what power did the notion of the internal model ever have?

The problems with thinking that there is something special about self-models are similar to those that plague higher-order thought (HOT) theories: once you separate out some aspect or module as special to the system as a whole, the specialness really comes from the communications channel between that module and the rest of the system, and we are right back where we started.

Let us assume a conscious system that has a distinct model (either a model of itself, or a model of the world, or a model of the world including itself - whatever kind of model deemed necessary to confer consciousness). In good functionalist fashion, let us denote this in our schematic diagram of the whole system with a box labeled "model". Between the model box and the rest of the system is a bidirectional communication channel or interface of some kind. This kind of thing is often denoted in schematic diagrams as a fat double-ended arrow connecting the "model" box and the box or boxes representing the rest of the system. Let us call this interface the API (for Application Programming Interface, a term borrowed from computers). This API may be quite complex, perhaps astronomically so, but in principle all communication between the rest of the system and the "model" box can be characterized and specified: the kinds of queries the rest of the system asks the model and the kinds of responses the model gives, and the updates from external stimuli that get fed into the model.

People who believe in these sorts of theories generally claim that the rest of the system is conscious, not the model itself. Because, by hypothesis, all communication between the (purportedly conscious) rest of the system and the model takes place over the API, the consciousness of the rest of the system comes about by virtue of the particular sequence of signals that travel over the API. The (conscious) rest of the system does not know, cannot know, and does not care how the model is implemented: what language it is written in, what kinds of data structures it uses, whether it is purely algorithmic with no data structures at all except for a single state variable, or even purely table-driven in a manner similar to Ned Block's Turing Test beater. It could well be completely canned, the computational equivalent of a prerecorded conversation played back. As far as the rest of the system is concerned, the model is a black box with an interface. Let us just think of it, then, as an algorithm, a running program.

Once you separate the model from the rest of the system conceptually, you necessarily render it possible (in principle) to specify the interface (API) between the rest of the system and the model. And once you do that, there is nothing, absolutely nothing, that can happen in the rest of the system by virtue of anything happening in the model that does not manifest itself in the form of an explicit signal sent over the API. Anything that properly implements the model's side of the conversation over the API is exactly as good as anything else that does so as far as any property or process in the rest of the system is concerned. All that makes the model a model is the adherence to the specification of the API. The model is free, then, to deviate very far from anything we might intuitively regard as a "model" of anything as long as it keeps up its side of the conversation, with absolutely no possible effect on the state of the rest of the system.
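
Here is a sketch of the schematic in C, with an invented two-function API standing in for what would in reality be an astronomically complex interface: any implementation that fills in the function pointers satisfies the API, so a canned lookup table is, as far as the rest of the system can tell, exactly as good a "model" as a rich simulation.

#include <stdio.h>

/* The API: all the rest of the system ever sees of the "model". */
struct model_api {
    int  (*query)(int stimulus);   /* ask the model a question */
    void (*update)(int stimulus);  /* feed in external stimuli */
};

/* Implementation 1: a "rich" model that computes its answers. */
static int  sim_query(int s)  { return s * 2; }
static void sim_update(int s) { (void)s; /* adjust internal state */ }

/* Implementation 2: completely canned, a prerecorded conversation. */
static int canned[] = { 0, 2, 4, 6, 8 };
static int  can_query(int s)  { return canned[s % 5]; }
static void can_update(int s) { (void)s; /* ignores everything */ }

int main(void)
{
    struct model_api rich   = { sim_query, sim_update };
    struct model_api lookup = { can_query, can_update };

    /* The rest of the system cannot tell the two apart over the API. */
    printf("%d %d\n", rich.query(3), lookup.query(3));    /* 6 6 */
    return 0;
}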

As any model-based system can be fairly characterized in this way, I have a hard time seeing what intuitive pull this class of theories has for its fans. Remember, what we are looking for is something along the lines of "blah blah blah, the model gets updated, blah blah blah, and therefore red looks red to us in exactly the way that it does." What magic signal or sequence of signals travels over that API to make the system as a whole conscious?

In information systems as traditionally conceived, there are no models, no representations, no data. It is all algorithm. As engineers, we may find it useful to draw a line of our choosing and call the stuff on the left side "data" and the stuff on the right side "algorithm" or "data processor", but this is not a principled distinction. It is ad hoc, a may-be-seen-as distinction. Any theories of mind that depend on certain kinds of "models" or "representations" being operative then degenerate back into strict functionalism, since the models they speak of turn out to be just more algorithm, just as if they were utility subroutines.

If all information is, at heart, prescriptive, then what becomes of reference, or self-reference in particular? Thinkers have been very interested in self-reference for most of the last century, but what is so special about it? If information is prescriptive or algorithmic, then all supposed cases of referential loops turn out to be causal loops like the earth revolving around the sun, or the short computer program "start: do some stuff; go back to start". A computer routine that is recursive is one that calls itself, like the factorial calculator. Recall that, for instance, 5 factorial (written 5!) is 5 * 4 * 3 * 2 * 1, or 120. The computer program to calculate that looks like this:


/* Recursive factorial in C: the routine calls itself with the next
   lower number until the input reaches 1. */
unsigned long factorial(unsigned int input)
{
    if (input <= 1)
        return 1;                           /* base case */
    return input * factorial(input - 1);    /* the self-referential call */
}

This routine calls itself with the next lower number until the number reaches 1, at which point it returns a 1. This routine, then, is self-referential. But as far as the computer running it is concerned, there is nothing special or mind-bending about it. It neither knows nor cares that it is calling itself rather than a long series of separate routines. At each call, it just adjusts its Program Counter register to go wherever it is told to go, pushing some stuff on the stack. One hundred different routines, or one hundred calls of the same routine, it makes no difference to the computer. In this, the computer is right.
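
To see that nothing hangs on the self-call, here is the same calculation unrolled into five separate routines (a sketch fixed at 5!, rather than taking an arbitrary input): the computer does exactly the same work, with no self-reference anywhere.

#include <stdio.h>

/* The recursion unrolled: five distinct routines, same call pattern. */
unsigned long fact1(void) { return 1; }
unsigned long fact2(void) { return 2 * fact1(); }
unsigned long fact3(void) { return 3 * fact2(); }
unsigned long fact4(void) { return 4 * fact3(); }
unsigned long fact5(void) { return 5 * fact4(); }

int main(void)
{
    printf("%lu\n", fact5());    /* 120, with no self-reference at all */
    return 0;
}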

Mary and the Ability Hypothesis

The prescriptive/descriptive distinction has interesting implications for those who take issue with Jackson's black-and-white Mary thought experiment by claiming that upon being released from the black and white room, Mary does not acquire any new knowledge, but rather gains a new ability. What's the difference? Either way, she adds to the store of information in her head: either to her repertoire of descriptive information (knowledge), or to her repertoire of prescriptive, algorithmic information (ability). To claim that any great arguments or counterarguments about consciousness depend on its being one way or the other presupposes a real, hard-and-fast, really-there distinction between the two, as well as our ability to tell the difference. At least from a materialist point of view, both are lacking.

Where does the intuitive appeal of philosophies like representationalism come from? Part of it, I think, is the idea that the system, the processor, can respond dynamically to the representation, the data. It can choose, albeit in a deterministic way. This intuition loses some of its strength when you fold the "data" into the algorithm, however. If you take the data that the algorithm is presumed to respond dynamically to and declare it to be just part of the whole algorithm, the algorithm doesn't seem quite so dynamic anymore.

Algorithms are deterministic. Or rather, their physical manifestations obey the laws of classical physics, and those laws exhaustively account for their behavior. They crunch along on steel rails of causality. If you look closely enough at them, there are no options open to them, no choices whatsoever. They do what they must. That's what it means to follow an algorithm. If I knock a beer can off a fence post with a rock, it falls to the ground. This is the essence of an algorithm. There is no way even of saying that an algorithm runs correctly or incorrectly. It just runs (the beer can falls off the fence post). In particular, there is no sense in saying that an algorithm is true or false. It just does what it does. It neither represents nor does it misrepresent. It just does. (Or rather, and importantly, I think, whoever or whatever faithfully executes the algorithm just does. The algorithm itself just sits there).

Stuff does what it does because of the laws of physics, because of diodes discharging electrical impulses and such, not because of any "algorithm" being followed by an "information processor". The intuition that there is a certain plasticity inherent in algorithms, that they could do other things than what they do, is a mirage, a product of our cognitive limitations. Algorithms do what they must do. If I don't throw the rock, the beer can will stay on the fence post. No algorithm displays any more plasticity than this, really. While it may seem that an algorithm could behave differently given different data to operate on (if x equals 23, the pod bay doors stay closed), it would also behave differently if some of its subroutines were rewritten, but then it wouldn't be the same algorithm (if x equals 86, activate the espresso maker).
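
A sketch of the two kinds of "different behavior" (the routines are, once more, hypothetical): vary the datum and the doors stay closed; rewrite the subroutine and you get the espresso maker, but then it isn't the same algorithm.

#include <stdio.h>

/* Vary the datum and the behavior changes... */
void original(int x)
{
    if (x > 43) printf("Opening the pod bay doors.\n");
    else        printf("Doors stay closed.\n");
}

/* ...rewrite the subroutine and the behavior changes too, but now it
   is, by definition, a different algorithm. */
void rewritten(int x)
{
    if (x == 86) printf("Activating the espresso maker.\n");
    else         printf("Doors stay closed.\n");
}

int main(void)
{
    original(23);     /* different data: the doors stay closed */
    rewritten(86);    /* different code: espresso instead      */
    return 0;
}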

When people speak of algorithms and look to them for the special sauce of consciousness, or anything ontological, they are anthropomorphizing, projecting aspects of the mind outward into other stuff. Outside of certain limited technical contexts, the whole idea of the algorithm is an attempt to breathe life into dead, inanimate, Newtonian information, to make it jump up and run around, to give it some inherent motive power.

There is another aspect of the idea of representations that exerts an even stronger intuitive pull than the idea of the "system" exercising choice on the basis of the "model", however. This stronger pull comes from our own minds and the way they work. We simply do use representations. We, as conscious minds, do have a separate identity from the simulations of reality we create and tinker with in our heads. We do stand back from our models, regard them, and make decisions based on them.

There are descriptions in the universe. They just aren't information, in the strict information theory sense. They are qualitative, all-at-once comprehensions. That is to say, information takes on its descriptive aspect only when we step back and take it in all at once, when in our minds, it ceases to be a series of behavioral dispositions and becomes a single thing, a partless whole. This ability of ours, as I have argued, is a unique, spooky, mysterious thing minds and only minds do, like seeing red. If we are honest and want to limit ourselves to the reductionistic language of information processing, we may only speak of prescriptive information, unless we are speaking loosely, metaphorically, anthropomorphically. The descriptive aspect of information is a qualitative product of minds. Representation is real, a really-there aspect of our universe, and well worth exploring. But this exploration cannot even get out of the harbor unless we regard representation as an aspect of consciousness.

