We evolved with ten fingers, and so we adopted a base ten number system. There is no reason to think that base 10 is the best number system just because of this accident of evolutionary biology. Computers use base two. This works wonderfully for a number of reasons. First, as Claude Shannon pointed out in 1937, it allows us to apply George Boole's system of algebraic logic to switching machinery. There is a very nice way in which the two states in Boolean logic can be interpreted logically (false and true) when it suits us, and arithmetically (numeric 0 and 1) at other times. Boole gave us a neat way of thinking about 0 and 1, and certain operations that could be performed on them, that jibes well with the way we already think. He named his book, after all, "An Investigation Of The Laws Of Thought"! I also wrote a book about this some years ago.
These days we work with solid state electronics. It is pretty easy for engineers, as a matter of convention, to decide that, say, 0 volts is 0 and +5 volts is 1. Then they can go on to establish some cutoff thresholds with a buffer of indeterminacy between them, and glue some transistors together to implement Boole's operations in silicon. Circuits that only have to distinguish between two states (the big voltage and the not-big voltage) are easy from an engineering perspective. Might we someday be working with an implementation substrate that naturally allows for three states, not two? Something involving wavelengths of light, or polarization of light, or something? Maybe good old fashioned electricity might even be considered tristate (negative, zero, and positive voltage, say), depending on what we are using as our signal. Maybe we get more computational bang for the buck from an energy consumption and/or heat dissipation standpoint with ternary rather than binary logic. I have no idea.
I do know, however, that just from an arithmetic point of view, base two is a pain for humans. The numbers get long too quickly. No one wants to balance their checkbook by hand in binary. On the other hand, what if we used, say, base 100? That would be zero plus 99 other individual numerals, or glyphs, whose order we would all have to just memorize, in the same way we have burned in the fact that 7 is greater than 3. That seems unwieldy. So what would be the ideal base to use, with the trade-off between the number of individual numerals for big bases like base 100, and numbers getting too long too quickly for little bases like 2? I once read that the best base "converges on e". OK, so e ≈ 2.718, which rounds to 3 (assuming that we want to keep our base an integer!). Who knows what criteria they were using, but certainly it's worth asking how a base 3 computer might work. Whereas in a base two, or binary computer, we speak of bistate logic, in a base 3, or ternary computer, we would be using tristate logic. Similarly, binary computers manipulate "bits" (short for BInary Digits), while ternary computers deal in "trits".
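The "converges on e" claim is usually the radix economy argument: representing N distinct values in base b takes about log_b(N) digits, and if hardware cost scales as (states per digit) × (number of digits), the cost is proportional to b / ln(b), which is minimized at b = e. This is my reconstruction of the argument, not necessarily the one my long-lost source used, but it is easy to check that 3 beats both 2 and 4 by this measure:

```python
import math

# Radix economy: cost of representing N values in base b, assuming hardware
# cost scales as (states per digit) x (digits needed), i.e. b * log_b(N).
# Factoring out N, the relative cost is proportional to b / ln(b),
# which (over the reals) is minimized at b = e.
def relative_cost(b):
    return b / math.log(b)

for b in range(2, 7):
    print(b, round(relative_cost(b), 3))
# Among integer bases, 3 edges out 2: 3/ln(3) ≈ 2.731 < 2/ln(2) ≈ 2.885.
```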
Wikipedia has entries for ternary logic and ternary computers, and apparently the Soviets did some work along these lines way back when.
Inside a computer, there are no ones or zeros. There are only different voltage levels and devices that manipulate voltage levels. The machine, once it is finished and works, does not care that we decided that 0 volts = false = numeric 0, and +5 volts = true = numeric 1. These conventions are immensely helpful to the human engineers, but there is nothing inherent in the switching circuitry that carries the residue of these conventions. Similarly, if you have a tristate system, you are free to interpret your states as [false, indeterminate, true], [-1, 0, 1], [0, 1, 2] or whatever suits you, and in whatever order.
The appeal of bistate Boolean logic, as I see it, is twofold (ha!). First, as I said, Boolean logic gives us a tiny handful of primitive operations (AND, OR, and NOT) that align perfectly with normal, verbalized propositional logic and its connectives that any child understands. You can have ice cream when you have cleaned your room AND done your homework. You can go to college if you get good grades OR ace the SATs (or both) combined with NOT serving jail time. Even in highly technical fields, ways of doing things often become standardized because people just feel comfortable with them, and find them intuitively easy to think about.
Secondly, using the primitive Boolean operations of AND, OR, and NOT, there are mindless, mechanical ways of implementing any function, with any number of inputs, that we can write out a truth table for. Ignoring optimization, we can use sum-of-products or product-of-sums to just read the lines off the truth table and bang out a formula (or circuit).
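To make that "mindless, mechanical" quality concrete, here is a minimal sketch of reading a sum-of-products realization straight off a truth table. The function name `sop` and the dictionary representation of the truth table are my own choices for illustration:

```python
from itertools import product

# Mechanical sum-of-products: given a truth table over n Boolean inputs,
# build the function as an OR over one AND-term per row whose output is 1.
# No optimization, just read the 1-rows off the table.
def sop(truth_table, n):
    one_rows = [row for row in product((0, 1), repeat=n) if truth_table[row] == 1]
    def f(*inputs):
        # Each term is "x if the row wants 1, NOT x if the row wants 0",
        # all ANDed together -- i.e. the term fires only on its exact row.
        return int(any(all(x == bit for x, bit in zip(inputs, row))
                       for row in one_rows))
    return f

# Example: XOR, whose truth table has 1s on rows (0,1) and (1,0).
xor = sop({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}, 2)
```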
For a brief overview of two-valued traditional Boolean logic, including the implementation of an arbitrary function (in this case, a single bit-slice of an adder), see the "Boolean Algebra" section and "Binary Adder" subsection of my high school presentation about how computers work.
With regard to the first reason for why we settled on binary logic, is there an interpretation or convention we could settle on for tristate logic that has that same simple, intuitive, ease of understanding that Boolean logic does? Note that this is a question both of interpreting the three states themselves and of choosing and interpreting the operations upon them. Frankly, I personally don't mind if the convention we come up with seems a little weird at first. I think that we can train ourselves to think the "right" way if the logic works functionally. So I will defer this question, and let its answer be guided by other, more technical considerations, like our other question.
Are there operations we can use in tristate logic that allow us to mindlessly bang out any function we can write a truth table for, with any number of tristate inputs?
As a first pass, we might want to look at traditional Boolean logic and see if we can extend it to base 3. Because sum-of-products (SOP) comes more naturally to me than product-of-sums (as, I imagine, it does for most people), I will focus on that. Can we do sum-of-products in base 3?
In Boolean logic, sum-of-products involves all three of our primitives, AND, OR, and NOT. What are the ternary equivalents of these? First, let me establish an ad hoc convention concerning our three states. Let's say they are the colors [red, green, blue], some physically distinguishable characteristic of the world. For the purposes of what follows, I just need to establish some order among them, so I choose to call them [0, 1, 2]. I could just as easily, and with as much validity, have chosen [-117, π, 53]. My point is that thinking of our states as [0, 1, 2] does not commit us to this particular numerical interpretation for all time (although it does not disallow it either!), it merely helps establish an ordering convention, i.e. 0 < 1 < 2.
OK, so [0, 1, 2]. Let's see if we can extend AND and OR to our ternary system. We are taught AND and OR in terms of absolute truth values. Note, however, that if we think of their inputs not as [false, true] but as numeric [0, 1], AND is simply the minimum function, and OR is the maximum function. That is, AND yields as output the answer to the question "which of my inputs is the least?" and OR yields as output the answer to the question "which of my inputs is the greatest?" That sounds promising, so let's just carry that forward into base 3. The MAX() function simply returns the greatest of its arbitrary number of inputs, and is our OR equivalent, and MIN() returns the least of its arbitrary number of inputs and is our AND equivalent.
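A quick check of this equivalence, using Python's built-in min and max over small ints:

```python
# In [0, 1], AND is exactly min and OR is exactly max:
# check all four input combinations.
for a in (0, 1):
    for b in (0, 1):
        assert min(a, b) == (a and b)   # AND: the least input wins
        assert max(a, b) == (a or b)    # OR: the greatest input wins

# Carried forward into base 3, the same functions serve as ternary AND and OR,
# and they naturally take any number of inputs.
assert min(2, 1, 0) == 0
assert max(2, 1, 0) == 2
```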
With standard Boolean SOP form, you start with the truth table for your desired function or circuit, and you write an expression like this:

term1 OR term2 OR term3 OR ... OR termN

with one term for each 1 that appears in the output column of your truth table. Each term consists of all of your input variables, maybe negated with NOT, all ANDed together. So each term matches on one particular combination of the input variables, and then the term becomes 1, and the whole expression becomes 1. If no terms match, all terms are 0, and the whole expression is 0. (Again, if this is unfamiliar to you, see my presentation.)
For our ternary SOP, we want one term (a product, a bunch of stuff MINed together) for each non-0 in the output column of the truth table, and we want each term to activate, that is, contribute to the final value of our expression, only for its particular combination of input trit values, and be 0 in all other cases.
This operation will work fine given our common-sense extension of AND as MIN and OR as MAX, but requires us to get a little cute with NOT.
How many possible unary operations are there in base 2? We have a single input bit, so there are two rows in the truth table. There are 2^2 = 4 ways of filling out the output column:

| input | output 0 | output 1 | output 2 | output 3 |
|-------|----------|----------|----------|----------|
|   0   |    0     |    0     |    1     |    1     |
|   1   |    0     |    1     |    0     |    1     |
Of the four output columns here, output 0 is just the constant 0, output 3 is just the constant 1, and output 1 is just a pass-through pipe, leaving its input bit unchanged. Only output 2 is even mildly interesting, flipping its input bit. It is, of course, Boolean NOT.
How many possible unary operations are there in base 3? A single input trit gives us a three row truth table, and each row could, in its output column, contain any of three values, so there are 3^3 = 27 different ways of filling out that output column. Tediously, I'm going to list them here, and number each line in good old base 10 (each line's output column is just its line number written in base 3):

 0: 000    9: 100   18: 200
 1: 001   10: 101   19: 201
 2: 002   11: 102   20: 202
 3: 010   12: 110   21: 210
 4: 011   13: 111   22: 211
 5: 012   14: 112   23: 212
 6: 020   15: 120   24: 220
 7: 021   16: 121   25: 221
 8: 022   17: 122   26: 222
I realize that I've written this out vertically, and I, for one, have a hard time envisioning it flipped by 90 degrees so that each line in the above looks more like an output column in our unary function's truth table, but bear with me. For our purposes, I'm reading left to right in the table above, so line 22 (211) corresponds to a function whose truth table is this:

| input | output |
|-------|--------|
|   0   |   2    |
|   1   |   1    |
|   2   |   1    |
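The whole enumeration can be generated mechanically, which also confirms the "line number in base 3" pattern. A small sketch, using itertools (the variable name `tables` is mine):

```python
from itertools import product

# All 27 unary ternary functions: each is one way of filling the three-row
# output column, and product() in lexicographic order means line n's output
# column is simply n written in base 3.
tables = list(product((0, 1, 2), repeat=3))

assert len(tables) == 27
assert tables[22] == (2, 1, 1)   # line 22 is "211": f(0)=2, f(1)=1, f(2)=1
assert tables[13] == (1, 1, 1)   # line 13: the constant-1 function
assert tables[5] == (0, 1, 2)    # line 5: the pass-through pipe
```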
Some of these 27 unary functions are more interesting than others, at least at first blush. Note that lines 0, 13, and 26 are constants 0, 1, and 2, respectively, thus boring. Just about as boring is line 5, which is a pass-through pipe, leaving its input unchanged. 15 is add 1 (modulo 3) and 19 is subtract 1 (modulo 3), a bit more interesting, may be useful later. Lines 7, 11, and 21 pass one value through unchanged, but exchange the other two, which might correspond somewhat with our intuitive notion of NOT, depending on the convention we choose for our logical truth values. For example, if we think of our three states as [false, indeterminate, true], it may well turn out that it would be useful to define ternary NOT of these respective values as [true, indeterminate, false]. That is, one (indeterminate) gets passed through unchanged, and the other two get flipped.
For the purposes of the current exercise, that of coming up with a ternary SOP realization of an arbitrary function, I'd like to focus on the lines with two zeros among their three output values, that is, these:
function 1: 001 "Give me a 2, and I'll give you a 1, 0 otherwise."
function 2: 002 "Give me a 2, and I'll give you a 2, 0 otherwise."
function 3: 010 "Give me a 1, and I'll give you a 1, 0 otherwise."
function 6: 020 "Give me a 1, and I'll give you a 2, 0 otherwise."
function 9: 100 "Give me a 0, and I'll give you a 1, 0 otherwise."
function 18: 200 "Give me a 0, and I'll give you a 2, 0 otherwise."
To make these six a little clearer, I will put them in more traditional truth table form:
| input | function 1 | function 2 | function 3 | function 6 | function 9 | function 18 |
|-------|------------|------------|------------|------------|------------|-------------|
|   0   |     0      |     0      |     0      |     0      |     1      |      2      |
|   1   |     0      |     0      |     1      |     2      |     0      |      0      |
|   2   |     1      |     2      |     0      |     0      |     0      |      0      |
For now, we need a convention for what to call these six unary functions. Take a look at the function I've captioned "function 6" above. It has one ternary input and one ternary output. It will always produce a 0, unless it gets a 1 as input, in which case it will produce a 2 as output. So only a 1 will make it happy, and in that case it yields a 2. Let us establish a convention whereby we call such a function D12 for "Demand 1, yield 2". This set of six such filtering functions is what we can use in place of traditional NOT in our ternary SOP realizations.
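These six filters are trivial to express in code. Here is a minimal sketch; the factory function `D` and the variable names are my own convention, mirroring the "Demand d, yield y" naming:

```python
# "Demand d, yield y": return y when the input trit equals d, 0 otherwise.
def D(d, y):
    return lambda x: y if x == d else 0

# The six filter functions from the table above.
D01, D02 = D(0, 1), D(0, 2)   # functions 9 and 18
D11, D12 = D(1, 1), D(1, 2)   # functions 3 and 6
D21, D22 = D(2, 1), D(2, 2)   # functions 1 and 2

assert D12(1) == 2                    # "Demand 1, yield 2"
assert D12(0) == 0 and D12(2) == 0    # everything else maps to 0
```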
You know, I have never ever liked calling AND "multiplication" and OR "addition", so the whole sum-of-products terminology never sat well with me. Now that we have extended AND and OR into base 3 in a way that has nothing at all to do with addition and multiplication, I'm going to start calling it max-of-min realization.
Let's make up an arbitrary three-input ternary function that we want to implement. Let's call the input trits [a, b, c]. Instead of writing out all 27 rows of the truth table, assume that all rows have 0 in the output column except these:

| a | b | c | output |
|---|---|---|--------|
| 0 | 0 | 2 |   2    |
| 0 | 1 | 1 |   1    |
| 1 | 2 | 1 |   1    |
| 2 | 0 | 1 |   2    |
| 2 | 2 | 0 |   2    |
Given the last section above, this function could be realized with the following:

max( min(D02(a), D02(b), D22(c)),
     min(D01(a), D11(b), D11(c)),
     min(D11(a), D21(b), D11(c)),
     min(D22(a), D02(b), D12(c)),
     min(D22(a), D22(b), D02(c)) )
As with traditional SOP Boolean form, there is one term per non-0 output line in the truth table, and each term evaluates to 0 unless the inputs [a,b,c] have the particular values that that term looks for. In that case, the term assumes the proper value, and gets ORed (or MAXed) together with all the other terms (which are 0, since their particular combination of input values was not hit). Now we are off to the races, and can realize any function we want.
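The whole realization can be checked by brute force. A sketch, redefining the "Demand d, yield y" filters inline (the helper `D` and the function name `f` are my own labels):

```python
# "Demand d, yield y" filters: return y when the input equals d, else 0.
def D(d, y):
    return lambda x: y if x == d else 0

D01, D02 = D(0, 1), D(0, 2)
D11, D12 = D(1, 1), D(1, 2)
D21, D22 = D(2, 1), D(2, 2)

# Max-of-min realization: one min-term per non-0 row of the truth table,
# all MAXed together.
def f(a, b, c):
    return max(min(D02(a), D02(b), D22(c)),
               min(D01(a), D11(b), D11(c)),
               min(D11(a), D21(b), D11(c)),
               min(D22(a), D02(b), D12(c)),
               min(D22(a), D22(b), D02(c)))

# The five non-0 rows come out right...
assert f(0, 0, 2) == 2
assert f(0, 1, 1) == 1
assert f(1, 2, 1) == 1
assert f(2, 0, 1) == 2
assert f(2, 2, 0) == 2

# ...and every other input combination yields 0.
from itertools import product
hits = {(0,0,2), (0,1,1), (1,2,1), (2,0,1), (2,2,0)}
for row in product((0, 1, 2), repeat=3):
    if row not in hits:
        assert f(*row) == 0
```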
For the last section, about a procedure for max-of-min realizations of arbitrary ternary functions, we only needed to come up with an order for our trits, settling arbitrarily on [0, 1, 2]. For this section, I am going to go further and say no, I really actually want the three states to be [0, 1, 2] in our numerical calculations. The only plausible other interpretation would be [-1, 0, 1], and I don't think I am prepared to deal with the implications of having multiple-digit numbers where each digit has its own sign. It could be interesting for further study, though.
In base 2, we use two's complement to represent negative integers. This works wonderfully for a couple of reasons. First, you simply don't have to have any special handling for negative numbers at all. You just add numbers, and if one is negative, well, you've just done a subtraction. Secondly, the most significant bit is easily recognizable as a "sign bit". You can tell immediately if a number is negative or positive by just looking at the sign bit. This helps, for example, in setting the "negative" status bit in the CPU after each instruction, for subsequent branching or test instructions.
Fortunately, the same logic that allows us to do two's complement also works in base 3, or any base. To create the negative of a number in the range of positive numbers, we do the following. First, for each numeral in the number, replace that digit with <base> - <digit> - 1. That is, after we are done, we have a number that, when added to the number we started with, will yield <base>^<# of digits> - 1. So if we are dealing in base 10, and are dealing with a 3-digit wide register, and we want to create negative 281, we write 718. Sure enough, 281 + 718 = 999. So in base 10, 718 is the complement of 281.
Second, we just add 1 to our complement, so 718 becomes 719. Voilà! 719 is the negative of 281! Huh? Well, 353 - 281 = 72. Instead of subtracting 281, we want to add its negative, which we just said is 719. So 353 + 719 = 1072. But since we are dealing with a 3-digit wide register, the thousands digit falls off the end of the world, and our 1072 becomes 072.
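The two steps above collapse nicely into one formula, since the digit-wise complement of n is just <base>^<width> - 1 - n (no borrows ever occur, because every digit of the all-nines number is as large as a digit can get). A sketch, with my own helper name `negate`:

```python
# Complement-plus-one negation in a fixed-width register, any base.
# Step 1 (digit-wise complement) is base**width - 1 - n as a single
# subtraction; step 2 adds 1; the modulo drops the overflow digit.
def negate(n, base=10, width=3):
    return (base ** width - 1 - n + 1) % base ** width

assert negate(281) == 719                  # 281 -> 718 -> 719
assert (353 + negate(281)) % 10**3 == 72   # 353 - 281 with no subtract circuit
```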
Anyway, the math works in any base, including base 3. For more discussion, check out the Wikipedia article on the method of complements. This all works by splitting the entire range of values in half, where the first half is positive, and the second half is negative. As I said above, in base 2 this has the happy side effect that the most significant bit is a sign bit, end of discussion. In any odd base, that halfway point falls in the middle of the range of the most significant digit. This has the unhappy consequence that it looks like we must do a full-register comparison to determine if a number is negative or not, as when we are setting that CPU status bit after each instruction. Oh well.
As with two's complement in binary, we have shifted the range of numbers we can represent in an n-digit register from 0 → <base>^n - 1 to half that range in the positives and half in the negatives. That is, the largest number we can hold in a given register is only half as large as it would be if we were only using unsigned arithmetic.
Here is the set of 2-trit ternary numbers, 0-8, with their complements and complements + 1 (i.e. negatives). Remember that overflows fall off the end of the world, so 22 + 1 = 00. Note that 0 is its own negative. Note also that the largest positive number here must be 11, or 4, and its negative is the next entry, 12, or 5, which must be -4 (note that 11 and 12 are negatives of each other in the table). This makes sense, since that splits our range of 0-8 in half (1→4 positive, 5→8 negative), given that 0 is its whole own thing at the beginning.
| line number | complement | complement + 1 | unsigned base 10 of line number | signed base 10 of line number |
|-------------|------------|----------------|---------------------------------|-------------------------------|
|     00      |     22     |       00       |                0                |               0               |
|     01      |     21     |       22       |                1                |               1               |
|     02      |     20     |       21       |                2                |               2               |
|     10      |     12     |       20       |                3                |               3               |
|     11      |     11     |       12       |                4                |               4               |
|     12      |     10     |       11       |                5                |              -4               |
|     20      |     02     |       10       |                6                |              -3               |
|     21      |     01     |       02       |                7                |              -2               |
|     22      |     00     |       01       |                8                |              -1               |
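A table like this can also be generated and sanity-checked programmatically. A sketch for the 2-trit case (the helper `to_trits` and the half-range rule for the signed column are my own formulations of what the table expresses):

```python
# Generate the 2-trit complement table and check the signed interpretation.
def to_trits(n, width=2):
    # Render n as a fixed-width base-3 numeral string.
    return ''.join(str((n // 3**i) % 3) for i in reversed(range(width)))

for n in range(9):
    complement = 8 - n                # digit-wise: each trit becomes 2 - trit
    negative = (complement + 1) % 9   # the overflow falls off the end
    signed = n if n <= 4 else n - 9   # first half positive, second half negative
    print(to_trits(n), to_trits(complement), to_trits(negative), n, signed)

# Spot checks: 0 is its own negative, and 11 (4) and 12 (-4) are
# negatives of each other.
assert (8 - 0 + 1) % 9 == 0
assert (8 - 4 + 1) % 9 == 5
assert (8 - 5 + 1) % 9 == 4
```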
The point is that negative numbers and arithmetic calculations in general should not be any harder in base three than they were in base two, with the slight caveat about not having a convenient sign bit, but that can be easily engineered around.
I am pretty convinced that numerically, we should interpret our three states as [0, 1, 2], but that still leaves open the question of how to interpret them logically. Should we have [false, indeterminate, true], or maybe [false, true, indeterminate]? Should the "indeterminate" value be a don't-care or disallowed entirely, just to keep things simple and intuitive, since that would match good old Boolean logic better? It may be that the answers to these questions don't matter very much at this point. Perhaps we should just start designing a CPU and its instruction set and see how far we get without answering them.