© John Gregg, all rights reserved

We evolved with ten fingers, and so we adopted a base ten number system. There is no reason to think that base 10 is the best number system just because of this accident of evolutionary biology. Computers use base two. This works wonderfully for a number of reasons. First, as Claude Shannon pointed out in 1937, it allows us to apply George Boole's system of algebraic logic to switching machinery. There is a very nice way in which the two states in Boolean logic can be interpreted logically (false and true) when it suits us, and arithmetically (numeric 0 and 1) at other times. Boole gave us a neat way of thinking about 0 and 1, and certain operations that can be performed on them, that jibes well with the way we already think. He named his book, after all, "An Investigation of the Laws of Thought"!

Since these days we work with solid state electronics, it is also pretty easy to decide that, say, 0 volts is Boolean 0 and +5 volts is Boolean 1, and establish some cutoff thresholds with a buffer of indeterminacy between them, and glue some transistors together to implement Boole's operations in silicon. Circuits that only have to distinguish between two states are easy from an engineering perspective.

Might we someday be working with an implementation substrate that naturally allows for three states, not two? Something involving wavelengths of light, or polarity of light, or something? Maybe good old fashioned electricity might even be considered tristate, depending on what we are using as our signal. I have no idea.

I do know, however, that just from an arithmetic point of view, base two is a pain for humans. The numbers get long too quickly. No one wants to balance their checkbook by hand in binary. On the other hand, what if we used, say, base 100? That would be zero plus 99 other individual numerals, or glyphs, whose order we would all have to memorize, in the same way we have burned in the fact that 7 is greater than 3. That seems unwieldy. So what would be the ideal base, given the trade-off between the number of individual numerals and numbers getting too long too quickly? I once read [citation needed] that the best base "converges on e". OK, so e is about 2.718, which rounds to 3 (assuming that we want to keep our base an integer!).
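One common way to make that claim precise is "radix economy": take the cost of a base b to be b (the number of distinct numerals you must memorize) times the number of digit positions needed, a cost that is asymptotically proportional to b / ln(b) and minimized at b = e. A quick sketch in Python (the function name is my own):

```python
import math

def radix_economy(base, max_number):
    """Cost of a base: (distinct numerals) * (digit positions needed
    to represent every number up to max_number)."""
    positions = math.floor(math.log(max_number, base)) + 1
    return base * positions

# The cost per number is asymptotically proportional to base / ln(base),
# which is minimized at base = e ~ 2.718.
for b in (2, 3, 4, 10):
    print(b, b / math.log(b))
```

By this measure, 3 is the best integer base, edging out 2 (and 4, which ties 2 exactly), though not by much.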

Wikipedia has entries for ternary logic and ternary computers, and apparently the Soviets did some work along these lines way back when.

I want to emphasize that inside a computer, there are different voltage levels and devices that manipulate voltage levels. The machine, once it is finished and works, does not care that we decided that 0 volts = false = numeric 0, and +5 volts = true = numeric 1. These conventions are immensely helpful to the human engineers, but there is nothing inherent in the switching circuitry that carries the residue of these conventions.

Similarly, if you have a tristate system, you are free to interpret your states as [false, indeterminate, true], [-1, 0, 1], [0, 1, 2] or whatever suits you, and in whatever order. By the way, whereas in base 2 we call our BInary Digits "bits", in base 3 we speak of "trits".

The appeal of bistate Boolean logic, as I see it, is twofold (ha!). First, as I said, Boolean logic gives us a tiny handful of primitive operations (AND, OR, and NOT) that align perfectly with normal, verbalized propositional logic and its connectives that any child understands.

Secondly, using the primitive Boolean operations of AND, OR, and NOT, there are mindless, mechanical ways of implementing any function, with any number of inputs, that we can write out a truth table for. Ignoring optimization, we can use sum-of-products or product-of-sums to just read the lines off the truth table and bang out a formula (or circuit).
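To make "mindless and mechanical" concrete, here is a small sketch of sum-of-products in Python, realizing XOR from nothing but its truth table (the names here are my own):

```python
from itertools import product

def sop(truth_table, inputs):
    """Sum-of-products: OR together one AND term per 1-output row.
    truth_table maps tuples of input bits to an output bit."""
    result = 0
    for row, out in truth_table.items():
        if out == 1:
            # The term ANDs every variable, negating it if the row wants a 0,
            # so the term is 1 exactly when the inputs match this row.
            term = 1
            for var, want in zip(inputs, row):
                term &= var if want == 1 else 1 - var
            result |= term
    return result

# XOR, written out as a truth table and realized mechanically:
xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
for a, b in product((0, 1), repeat=2):
    assert sop(xor_table, (a, b)) == a ^ b
```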

For a brief overview of two-valued traditional Boolean logic, including the implementation of an arbitrary function (in this case, a single bit-slice of an adder), see the "Boolean Algebra" section and "Binary Adder" subsection of my high school presentation about how computers work.

Is there an interpretation, a convention, for tristate logic that has that same simple, intuitive, ease of understanding that Boolean logic does? Note that this is a question both of interpreting the three states themselves and of choosing and interpreting the operations upon them. Frankly, I personally don't mind if the convention we come up with seems a little weird at first. I think that we can train ourselves to think the "right" way if the logic works functionally. So I will defer this question, and let its answer be guided by other, more technical considerations, like our other question.

Are there operations we can use in tristate logic that allow us to mindlessly bang out any function we can write a truth table for, with any number of tristate inputs?

As a first pass, we might want to look at traditional Boolean logic and see if we can extend it to base 3. Because sum-of-products (SOP) comes more naturally to me than product-of-sums (as, I imagine, it does for most people), I will focus on that. Can we do sum-of-products in base 3?

In Boolean logic, sum-of-products involves all three of our primitives, AND, OR, and NOT. What are the ternary equivalents of these? First, let me establish an ad hoc convention concerning our three states. Let's say they are the colors [red, green, blue], some physically distinguishable characteristic of the world. For the purposes of what follows, I just need to establish some order among them, so I choose to call them [0, 1, 2]. I could just as easily, and with as much validity, have chosen [-117, π, 53]. My point is that thinking of our states as [0, 1, 2] does not commit us to this particular numerical interpretation for all time (although it does not disallow it either!); it merely helps establish an ordering convention, i.e. 0 < 1 < 2.

OK, so [0, 1, 2]. Let's see if we can extend AND and OR to our ternary system. We are taught AND and OR in terms of absolutes, truth values. Note, however, that if we think of their inputs not as [false, true] but as numeric [0, 1], AND is simply the minimum function, and OR is the maximum function. That is, AND yields as output the answer to the question "which of my inputs is the least?" and OR yields as output the answer to the question "which of my inputs is the greatest?" That sounds promising, so let's just carry that forward into base 3. The MAX() function simply returns the greatest of its arbitrary number of inputs, and is our OR equivalent, and MIN() returns the least of its arbitrary number of inputs and is our AND equivalent.
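This identification is easy to check mechanically; a minimal sketch in Python, where min and max on {0, 1} reproduce AND and OR exactly and then accept ternary values with no changes at all:

```python
# On {0, 1}, AND coincides with min and OR with max:
for a in (0, 1):
    for b in (0, 1):
        assert min(a, b) == (a & b)   # AND
        assert max(a, b) == (a | b)   # OR

# The very same functions extend unchanged to {0, 1, 2}:
print(min(1, 2), max(0, 2))  # 1 2
```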

With standard Boolean SOP form, you start with the truth table for your desired function or circuit, and you write an expression like this:

```
term_{1} OR term_{2} OR term_{3} OR ... OR term_{n}
```

with one term for each 1 that appears in the output column of your truth table. Each term consists of all of your input variables, maybe negated with NOT, all ANDed together. So each term matches on one particular combination of the input variables; for that combination the term becomes 1, and the whole expression becomes 1. If no term matches, all terms are 0, and the whole expression is 0.

For our ternary SOP, we want one term (a product, a bunch of stuff MINed together) for each non-0 in the output column of the truth table, and we want each term to activate, that is, contribute to the final value of our expression, only for its particular combination of input trit values, and be 0 in all other cases.

This operation will work fine given our common-sense extension of AND as MIN and OR as MAX, but requires us to get a little cute with NOT.

How many possible unary operations are there in base 2? We have a single
input bit, so there are two rows in the truth table.
There are 2^{2} = 4
ways of filling out the output column:

input | output 0 | output 1 | output 2 | output 3 |
---|---|---|---|---|
0 | 0 | 0 | 1 | 1 |
1 | 0 | 1 | 0 | 1 |

Of the four output columns here, output 0 is just the constant 0, output 3 is just the constant 1, and output 1 is just a pass-through pipe, leaving its input bit unchanged. Only output 2 is even mildly interesting, flipping its input bit. It is, of course, Boolean NOT.

How many possible unary operations are there in base 3? A single input
trit gives us a three row truth table, and each row could, in its output
column, contain any of three values, so there are 3^{3} = 27
different ways of filling out that output column.
Tediously, I'm going to list them here, and number each line in good old
base 10:

line | output column |
---|---|
0 | 000 |
1 | 001 |
2 | 002 |
3 | 010 |
4 | 011 |
5 | 012 |
6 | 020 |
7 | 021 |
8 | 022 |
9 | 100 |
10 | 101 |
11 | 102 |
12 | 110 |
13 | 111 |
14 | 112 |
15 | 120 |
16 | 121 |
17 | 122 |
18 | 200 |
19 | 201 |
20 | 202 |
21 | 210 |
22 | 211 |
23 | 212 |
24 | 220 |
25 | 221 |
26 | 222 |

I realize that I've written this out vertically, and I, for one, have a hard time envisioning it flipped by 90 degrees so each line in the above looks more like an output column in our unary function's truth table, but bear with me. For our purposes, I'm reading left to right in the table above, so row 22 (211) corresponds to a function whose truth table is this:
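For what it's worth, the whole list above can be generated mechanically rather than written out by hand; a sketch:

```python
from itertools import product

# Each unary ternary function is one way of filling a 3-row output column;
# itertools.product yields them in the same order as the numbered list above.
unary_functions = list(product((0, 1, 2), repeat=3))
assert len(unary_functions) == 27

# Line 22 (211): inputs (0, 1, 2) map to outputs (2, 1, 1).
print(unary_functions[22])  # (2, 1, 1)
```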

input | output |
---|---|
0 | 2 |
1 | 1 |
2 | 1 |

Some of these 27 unary functions are more interesting than others, at least at first blush. Note that lines 0, 13, and 26 are the constants 0, 1, and 2, respectively, thus boring. Just about as boring is line 5, which is a pass-through pipe, leaving its input unchanged. Line 15 is add 1 (modulo 3) and line 19 is subtract 1 (modulo 3); these are a bit more interesting and may be useful later. Lines 7, 11, and 21 pass one value through unchanged, but exchange the other two, which might correspond somewhat with our intuitive notion of NOT, depending on the convention we choose for our logical truth values. For example, if we think of our three states as [false, indeterminate, true], it may well turn out to be useful to define ternary NOT of these respective values as [true, indeterminate, false]. That is, one (indeterminate) gets passed through unchanged, and the other two get flipped.

For the purposes of the current exercise, that of coming up with a ternary SOP realization of an arbitrary function, I'd like to focus on the lines with exactly two zeros among their three output values, that is, these:

1: 001 "Give me a 2, and I'll give you a 1, 0 otherwise."

2: 002 "Give me a 2, and I'll give you a 2, 0 otherwise."

3: 010 "Give me a 1, and I'll give you a 1, 0 otherwise."

6: 020 "Give me a 1, and I'll give you a 2, 0 otherwise."

9: 100 "Give me a 0, and I'll give you a 1, 0 otherwise."

18: 200 "Give me a 0, and I'll give you a 2, 0 otherwise."

To make these six a little clearer, I will put them in more traditional truth table form:

input | 1 | 2 | 3 | 6 | 9 | 18 |
---|---|---|---|---|---|---|
0 | 0 | 0 | 0 | 0 | 1 | 2 |
1 | 0 | 0 | 1 | 2 | 0 | 0 |
2 | 1 | 2 | 0 | 0 | 0 | 0 |

For now, we need a convention for what to call these six unary functions.
Let's call them, for example, D_{12} for "Demand 1, yield 2". This
filtering function is what we can use in place of traditional NOT
in our ternary SOP realizations.
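As a sketch of how these filters behave (the Python spelling is mine; the document's D_{12} becomes D(1, 2) here):

```python
def D(demand, yield_):
    """Build the filter D_{demand, yield}: pass back yield_ when the
    input equals demand, and 0 in all other cases."""
    return lambda x: yield_ if x == demand else 0

D12 = D(1, 2)  # "Demand 1, yield 2"
print([D12(x) for x in (0, 1, 2)])  # [0, 2, 0]
```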

Let's make up an arbitrary three-input ternary function that we want to implement. Let's call the input trits [a,b,c]. Instead of writing out all 27 rows of the truth table, assume that all rows have 0 in the output column except these:

abc | output |
---|---|
002 | 2 |
011 | 1 |
121 | 1 |
201 | 2 |
220 | 2 |

You know, I have never ever liked calling AND multiplication and OR addition, so the whole sum-of-products terminology never sat well with me. Now that we have extended AND and OR into base 3 in a way that has nothing at all to do with addition and multiplication, I'm going to start calling it max-of-min realization.

Given the above, our function could be realized with the following:

```
max[
  min(D_{02}(a), D_{02}(b), D_{22}(c)),
  min(D_{01}(a), D_{11}(b), D_{11}(c)),
  min(D_{11}(a), D_{21}(b), D_{11}(c)),
  min(D_{22}(a), D_{02}(b), D_{12}(c)),
  min(D_{22}(a), D_{22}(b), D_{02}(c))
]
```

As with traditional SOP Boolean form, there is one term per non-0 output line in the truth table, and each term evaluates to 0 unless the inputs [a,b,c] have the particular values that that term looks for. In that case, the term assumes the proper value, and gets ORed (or MAXed) together with all the other terms (which are 0, since their particular combination of input values was not hit). Now we are off to the races, and can realize any function we want.
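To convince ourselves, the whole max-of-min expression can be checked against the truth table over all 27 input combinations; a sketch (the D helper is my own Python spelling of the filter functions):

```python
from itertools import product

def D(demand, yield_):
    """Filter: yield_ if the input equals demand, else 0."""
    return lambda x: yield_ if x == demand else 0

# The five non-0 rows of the truth table: (a, b, c) -> output.
table = {(0, 0, 2): 2, (0, 1, 1): 1, (1, 2, 1): 1, (2, 0, 1): 2, (2, 2, 0): 2}

def f(a, b, c):
    # One min() term per non-0 row; max() of all terms.
    return max(
        min(D(0, 2)(a), D(0, 2)(b), D(2, 2)(c)),
        min(D(0, 1)(a), D(1, 1)(b), D(1, 1)(c)),
        min(D(1, 1)(a), D(2, 1)(b), D(1, 1)(c)),
        min(D(2, 2)(a), D(0, 2)(b), D(1, 2)(c)),
        min(D(2, 2)(a), D(2, 2)(b), D(0, 2)(c)),
    )

# Exhaustively verify the realization against the truth table:
for abc in product((0, 1, 2), repeat=3):
    assert f(*abc) == table.get(abc, 0)
```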

In base 2, we use two's complement to represent negative integers. This works wonderfully for a couple of reasons. First, you simply don't have to have any special handling for negative numbers at all. You just add numbers, and if one is negative, well, you've just done a subtraction. Secondly, the most significant bit is easily recognizable as a "sign bit". You can tell immediately if a number is negative or positive by just looking at the sign bit. This helps, for example, in setting the "negative" status bit in the CPU after each instruction, for subsequent branching or test instructions.

Fortunately, the same logic that allows us to do two's complement also works in base 3, or any base. To create the negative of a number in the range of positive numbers, we do the following. First, for each digit, replace it with <base> - <digit> - 1. After this step, we have a number that, when added to the number we started with, yields <base>^{# of digits} - 1. So if we are dealing in base 10, with a 3-digit wide register, and we want to create negative 281, we write 718. Sure enough, 281 + 718 = 999. So in base 10, 718 is the complement of 281.

Second, we just add 1 to our complement, so 718 becomes 719. Voila! 719 is the negative of 281! Huh? Consider 353 - 281 = 72. Instead of subtracting 281, we want to add its negative, which we just said is 719. So 353 + 719 = 1072. But since we are dealing with a 3-digit wide register, the thousands digit falls off the end of the world, and our 1072 becomes 072.

Anyway, the math works in any base, including base 3. For more discussion, check out the Wikipedia article linked above. This all works by splitting the entire range of values in half: the first half is positive, and the second half is negative. As I said above, in base 2 this has the happy side effect that the most significant bit is a sign bit, end of discussion. In any odd base, that halfway point falls in the middle of the range of the most significant digit. This has the unhappy consequence that it looks like we must do a full-register comparison to determine whether a number is negative, as when we are setting that CPU status bit after each instruction. Oh well.
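The digit-wise recipe (complement each digit, then add 1, letting any final carry fall off the end of the register) can be sketched for any base as follows (the function name is my own):

```python
def negate(digits, base):
    """Radix complement: replace each digit with base - digit - 1,
    then add 1, dropping any carry off the top of the register."""
    complement = [base - d - 1 for d in digits]
    # Add 1, working from the least significant digit (end of the list).
    carry = 1
    for i in range(len(complement) - 1, -1, -1):
        total = complement[i] + carry
        complement[i] = total % base
        carry = total // base
    return complement  # any leftover carry falls off the end of the world

# Base 10, 3-digit register: -281 is represented as 719.
print(negate([2, 8, 1], 10))  # [7, 1, 9]
# Base 3, 2-trit register: the negative of 12 is 11, as in the table below.
print(negate([1, 2], 3))  # [1, 1]
```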

As with two's complement in binary, we have shifted the range of numbers we can represent in an n-digit register from 0 through base^{n} - 1 to half that range in the positives and half in the negatives. That is, the largest number we can hold in a given register is only half as large as it would be if we were using unsigned arithmetic only.

Here is the set of 2-trit ternary numbers, 0-8, with their complements and complements + 1 (i.e. negatives). Remember that overflows fall off the end of the world, so 22 + 1 = 00. Note that 0 is its own negative. Note also that the largest positive number here must be 11, or 4, and its negative is the next entry, 12, or 5, which must be -4 (note that 11 and 12 are negatives of each other in the table). This makes sense, since that splits our range of 0-8 in half (1→4 positive, 5→8 negative), given that 0 is its whole own thing at the beginning.

ternary number | complement | complement + 1 (negative) | unsigned base 10 value | signed base 10 value |
---|---|---|---|---|
00 | 22 | 00 | 0 | 0 |
01 | 21 | 22 | 1 | 1 |
02 | 20 | 21 | 2 | 2 |
10 | 12 | 20 | 3 | 3 |
11 | 11 | 12 | 4 | 4 |
12 | 10 | 11 | 5 | -4 |
20 | 02 | 10 | 6 | -3 |
21 | 01 | 02 | 7 | -2 |
22 | 00 | 01 | 8 | -1 |
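The signed column can be computed rather than read off: interpret the register as unsigned, and if the value lands in the upper half of the range, subtract base^{n}. A sketch (the function name is mine):

```python
def signed_value(digits, base):
    """Interpret a fixed-width register as a radix-complement signed number:
    the lower half of the range is positive, the upper half wraps negative."""
    n = len(digits)
    unsigned = 0
    for d in digits:
        unsigned = unsigned * base + d
    # For base 3, n = 2: values 0..4 are themselves, 5..8 wrap to -4..-1.
    if unsigned > (base ** n - 1) // 2:
        return unsigned - base ** n
    return unsigned

# All nine 2-trit values, in order:
print([signed_value([a, b], 3) for a in (0, 1, 2) for b in (0, 1, 2)])
# [0, 1, 2, 3, 4, -4, -3, -2, -1]
```

The same function reproduces ordinary two's complement when base is 2, e.g. an 8-bit register wraps at 128.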