[This article was published last month on the math.stackexchange blog, which seems to have died young, despite many earnest-sounding promises beforehand from people who claimed they would contribute material. I am repatriating it here.]

A recent question on math.stackexchange asks for the smallest positive integer $N$ for which the number $2N$ has the same decimal digits in some other order.

Math geeks may immediately realize that $142857$ has this property, because it is the first 6 digits of the decimal expansion of $\frac17$, and the cyclic behavior of the decimal expansion of $\frac17$ is well-known. But is this the *minimal* solution? It is not. Brute-force enumeration of the solutions quickly reveals that there are 12 solutions of 6 digits each, all permutations of $125874$, and that larger solutions, such as $1025874$ and $1257489$, seem to follow a similar pattern. What is happening here?
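That brute-force enumeration takes only a few lines of code. Here is a sketch in Python (the function name is my own); the property reduces to comparing the sorted digit strings of $N$ and $2N$:

```python
def doubles_to_permutation(n):
    """True when 2n has exactly the same decimal digits as n."""
    return sorted(str(n)) == sorted(str(2 * n))

# smallest positive solution, by direct search
smallest = next(n for n in range(1, 10**6) if doubles_to_permutation(n))

# all 6-digit solutions
six_digit = [n for n in range(10**5, 10**6) if doubles_to_permutation(n)]
```

Note that when $2N$ picks up an extra digit the sorted strings differ in length, so no separate length check is needed.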

Stuck in Dallas-Fort Worth airport one weekend, I did some work on the problem, and although I wasn't able to solve it completely, I made significant progress. I found a method that allows one to hand-calculate that there is no solution with fewer than six digits, and to enumerate all the solutions with 6 digits, including the minimal one. I found an explanation for the surprising behavior that solutions tend to be permutations of one another. The short form of the explanation is that there are fairly strict conditions on which *sets* of digits can appear in a solution of the problem. But once the set of digits is chosen, the conditions on the *order* of the digits in the solution are fairly lax.

So one typically sees, not only in base 10 but in other bases, that the solutions to this problem fall into a few classes that are all permutations of one another; this is exactly what happens in base 10 where all the 6-digit solutions are permutations of $125874$. As the number of digits is allowed to increase, the strict first set of conditions relaxes a little, and other digit groups appear as solutions.

### Notation

The property of interest, $P_R(N)$, is that the numbers $N$ and $2N$ have exactly the same base-$R$ digits. We would like to find numbers $N$ having property $P_R$ for various $R$, and we are most interested in $R=10$. Suppose $N$ is an $n$-digit numeral having property $P_R$; let the (base-$R$) digits of $N$ be $a_{n-1}\ldots a_1a_0$ and similarly the digits of $2N$ be $b_{n-1}\ldots b_1b_0$. The reader is encouraged to keep in mind the simple example of $N = 1042$ in base 8, for which $2N = 2104$, which we will bring up from time to time.

Since the digits of $N$ and $2N$ are the same, in a different order, we may say that $b_i = a_{P(i)}$ for some permutation $P$. In general $P$ might have more than one cycle, but we will suppose that $P$ is a single cycle. All the following discussion of $P$ will apply to the individual cycles of $P$ in the case that $P$ is a product of two or more cycles. For our example of $1042$, we have $P = (0\,1\,2\,3)$ in cycle notation. We won't need to worry about the details of $P$, except to note that $i, P(i), P(P(i)), \ldots, P^{n-1}(i)$ completely exhaust the indices $0, 1, \ldots, n-1$, and that $P^n(i) = i$ because $P$ is an $n$-cycle.

### Conditions on the set of digits in a solution

For each $i$ we have $$a_{P(i)} = b_{i} \equiv 2a_{i} + c_i\pmod R $$ where the ‘carry bit’ $c_i$ is either 0 or 1 and depends on whether there was a carry when doubling $a_{i-1}$. (When $i = 0$ we are in the rightmost position and there is never a carry, so $c_0 = 0$.) We can then write:

$$\begin{align} a_{P(P(i))} &= 2a_{P(i)} + c_{P(i)} \\ &= 2(2a_{i} + c_i) + c_{P(i)} &&= 4a_i + 2c_i + c_{P(i)}\\ a_{P(P(P(i)))} &= 2(4a_i + 2c_i + c_{P(i)}) + c_{P(P(i))} &&= 8a_i + 4c_i + 2c_{P(i)} + c_{P(P(i))}\\ &&&\vdots\\ a_{P^n(i)} &&&= 2^na_i + v \end{align} $$

all equations taken mod $R$. But since $P$ is an $n$-cycle, $P^n(i) = i$, so we have $$a_i \equiv 2^na_i + v\pmod R$$ or equivalently $$\big(2^n-1\big)a_i + v \equiv 0\pmod R\tag{$\star$}$$ where $v$ depends only on the values of the carry bits—the $c_i$ are precisely the binary digits of $v$.

Specifying a particular value of $a_0$ and $v$ that satisfy this equation completely determines all the $a_i$. For example, $a_0 = 2, v = 2$ is a solution when $R = 8, n = 4$, because $\big(2^4-1\big)\cdot 2 + 2 = 32 \equiv 0\pmod 8$, and this solution allows us to compute

$$\def\db#1{\color{darkblue}{#1}}\begin{align} a_0&&&=2\\ a_{P(0)} &= 2a_0 &+ \db0 &= 4\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 0 \\ a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 1\\ \hline a_{P^4(0)} &= 2a_{P^3(0)} &+ \db0 &= 2\\ \end{align}$$

where the carry bits are visible in the third column, and all the sums are taken mod $8$. Note that $a_{P^4(0)} = a_0$ as promised. This derivation of the entire set of $a_i$ from a single one plus a choice of $v$ is crucial, so let's see one more example. Let's consider $R = 9, n = 3$. Then we want to choose $a_0$ and $v$ so that $\big(2^3-1\big)a_0 + v \equiv 0\pmod 9$ where $0\le v\lt 8$. One possible solution is $a_0 = 5, v = 1$. Then we can derive the other $a_i$ as follows:

$$\begin{align} a_0&&&=5\\ a_{P(0)} &= 2a_0 &+ \db0 &= 1\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 2 \\\hline a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 5\\ \end{align}$$

And again we have $a_{P^3(0)} = a_0$ as required.
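This recipe—start from $a_0$, peel off the carry bits of $v$ most-significant-first, and repeatedly double mod $R$—can be captured in a short Python sketch (the function name is mine):

```python
def digit_cycle(R, n, a0, v):
    """Derive the whole digit cycle from a0 and v; assumes the governing
    equation (2^n - 1) * a0 + v == 0 (mod R), so that the cycle closes up."""
    assert ((2**n - 1) * a0 + v) % R == 0
    carries = [(v >> (n - 1 - k)) & 1 for k in range(n)]   # bits of v, MSB first
    ds = [a0]
    for c in carries:
        ds.append((2 * ds[-1] + c) % R)
    assert ds[-1] == ds[0]     # P is an n-cycle: we come back to a0
    return ds[:-1]
```

Both worked examples come out as above: `digit_cycle(8, 4, 2, 2)` gives the cycle `[2, 4, 0, 1]` and `digit_cycle(9, 3, 5, 1)` gives `[5, 1, 2]`.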

Since the bits of $v$ are used cyclically, not every pair of $(a_0, v)$ will yield a different solution. Rotating the bits of $v$ and pairing them with different choices of $a_0$ will yield the same cycle of digits starting from a different place. In the first example above, we had $a_0 = 2, v = 0010_2 = 2$. If we were to take $a_0 = 4, v = 0100_2 = 4$ (which also solves $(\star)$) we would get the same cycle of values of the $a_i$ but starting from $4$ instead of from $2$, and similarly if we take $a_0 = 0, v = 1000_2 = 8$ or $a_0 = 1, v = 0001_2 = 1$. So we can narrow down the solution set of $(\star)$ by considering only the so-called bracelets of $v$ rather than all $2^n$ possible values. Two values of $v$ are considered equivalent as bracelets if one is a rotation of the other. When a set of $v$-values are equivalent as bracelets, we need only consider one of them; the others will give the same cyclic sequence of digits, but starting in a different place. For $n = 3$, for example, the bracelets are $000, 001, 011,$ and $111$; the sequences $010$ and $100$ being equivalent to $001$, and so on.
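Enumerating one representative per rotation class is easy to mechanize. A Python sketch (`bracelet_reps` is a hypothetical helper name; it keeps the smallest member of each class):

```python
def bracelet_reps(n):
    """One representative (smallest member) of each rotation class of n-bit values."""
    reps = set()
    for v in range(2**n):
        bits = format(v, '0%db' % n)                       # n-bit binary string
        reps.add(min(int(bits[i:] + bits[:i], 2) for i in range(n)))
    return sorted(reps)
```

For $n=3$ this yields $0, 1, 3, 7$, matching the bracelets $000, 001, 011, 111$ listed above.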

#### Example

Let us take $R = 9, n = 3$, so we want to find 3-digit numerals with property $P_9$. According to $(\star)$ we need $7a_i + v \equiv 0\pmod 9$ where $0\le v\lt 8$. There are 9 possible values for $a_i$; for each one there is at most one possible value of $v$ that makes the sum zero:

$$\begin{array}{rrr} a_i & 7a_i & v \\ \hline 0 & 0 & 0 \\ 1 & 7 & 2 \\ 2 & 14 & 4 \\ 3 & 21 & 6 \\ 4 & 28 & \\ 5 & 35 & 1 \\ 6 & 42 & 3 \\ 7 & 49 & 5 \\ 8 & 56 & 7 \\ \end{array} $$

(For $a_i = 4$ there is no solution, since it would require $v = 8$.) We may disregard the non-bracelet values of $v$, as these will give us solutions that are the same as those given by bracelet values of $v$. The bracelets are:

$$\begin{array}{rl} 000 & 0 \\ 001 & 1 \\ 011 & 3 \\ 111 & 7 \end{array}$$

so we may disregard the solutions except when $v$ is one of $0, 1, 3, 7$. Calculating the digit sequences from these four values of $v$ and the corresponding $a_i$ we find:

$$\begin{array}{ccl} a_0 & v & \text{digits} \\ \hline 0 & 0 & 000 \\ 5 & 1 & 512 \\ 6 & 3 & 637 \\ 8 & 7 & 888 \\ \end{array} $$

(In the second line, for example, we have $a_0 = 5$ and $v = 1$, so $a_{P(0)} = 2\cdot 5 \equiv 1$ and $a_{P^2(0)} = 2\cdot 1 = 2$, exactly as in the worked example above.)

Any number of three digits, for which $2N$ contains exactly the same three digits, in base 9, must therefore consist of exactly the digits $000$, $512$, $637$, or $888$.
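The whole base-9 tabulation above can be reproduced mechanically; a short Python sketch combining the governing equation, the bracelet values, and the cycle derivation:

```python
R, n = 9, 3
cycles = []
for v in [0, 1, 3, 7]:                        # the 3-bit bracelets
    for a0 in range(R):
        if ((2**n - 1) * a0 + v) % R == 0:    # the governing equation
            ds = [a0]
            for k in range(n):                # carries: bits of v, MSB first
                ds.append((2 * ds[-1] + ((v >> (n - 1 - k)) & 1)) % R)
            cycles.append((a0, v, ds[:-1]))
```

Running this produces exactly the four rows of the table: $(0,0)\to000$, $(5,1)\to512$, $(6,3)\to637$, and $(8,7)\to888$.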

#### A warning

All the foregoing assumes that the permutation $P$ is *a single cycle*. In general, it may not be. Suppose we did an analysis like that above for $R = 10, n = 5$ and found that there was no possible digit set, other than the trivial set `00000`, that satisfied the governing equation $(\star)$. This would *not* completely rule out a base-10 solution with 5 digits, because the analysis only rules out a *cyclic* set of digits. There could still be a solution where $P$ was a product of a $2$-cycle and a $3$-cycle, or a product of still smaller cycles.

Something like this occurs, for example, in one of the smaller cases. Solving the governing equation yields only a handful of possible digit cycles, but there are several additional solutions beyond those. These correspond to permutations $P$ with more than one cycle; in one such solution, for example, $P$ exchanges two of the digits and leaves the other two fixed.

For this reason we cannot rule out the possibility of an $n$-digit solution without first considering all smaller $n$.

#### The Large Equals Odd rule

When $R$ is even there is a simple condition we can use to rule out certain sets of digits from being single-cycle solutions. Recall that $b_i \equiv 2a_i + c_i\pmod R$ and that $c_0 = 0$. Let us agree that a digit $d$ is *large* if $2d \ge R$ and *small* otherwise. That is, $d$ is large if, upon doubling, it causes a carry into the next column to the left.

Since $b_i \equiv 2a_i + c_i\pmod R$, where the $c_i$ are carry bits, and $R$ is even, we see that, except for $b_0$, the digit $b_i$ is odd precisely when there is a carry from the next column to the right, which occurs precisely when $a_{i-1}$ is large. Thus the number of odd digits among $b_1,\ldots,b_{n-1}$ is equal to the number of large digits among $a_0,\ldots,a_{n-2}$. This leaves the digits $b_0$ and $a_{n-1}$ uncounted. But $b_0$ is never odd, since there is never a carry in the rightmost position, and $a_{n-1}$ is always small (since otherwise $2N$ would have $n+1$ digits, which is not allowed). So the number of large digits in $N$ is exactly equal to the number of odd digits in $2N$. And since $N$ and $2N$ have exactly the same digits, the number of large digits in $N$ is equal to the number of odd digits in $N$ itself. Observe that this is the case for our running example $1042$: there is one odd digit (the 1) and one large digit (the 4).

When $R$ is odd the analogous condition is somewhat more complicated, but since the main case of interest is $R = 10$, we have the useful rule that:

For $R$ even, the number of odd digits in any solution $N$ is equal to the number of large digits in $N$.
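The rule is a one-line check. A Python sketch (the predicate name is mine):

```python
def obeys_large_equals_odd(digits, R):
    """For even R: the count of odd digits must equal the count of large
    digits, where a digit d is large when 2*d >= R."""
    assert R % 2 == 0
    return sum(d % 2 for d in digits) == sum(2 * d >= R for d in digits)
```

For example, the running example $1042$ in base 8 passes (one odd digit, one large digit), as does $125874$ in base 10 (three of each).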

### Conditions on the order of digits in a solution

We have determined, using the above method, that the digits $\{1,2,5\}$ might form a base-9 numeral with property $P_9$. Now we would like to arrange them into a base-9 numeral that actually does have that property. Again let us write $N = a_2a_1a_0$ and $2N = b_2b_1b_0$, with $b_i \equiv 2a_i + c_i\pmod 9$. Note that if $a_i = 1$, then $b_i = 3$ (if there was a carry from the next column to the right) or $b_i = 2$ (if there was no carry), but since $3$ is impossible, we must have $b_i = 2$ and therefore $a_{i-1}$ must be small, since there is no carry into position $i$. But since $a_{i-1}$ is also one of $1, 2, 5$, and it cannot also be the $1$, it must be the $2$. This shows that the 1, unless it appears in the rightmost position, must be to the left of the $2$; it cannot be to the left of the $5$. Similarly, if $a_i = 2$ then $b_i = 5$, because $b_i = 4$ is impossible, so the $2$ must be to the left of a large digit, which must be the $5$. Similar reasoning produces no constraint on the position of the $5$; it could be to the left of a small digit (in which case it doubles to $1$) or a large digit (in which case it doubles to $2$). We can summarize these findings as follows:

$$\begin{array}{cl} \text{digit} & \text{to the left of} \\ \hline 1 & 1, 2, \text{end} \\ 2 & 5 \\ 5 & 1,2,5,\text{end} \end{array}$$

Here “end” means that the indicated digit could be the rightmost.

Furthermore, the left digit of $N$ must be small (or else there would be a carry in the leftmost place and $2N$ would have 4 digits instead of 3) so it must be either 1 or 2. It is not hard to see from this table that the digits must be in the order $125$ or $251$, and indeed, both of those numbers have the required property: $2\cdot 125_9 = 251_9$, and $2\cdot 251_9 = 512_9$.

This was a simple example, but in more complicated cases it is helpful to draw the order constraints as a graph. Suppose we draw a graph with one vertex for each digit, and one additional vertex to represent the end of the numeral. The graph has an edge from vertex $u$ to vertex $w$ whenever $u$ can appear to the left of $w$. Then the graph drawn for the table above looks like this:

A 3-digit numeral with property $P_9$ corresponds to a path in this graph that starts at one of the nonzero small digits (marked in blue), ends at the red node marked ‘end’, and visits each node exactly once. Such a path is called *hamiltonian*. Obviously, self-loops never occur in a hamiltonian path, so we will omit them from future diagrams.
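The path search itself is a small depth-first enumeration. A Python sketch, using the constraint table for the digit set $\{1,2,5\}$ given above (an `'end'` sentinel node stands for the end of the numeral; function and variable names are mine):

```python
def hamiltonian_paths(adj, starts, end='end'):
    """Depth-first search for paths that visit every node exactly once."""
    nodes = set(adj) | {end}
    found = []
    def extend(path, seen):
        if path[-1] == end:
            if len(seen) == len(nodes):
                found.append(path)
            return
        for w in adj.get(path[-1], ()):
            if w not in seen:
                extend(path + [w], seen | {w})
    for s in starts:
        extend([s], {s})
    return found

# constraint graph for the base-9 digit set {1, 2, 5}, from the table above
# (self-loops omitted, since a hamiltonian path can never use them)
adj = {1: [2, 'end'], 2: [5], 5: [1, 2, 'end']}
paths = hamiltonian_paths(adj, starts=[1, 2])   # start at a nonzero small digit
```

The two paths found correspond to the numerals $125$ and $251$, the two orderings obtained by hand above.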

Now we will consider the digit set $\{3,6,7\}$, again base 9. An analysis similar to the foregoing allows us to construct the following graph:

Here it is immediately clear that the only hamiltonian path is $3\to 7\to 6\to \text{end}$, and indeed, $2\cdot 376_9 = 763_9$.

In general there might be multiple instances of a digit, and so multiple nodes labeled with that digit. Analysis of the $000$ case produces a graph with no legal start nodes and so no solutions, unless leading zeroes are allowed, in which case $000$ is a perfectly valid solution. Analysis of the $888$ case produces a graph with no path to the end node and so no solutions. These two trivial patterns appear for all $R$ and all $n$, and we will ignore them from now on.

Returning to our ongoing example, $1042$ in base 8, we see that $1$ and $2$ must double to $2$ and $4$, so each must be to the left of a small digit, but $0$ and $4$ can double to either $0$ or $1$ and so could be to the left of anything. Here the constraints are so lax that the graph doesn't help us narrow them down much:

Observing that the only arrow into the 4 is from 0, so that the 4 must follow the 0, and that the entire number must begin with 1 or 2, we can enumerate the solutions:

1042 1204 2041 2104

If leading zeroes are allowed we have also:

0412 0421

All of these are solutions in base 8.
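These are easy to double-check by converting from base 8 and comparing digit multisets—a quick Python sketch:

```python
def digits_in_base(m, R):
    ds = []
    while m:
        ds.append(m % R)
        m //= R
    return ds   # least-significant digit first

for numeral in ['1042', '1204', '2041', '2104']:
    n = int(numeral, 8)                       # parse as base 8
    assert sorted(digits_in_base(n, 8)) == sorted(digits_in_base(2 * n, 8))
```

For instance, doubling $1042_8$ gives $2104_8$, which has the same four octal digits rearranged.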

### The case of $R = 10$

Now we turn to our main problem, solutions in base 10.

To find *all* the solutions of length 6 requires an enumeration of smaller solutions, which, if they existed, might be concatenated into a solution of length 6. This is because our analysis of the digit sets that can appear in a solution assumes that the digits are permuted *cyclically*; that is, the permutations that we considered had only one cycle each.

There are no smaller solutions, but to prove that the length 6 solutions are minimal, we must analyze the cases for smaller $n$ and rule them out. We now produce a complete analysis of the base 10 case for each length up to 6. For $n = 1$ there is only the trivial solution of $N = 0$, which we disregard. (The question asked for a positive number anyway.)

For $n = 2$, we want to find solutions of $3a_i + v \equiv 0\pmod{10}$ where $v$ is a two-bit bracelet number, one of $0, 1,$ or $3$. Tabulating the values of $a_i$ and $v$ that solve this equation we get:

$$\begin{array}{cc} v& a_i \\ \hline 0 & 0 \\ 1& 3 \\ 3& 9 \\ \end{array}$$

We can disregard the $v = 0$ and $v = 3$ solutions because the former yields the trivial solution $00$ and the latter yields the nonsolution $99$. So the only possibility we need to investigate further is $a_i = 3, v = 1$, which corresponds to the digit sequence $36$: doubling $3$ gives us $6$, and doubling $6$, plus a carry, gives us $3$ again.

But tabulating which digits must be left of which informs us that there is no solution with just $3$ and $6$, because the graph we get, once self-loops are eliminated, looks like this:

which obviously has no hamiltonian path. Thus there is no solution for $n = 2$.

For $n = 3$ we need to solve the equation $7a_i + v \equiv 0\pmod{10}$ where $v$ is a bracelet number in $\{0,\ldots,7\}$, specifically one of $0, 1, 3,$ or $7$. Since $7$ and $10$ are relatively prime, for each $v$ there is a single $a_i$ that solves the equation. Tabulating the possible values of $a_i$ as before, and this time omitting rows with no solution, we have:

$$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0& 0 & 000\\ 1& 7 & 748 \\ 3& 1 & 125\\ 7&9 & 999\\ \end{array}$$

The digit sequences $000$ and $999$ yield trivial solutions or nonsolutions as usual, and we will omit them in the future. The other two lines suggest the digit sets $748$ and $125$, both of which fail the “odd equals large” rule.

This analysis rules out the possibility of a digit set with $n = 3$, but it does not *completely* rule out a 3-digit solution, since one could be obtained by concatenating a one-digit and a two-digit solution, or three one-digit solutions. However, we know by now that no one- or two-digit solutions exist. Therefore there are no 3-digit solutions in base 10.

For $n = 4$ the governing equation is $15a_i + v \equiv 0\pmod{10}$ where $v$ is a 4-bit bracelet number, one of $0, 1, 3, 5, 7,$ or $15$. This is a little more complicated because $15$ and $10$ are not relatively prime. Tabulating the possible digit sets, we get:

$$\begin{array}{crrl} a_i & 15a_i & v & \text{digits}\\ \hline 0 & 0 & 0 & 0000\\ 1 & 5 & 5 & 1250\\ 1 & 5 & 15 & 1375\\ 2 & 0 & 0 & 2486\\ 3 & 5 & 5 & 3636\\ 3 & 5 & 15 & 3751\\ 4 & 0 & 0 & 4862\\ 5 & 5 & 5 & 5012\\ 5 & 5 & 15 & 5137\\ 6 & 0 & 0 & 6248\\ 7 & 5 & 5 & 7498\\ 7 & 5 & 15 & 7513\\ 8 & 0 & 0 & 8624 \\ 9 & 5 & 5 & 9874\\ 9 & 5 & 15 & 9999 \\ \end{array}$$

where the second column has been reduced mod $10$. Note that even restricting to bracelet numbers the table still contains duplicate digit sequences; the 15 entries on the right contain only the seven basic cyclic sequences $0000$, $1250$, $1375$, $2486$, $3636$, $7498$, and $9999$. Of these, only $0000$, $3636$, and $9999$ obey the odd equals large criterion, and we will disregard $0000$ and $9999$ as usual, leaving only $3636$. We construct the corresponding graph for this digit set as follows: $3$ must double to $6$, not $7$, so each $3$ must be left of a small digit or the end of the numeral. Similarly, $6$ must double to $3$, not $2$, so each $6$ must be left of a large digit, that is, of the other $6$. The corresponding graph is:

which evidently has no hamiltonian path: whichever $3$ we start at, we can continue only to the other $3$ and then to the end node, and the $6$s are never visited. So there is no solution with $R = 10$ and $n = 4$.

For $n = 5$ we leave the analysis as an exercise. There are 8 solutions to the governing equation, all of which are ruled out by the odd equals large rule.
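The exercise can be checked mechanically; a Python sketch that enumerates the solutions of the governing equation for $n = 5$ and applies the rule to the nontrivial cycles (names are mine):

```python
R, n = 10, 5
bracelets = [0, 1, 3, 5, 7, 11, 15, 31]   # the 5-bit bracelet numbers

# pairs (a0, v) satisfying the governing equation (2^5 - 1) a0 + v = 0 (mod 10)
pairs = [(a0, v) for v in bracelets for a0 in range(R)
         if ((2**n - 1) * a0 + v) % R == 0]

def cycle(a0, v):
    ds = [a0]
    for k in range(n):                    # carries: bits of v, MSB first
        ds.append((2 * ds[-1] + ((v >> (n - 1 - k)) & 1)) % R)
    return ds[:-1]

def large_equals_odd(ds):
    return sum(d % 2 for d in ds) == sum(2 * d >= R for d in ds)

nontrivial = [cycle(a0, v) for a0, v in pairs if len(set(cycle(a0, v))) > 1]
```

There are indeed 8 solutions of the equation, and none of the nontrivial cycles satisfies the rule.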

For $n = 6$ the possible solutions are given by the governing equation $63a_i + v \equiv 0\pmod{10}$ where $v$ is a 6-bit bracelet number, one of $0, 1, 3, 5, 7, 9, 11, 13, 15, 21, 23, 27, 31,$ or $63$. Tabulating the possible digit sets, we get:

$$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0 & 0 & 000000\\ 1 & 3 & 362486 \\ 3 & 9 & 986249 \\ 5 & 5 & 500012 \\ 7 & 1 & 124875 \\ 9 & 7 & 748748 \\ 11 & 3 & 362501 \\ 13 & 9 & 986374 \\ 15 & 5 & 500137 \\ 21 & 3 & 363636 \\ 23 & 9 & 987499 \\ 27 & 1 & 125125 \\ 31 & 3 & 363751 \\ 63 & 9 & 999999 \\ \end{array}$$

After ignoring $000000$ and $999999$ as usual, the large equals odd rule allows us to ignore all the other sequences except $124875$ and $363636$. The latter fails for the same reason that $36$ did when $n = 2$. But $124875$, the lone survivor, gives us a complicated derived graph containing many hamiltonian paths, every one of which is a solution to the problem:

It is not hard to pick out from this graph the minimal solution $125874$, for which $2\cdot 125874 = 251748$, and also our old friend $142857$ for which $2\cdot 142857 = 285714$.
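Rather than tracing the hamiltonian paths by hand, we can confirm the claim by doubling every arrangement of the surviving digit set—a Python sketch:

```python
from itertools import permutations

digits = (1, 2, 4, 5, 7, 8)    # the lone surviving digit set for n = 6
solutions = []
for p in permutations(digits):
    # no digit is 0, so leading zeroes cannot occur
    n = int(''.join(map(str, p)))
    if sorted(str(n)) == sorted(str(2 * n)):
        solutions.append(n)
```

This recovers all 12 six-digit solutions, with $125874$ the smallest and $142857$ among them.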

We see here the reason why all the small numbers with property $P_{10}$ contain the digits $1, 2, 4, 5, 7,$ and $8$. The constraints on *which* digits can appear in a solution are quite strict, and rule out all other sequences of six digits and all shorter sequences. But once a set of digits passes these stringent conditions, the constraints on its *order* are much looser, because $2N$ is only required to have the digits of $N$ in *some* order, and there are many possible orders, many of which will satisfy the rather loose conditions involving the distribution of the carry bits. This graph is typical: it has a set of small nodes and a set of large nodes, and each node is connected to either *all* the small nodes or *all* the large nodes, so that the graph has many edges, and, as in this case, a largish clique of small nodes and a largish clique of large nodes, and as a result many hamiltonian paths.

### Onward

This analysis is tedious but is simple enough to perform by hand in under an hour. As $n$ increases further, enumerating the solutions of the governing equation becomes very time-consuming. I wrote a simple computer program to perform the analysis for given $R$ and $n$, and to emit the possible digit sets that satisfied the large equals odd criterion. I had wondered if *every* base-10 solution contained equal numbers of the digits $1, 2, 4, 5, 7,$ and $8$. This is the case for $n = 6$ (where the only admissible digit set is $124578$), for $n = 7$ (where the only admissible sets are $0124578$ and $1245789$), and for $n = 8$. But when we reach $n = 9$ the increasing number of bracelets has loosened up the requirements a little and there are 5 admissible digit sets. I picked two of the promising-seeming ones and quickly found solutions by hand, both of which wreck any theory that the digits $1, 2, 4, 5, 7, 8$ must all appear the same number of times.
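A minimal version of such a program might look like this (a sketch, not the author's actual code, combining the governing equation, the bracelet filter, and the large equals odd test):

```python
def admissible_digit_sets(R, n):
    """Digit cycles for n-digit, base-R, single-cycle solutions that survive
    the large-equals-odd filter (R must be even); repdigit cycles such as
    000...0 and 999...9 are dropped as trivial."""
    # bracelet (rotation-class) representatives of the n-bit carry patterns
    reps = set()
    for v in range(2**n):
        bits = format(v, '0%db' % n)
        reps.add(min(int(bits[i:] + bits[:i], 2) for i in range(n)))

    found = []
    for v in sorted(reps):
        for a0 in range(R):
            if ((2**n - 1) * a0 + v) % R:
                continue                            # governing equation fails
            ds = [a0]
            for k in range(n):                      # carries: bits of v, MSB first
                ds.append((2 * ds[-1] + ((v >> (n - 1 - k)) & 1)) % R)
            ds = ds[:-1]
            if len(set(ds)) == 1:
                continue                            # trivial repdigit cycle
            if sum(d % 2 for d in ds) == sum(2 * d >= R for d in ds):
                found.append(ds)
    return found
```

For $R = 10$ this reproduces the hand analysis above: nothing survives for $n = 3$ or $n = 5$, and for $n = 6$ the only admissible cycles are $124875$ and $363636$.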

### Acknowledgments

Thanks to Karl Kronenfeld for corrections and many helpful suggestions.

[This article was published last month on the math.stackexchange blog, which seems to have died young, despite many earnest-sounding promises beforehand from people who claimed they would contribute material. I am repatriating it here.]

A recent question on math.stackexchange asks for the smallest positive integer for which the number has the same decimal digits in some other order.

Math geeks may immediately realize that has this property, because it is the first 6 digits of the decimal expansion of , and the cyclic behavior of the decimal expansion of is well-known. But is this the *minimal* solution? It is not. Brute-force enumeration of the solutions quickly reveals that there are 12 solutions of 6 digits each, all permutations of , and that larger solutions, such as 1025874 and 1257489 seem to follow a similar pattern. What is happening here?

Stuck in Dallas-Fort Worth airport one weekend, I did some work on the problem, and although I wasn't able to solve it completely, I made significant progress. I found a method that allows one to hand-calculate that there is no solution with fewer than six digits, and to enumerate all the solutions with 6 digits, including the minimal one. I found an explanation for the surprising behavior that solutions tend to be permutations of one another. The short form of the explanation is that there are fairly strict conditions on which *sets* of digits can appear in a solution of the problem. But once the set of digits is chosen, the conditions on that *order* of the digits in the solution are fairly lax.

So one typically sees, not only in base 10 but in other bases, that the solutions to this problem fall into a few classes that are all permutations of one another; this is exactly what happens in base 10 where all the 6-digit solutions are permutations of . As the number of digits is allowed to increase, the strict first set of conditions relaxes a little, and other digit groups appear as solutions.

### Notation

The property of interest, , is that the numbers and have exactly the same base- digits. We would like to find numbers having property for various , and we are most interested in . Suppose is an -digit numeral having property ; let the (base-) digits of be and similarly the digits of are . The reader is encouraged to keep in mind the simple example of which we will bring up from time to time.

Since the digits of and are the same, in a different order, we may say that for some permutation . In general might have more than one cycle, but we will suppose that is a single cycle. All the following discussion of will apply to the individual cycles of in the case that is a product of two or more cycles. For our example of , we have in cycle notation. We won't need to worry about the details of , except to note that completely exhaust the indices , and that because is an -cycle.

### Conditions on the set of digits in a solution

For each we have $$a_{P(i)} = b_{i} \equiv 2a_{i} + c_i\pmod R $$ where the ‘carry bit’ is either 0 or 1 and depends on whether there was a carry when doubling . (When we are in the rightmost position and there is never a carry, so .) We can then write:

$$\begin{align} a_{P(P(i))} &= 2a_{P(i)} + c_{P(i)} \\ &= 2(2a_{i} + c_i) + c_{P(i)} &&= 4a_i + 2c_i + c_{P(i)}\\ a_{P(P(P(i)))} &= 2(4a_i + 2c_i + c_{P(P(i)})) + c_{P(i)} &&= 8a_i + 4c_i + 2c_{P(i)} + c_{P(P(i))}\\ &&&\vdots\\ a_{P^n(i)} &&&= 2^na_i + v \end{align} $$

all equations taken . But since is an -cycle, , so we have $$a_i \equiv 2^na_i + v\pmod R$$ or equivalently $$\big(2^n-1\big)a_i + v \equiv 0\pmod R\tag{$\star$}$$ where depends only on the values of the carry bits —the are precisely the binary digits of .

Specifying a particular value of and that satisfy this equation completely determines all the . For example, is a solution when because , and this solution allows us to compute

$$\def\db#1{\color{darkblue}{#1}}\begin{align} a_0&&&=2\\ a_{P(0)} &= 2a_0 &+ \db0 &= 4\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 0 \\ a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 1\\ \hline a_{P^4(0)} &= 2a_{P^3(0)} &+ \db0 &= 2\\ \end{align}$$

where the carry bits are visible in the third column, and all the sums are taken . Note that as promised. This derivation of the entire set of from a single one plus a choice of is crucial, so let's see one more example. Let's consider . Then we want to choose and so that where . One possible solution is . Then we can derive the other as follows:

$$\begin{align} a_0&&&=5\\ a_{P(0)} &= 2a_0 &+ \db1 &= 1\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 2 \\\hline a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 5\\ \end{align}$$

And again we have as required.

Since the bits of are used cyclically, not every pair of will yield a different solution. Rotating the bits of and pairing them with different choices of will yield the same cycle of digits starting from a different place. In the first example above, we had . If we were to take (which also solves ) we would get the same cycle of values of the but starting from instead of from , and similarly if we take or . So we can narrow down the solution set of by considering only the so-called bracelets of rather than all possible values. Two values of are considered equivalent as bracelets if one is a rotation of the other. When a set of -values are equivalent as bracelets, we need only consider one of them; the others will give the same cyclic sequence of digits, but starting in a different place. For , for example, the bracelets are and ; the sequences and being equivalent to , and so on.

#### Example

Let us take , so we want to find 3-digit numerals with property . According to we need where . There are 9 possible values for ; for each one there is at most one possible value of that makes the sum zero:

$$\pi \approx 3 $$

$$\begin{array}{rrr} a_i & 7a_i & v \\ \hline 0 & 0 & 0 \\ 1 & 7 & 2 \\ 2 & 14 & 4 \\ 3 & 21 & 6 \\ 4 & 28 & \\ 5 & 35 & 1 \\ 6 & 42 & 3 \\ 7 & 49 & 5 \\ 8 & 56 & 7 \\ \end{array} $$

(For there is no solution.) We may disregard the non-bracelet values of , as these will give us solutions that are the same as those given by bracelet values of . The bracelets are:

$$\begin{array}{rl} 000 & 0 \\ 001 & 1 \\ 011 & 3 \\ 111 & 7 \end{array}$$

so we may disregard the solutions exacpt when . Calculating the digit sequences from these four values of and the corresponding we find:

$$\begin{array}{ccl} a_0 & v & \text{digits} \\ \hline 0 & 0 & 000 \\ 5 & 1 & 512 \\ 6 & 3 & 637 \\ 8 & 7 & 888 \ \end{array} $$

(In the second line, for example, we have , so and .)

Any number of three digits, for which contains exactly the same three digits, in base 9, must therefore consist of exactly the digits or .

#### A warning

All the foregoing assumes that the permutation is *a single cycle*. In general, it may not be. Suppose we did an analysis like that above for and found that there was no possible digit set, other than the trivial set `00000`

, that satisfied the governing equation . This would *not* completely rule out a base-10 solution with 5 digits, because the analysis only rules out a *cyclic* set of digits. There could still be a solution where was a product of a and a -cycle, or a product of still smaller cycles.

Something like this occurs, for example, in the case. Solving the governing equation yields only four possible digit cycles, namely , and . But there are several additional solutions: and . These correspond to permutations with more than one cycle. In the case of , for example, exchanges the and the , and leaves the and the fixed.

For this reason we cannot rule out the possibility of an -digit solution without first considering all smaller .

#### The Large Equals Odd rule

When is even there is a simple condition we can use to rule out certain sets of digits from being single-cycle solutions. Recall that and . Let us agree that a digit is *large* if and *small* otherwise. That is, is large if, upon doubling, it causes a carry into the next column to the left.

Since , where the are carry bits, we see that, except for , the digit is odd precisely when there is a carry from the next column to the right, which occurs precisely when is large. Thus the number of odd digits among is equal to the number of large digits among .

This leaves the digits and uncounted. But is never odd, since there is never a carry in the rightmost position, and is always small (since otherwise would have digits, which is not allowed). So the number of large digits in is exactly equal to the number of odd digits in . And since and have exactly the same digits, the number of large digits in is equal to the number of odd digits in . Observe that this is the case for our running example : there is one odd digit and one large digit (the 4).

When is odd the analogous condition is somewhat more complicated, but since the main case of interest is , we have the useful rule that:

For even, the number of odd digits in any solution is equal to the number of large digits in .

# Conditions on the order of digits in a solution

We have determined, using the above method, that the digits might form a base-9 numeral with property . Now we would like to arrange them into a base-9 numeral that actually does have that property. Again let us write and , with . Note that if , then (if there was a carry from the next column to the right) or (if there was no carry), but since is impossible, we must have and therefore must be small, since there is no carry into position . But since is also one of , and it cannot also be , it must be . This shows that the 1, unless it appears in the rightmost position, must be to the left of the ; it cannot be to the left of the . Similarly, if then , because is impossible, so the must be to the left of a large digit, which must be the . Similar reasoning produces no constraint on the position of the ; it could be to the left of a small digit (in which case it doubles to ) or a large digit (in which case it doubles to ). We can summarize these findings as follows:

$$\begin{array}{cl} \text{digit} & \text{to the left of} \\ \hline 1 & 1, 2, \text{end} \\ 2 & 5 \\ 5 & 1,2,5,\text{end} \end{array}$$

Here “end” means that the indicated digit could be the rightmost.

Furthermore, the left digit of must be small (or else there would be a carry in the leftmost place and would have 4 digits instead of 3) so it must be either 1 or 2. It is not hard to see from this table that the digits must be in the order or , and indeed, both of those numbers have the required property: , and .

This was a simple example, but in more complicated cases it is helpful to draw the order constraints as a graph. Suppose we draw a graph with one vertex for each digit, and one additional vertex to represent the end of the numeral. The graph has an edge from vertex to whenever can appear to the left of . Then the graph drawn for the table above looks like this:

A 3-digit numeral with property corresponds to a path in this graph that starts at one of the nonzero small digits (marked in blue), ends at the red node marked ‘end’, and visits each node exactly once. Such a path is called *hamiltonian*. Obviously, self-loops never occur in a hamiltonian path, so we will omit them from future diagrams.

Now we will consider the digit set , again base 9. An analysis similar to the foregoing allows us to construct the following graph:

Here it is immediately clear that the only hamiltonian path is , and indeed, .

In general there might be multiple instances of a digit, and so multiple nodes labeled with that digit. Analysis of the case produces a graph with no legal start nodes and so no solutions, unless leading zeroes are allowed, in which case is a perfectly valid solution. Analysis of the case produces a graph with no path to the end node and so no solutions. These two trivial patterns appear for all and all , and we will ignore them from now on.

Returning to our ongoing example, in base 8, we see that and must double to and , so must be to the left of small digits, but and can double to either or and so could be to the left of anything. Here the constraints are so lax that the graph doesn't help us narrow them down much:

Observing that the only arrow into the 4 is from 0, so that the 4 must follow the 0, and that the entire number must begin with 1 or 2, we can enumerate the solutions:

1042 1204 2041 2104

If leading zeroes are allowed we have also:

0412 0421

All of these are solutions in base 8.

### The case of

Now we turn to our main problem, solutions in base 10.

To find *all* the solutions of length 6 requires an enumeration of smaller solutions, which, if they existed, might be concatenated into a solution of length 6. This is because our analysis of the digit sets that can appear in a solution assumes that the digits are permuted *cyclically*; that is, the permutations that we considered had only one cycle each. If we perform the analy

There are no smaller solutions, but to prove that the length 6 solutions are minimal, we must analyze the cases for smaller $k$ and rule them out. We now produce a complete analysis of the base 10 case with $b=10$ and $k\le 6$. For $k=1$ there is only the trivial solution of $0$, which we disregard. (The question asked for a positive number anyway.)

For $k=2$, we want to find solutions of $3a_i \equiv -v \pmod{10}$ where $v$ is a two-bit bracelet number, one of $0$, $1$, or $3$. Tabulating the values of $v$ and $a_i$ that solve this equation we get:

$$\begin{array}{cc} v& a_i \\ \hline 0 & 0 \\ 1& 3 \\ 3& 9 \\ \end{array}$$

We can disregard the $v=0$ and $v=3$ solutions because the former yields the trivial solution $00$ and the latter yields the nonsolution $99$. So the only possibility we need to investigate further is $v=1$, which corresponds to the digit sequence $36$: Doubling $3$ gives us $6$, and doubling $6$, plus a carry, gives us $3$ again.

But tabulating which digits must be left of which informs us that there is no solution with just $3$ and $6$, because the graph we get, once self-loops are eliminated, looks like this:

which obviously has no hamiltonian path. Thus there is no solution for $k=2$.

For $k=3$ we need to solve the equation $7a_i \equiv -v \pmod{10}$ where $v$ is a bracelet number in $\{0,\ldots,7\}$, specifically one of $0$, $1$, $3$, or $7$. Since $7$ and $10$ are relatively prime, for each $v$ there is a single $a_i$ that solves the equation. Tabulating the possible values of $v$ and $a_i$ as before, and this time omitting rows with no solution, we have:

$$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0& 0 & 000\\ 1& 7 & 748 \\ 3& 1 & 125\\ 7&9 & 999\\ \end{array}$$
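The table can be reproduced mechanically. This Python sketch assumes the governing congruence has the form $(2^k-1)a_i \equiv -v \pmod{10}$, which is consistent with the tables in this article:

```python
# k = 3: solve 7a ≡ -v (mod 10) for each bracelet number v.
# Since gcd(7, 10) = 1, each v has exactly one solution a.
table = [(v, a) for v in (0, 1, 3, 7)
         for a in range(10) if (7 * a + v) % 10 == 0]
print(table)   # [(0, 0), (1, 7), (3, 1), (7, 9)]
```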

The digit sequences $000$ and $999$ yield trivial solutions or nonsolutions as usual, and we will omit them in the future. The other two lines suggest the digit sets $\{7,4,8\}$ and $\{1,2,5\}$, both of which fail the “odd equals large” rule.

This analysis rules out the possibility of a digit set with , but it does not *completely* rule out a 3-digit solution, since one could be obtained by concatenating a one-digit and a two-digit solution, or three one-digit solutions. However, we know by now that no one- or two-digit solutions exist. Therefore there are no 3-digit solutions in base 10.
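This conclusion is easy to confirm by brute force (a sketch of my own, not the article's program):

```python
def has_property(n):
    # True if 2n uses exactly the same decimal digits as n.
    return sorted(str(n)) == sorted(str(2 * n))

# No positive solutions with one, two, or three digits:
print([n for n in range(1, 1000) if has_property(n)])   # []
```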

For $k=4$ the governing equation is $15a_i \equiv -v \pmod{10}$ where $v$ is a 4-bit bracelet number, one of $0, 1, 3, 5, 7,$ or $15$. This is a little more complicated because $\gcd(15,10)=5\ne 1$. Tabulating the possible digit sets, we get:

$$\begin{array}{crrl} a_i & 15a_i& v & \text{digits}\\ \hline 0 & 0 & 0 & 0000\\ 1 & 5 & 5 & 1250\\ 1 & 5 & 15 & 1375\\ 2 & 0 & 0 & 2486\\ 3 & 5 & 5 & 3749\\ 3 & 5 & 15 & 3751\\ 4 & 0 & 0 & 4862\\ 5 & 5 & 5 & 5012\\ 5 & 5 & 5 & 5137\\ 6 & 0 & 0 & 6248\\ 7 & 5 & 5 & 7493\\ 7 & 5 & 5 & 7513\\ 8 & 0 & 0 & 8624 \\ 9 & 5 & 5 & 9374\\ 9 & 5 & 15 & 9999 \\ \end{array}$$

where the second column has been reduced mod $10$. Note that even restricting to bracelet numbers the table still contains duplicate digit sequences; the 15 entries on the right contain only the six basic sequences $0000$, $1250$, $1375$, $2486$, $3749$, and $9999$. Of these, only $0000$, $3749$, and $9999$ obey the odd equals large criterion, and we will disregard $0000$ and $9999$ as usual, leaving only $3749$. We construct the corresponding graph for this digit set as follows: $3$ must double to $7$, not $6$, so must be left of a large digit, $7$ or $9$. Similarly $4$ must double to $9$, not $8$, and so must also be left of $7$ or $9$. $9$ must also double to $9$, so must be left of $7$ or $9$. Finally, $7$ must double to $4$, so must be left of $3$, $4$, or the end of the numeral. The corresponding graph is:

which evidently has no hamiltonian path: whichever of 3 or 4 we start at, we cannot visit the other without passing through 7, and then we cannot reach the end node without passing through 7 a second time. So there is no solution with $b=10$ and $k=4$.
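Assuming the digit set under discussion here is $\{3,7,4,9\}$ (my reading of the table above), the graph argument can be double-checked by exhausting all 24 arrangements:

```python
from itertools import permutations

# No arrangement of the digits 3, 4, 7, 9 doubles to a permutation of itself.
arrangements = (''.join(p) for p in permutations('3479'))
survivors = [s for s in arrangements
             if sorted(str(2 * int(s))) == sorted(s)]
print(survivors)   # []
```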

We leave the case $k=5$ as an exercise: there are 8 solutions to the governing equation, all of which are ruled out by the odd equals large rule.

For $k=6$ the possible solutions are given by the governing equation $63a_i \equiv -v \pmod{10}$, where $v$ is a 6-bit bracelet number, one of $0, 1, 3, 5, 7, 9, 11, 13, 15, 21, 23, 27, 31,$ or $63$. Tabulating the possible digit sets, we get:

$$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0 & 0 & 000000\\ 1 & 3 & 362486 \\ 3 & 9 & 986249 \\ 5 & 5 & 500012 \\ 7 & 1 & 124875 \\ 9 & 7 & 748748 \\ 11 & 3 & 362501 \\ 13 & 9 & 986374 \\ 15 & 5 & 500137 \\ 21 & 3 & 363636 \\ 23 & 9 & 989899 \\ 27 & 1 & 125125 \\ 31 & 3 & 363751 \\ 63 & 9 & 999999 \\ \end{array}$$

After ignoring $000000$ and $999999$ as usual, the large equals odd rule allows us to ignore all the other sequences except $124875$ and $363636$. The latter fails for the same reason that $36$ did when $k=2$. But $124875$, the lone survivor, gives us a complicated derived graph containing many hamiltonian paths, every one of which is a solution to the problem:

It is not hard to pick out from this graph the minimal solution $125874$, for which $2\cdot125874 = 251748$, and also our old friend $142857$, for which $2\cdot142857 = 285714$.
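All of this is confirmed by the brute-force enumeration mentioned at the outset; a quick sketch:

```python
# Every n below one million whose double has the same decimal digits.
solutions = [n for n in range(1, 10**6)
             if sorted(str(n)) == sorted(str(2 * n))]

print(min(solutions))       # 125874, the minimal solution
print(len(solutions))       # 12, all of them six digits long
print(142857 in solutions)  # True
```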

We see here the reason why all the small numbers with this property contain the digits $1, 2, 4, 5, 7,$ and $8$. The constraints on *which* digits can appear in a solution are quite strict, and rule out all other sequences of six digits and all shorter sequences. But once a set of digits passes these stringent conditions, the constraints on the order of the digits are much looser, because $2n$ is only required to have the digits of $n$ in *some* order, and there are many possible orders, many of which will satisfy the rather loose conditions involving the distribution of the carry bits. This graph is typical: it has a set of small nodes and a set of large nodes, and each node is connected to either *all* the small nodes or *all* the large nodes, so that the graph has many edges, including a largish clique of small nodes and a largish clique of large nodes, and as a result many hamiltonian paths.

### Onward

This analysis is tedious but is simple enough to perform by hand in under an hour. As $k$ increases further, enumerating the solutions of the governing equation becomes very time-consuming. I wrote a simple computer program to perform the analysis for given $k$ and $b$, and to emit the possible digit sets that satisfied the large equals odd criterion. I had wondered if *every* base-10 solution contained equal numbers of the digits $1, 2, 4, 5, 7,$ and $8$. This is the case for $k=6$ (where the only admissible digit set is $\{1,2,4,5,7,8\}$), for $k=7$ (where the only admissible sets are $\{0,1,2,4,5,7,8\}$ and $\{1,2,4,5,7,8,9\}$), and for $k=8$. But for larger $k$ the increasing number of bracelets has loosened up the requirements a little, and there are 5 admissible digit sets. I picked two of the promising-seeming ones and quickly found by hand two solutions, both of which wreck any theory that the digits $1, 2, 4, 5, 7, 8$ must all appear the same number of times.

### Acknowledgments

Thanks to Karl Kronenfeld for corrections and many helpful suggestions.

As I've discussed elsewhere, I once wrote a program to enumerate all the possible quilt blocks of a certain type. The quilt blocks in question are, in quilt jargon, sixteen-patch half-square triangles. A half-square triangle, also called a “patch”, is two triangles of fabric sewn together, like this:

Then you sew four of these patches into a four-patch, say like this:

Then to make a sixteen-patch block of the type I was considering, you take four identical four-patch blocks, and sew them together with rotational symmetry, like this:

It turns out that there are exactly 72 different ways to do this. (Blocks equivalent under a reflection are considered the same, as are blocks obtained by exchanging the roles of black and white, which are merely stand-ins for arbitrary colors to be chosen later.) Here is the complete set of 72:

It's immediately clear that some of these resemble one another, sometimes so strongly that it can be hard to tell how they differ, while others are very distinctive and unique-seeming. I wanted to make the computer classify the blocks on the basis of similarity.

My idea was to try to find a way to get the computer to notice which blocks have distinctive components of one color. For example, many blocks have a distinctive diamond shape in the center.

Some have a pinwheel like this:

which also has the diamond in the middle, while others have a different kind of pinwheel with no diamond:

I wanted to enumerate such components and ask the computer to list which blocks contained which shapes, then group them by similarity, the idea being that blocks with the same distinctive components are similar.

The program suite uses a compact notation for blocks and shapes that makes it easy to figure out which blocks contain which distinctive components.

Since each block is made of four identical four-patches, it's enough just to examine the four-patches. Each of the half-square triangle patches can be oriented in two ways:

Here are two of the 12 ways to orient the patches in a four-patch:

Each 16-patch is made of four four-patches, and you must imagine that the four-patches shown above are in the *upper-left* position in the 16-patch. Then symmetry of the 16-patch block means that triangles with the same label are in positions that are symmetric with respect to the entire block. For example, the two triangles labeled `b` are on opposite sides of the block's northwest-southeast diagonal. But there is no symmetry of the full 16-patch block that carries triangle `d` to triangle `g`, because `d` is on the edge of the block, while `g` is in the interior.

Triangles must be colored opposite colors if they are part of the same patch, but other than that there are no constraints on the coloring.

A block might, of course, have patches in both orientations:

All the blocks with diagonals oriented this way are assigned descriptors made from the letters `bbdefgii`.

Once you have chosen one of the 12 ways to orient the diagonals in the four-patch, you still have to color the patches. A descriptor like `bbeeffii` describes the orientation of the diagonal lines in the squares, but it does not describe the way the four patches are colored; there are between 4 and 8 ways to color each sort of four-patch. For example, the `bbeeffii` four-patch shown earlier can be colored in six different ways:

In each case, all four diagonals run from northwest to southeast. (All other ways of coloring this four-patch are equivalent to one of these under one or more of rotation, reflection, and exchange of black and white.)

We can describe a patch by listing the descriptors of the eight triangles, grouped by which triangles form connected regions. For example, the first block above is:

`b/bf/ee/fi/i`

because there's an isolated white `b` triangle, then a black parallelogram made of a `b` and an `f` patch, then a white triangle made from the two white `e` triangles, then another parallelogram made from the black `f` and `i`, and finally, in the middle, the white `i`. (The two white `e` triangles appear to be separated, but when four of these four-patches are joined into a 16-patch block, the two white `e` patches will be adjacent and will form a single large triangle: )

The other five `bbeeffii` four-patches are, in the same order they are shown above:

```
b/b/e/e/f/f/i/i
b/b/e/e/fi/fi
b/bfi/ee/f/i
bfi/bfi/e/e
bf/bf/e/e/i/i
```

All six have `bbeeffii`, but grouped differently depending on the colorings. The second one (`b/b/e/e/f/f/i/i`) has no regions with more than one triangle; the fifth (`bfi/bfi/e/e`) has two large regions of three triangles each, and two isolated triangles. In the latter four-patch, the `bfi` in the descriptor has three letters because the patch has a corresponding distinctive component made of three triangles.

I made up a list of the descriptors for all 72 blocks; I think I did this by hand. (The work directory contains a `blocks` file that maps blocks to their descriptors, but the `Makefile` does not say how to build it, suggesting that it was not automatically built.) From this list one can automatically extract a list of descriptors of interesting shapes: an interesting shape is two or more letters that appear together in some descriptor. (Or it can be the single letter `j`, which is exceptional; see below.) For example, `bffh` represents a distinctive component. It can only occur in a patch that has a `b`, two `f`s, and an `h`, like this one:

and it will only be significant if the `b`, the two `f`s, and the `h` are the same color:

in which case you get this distinctive and interesting-looking hook component.

There is only one block that includes this distinctive hook component; it has descriptor `b/bffh/ee/j`, and looks like this: . But some of the distinctive components are more common. The `ee` component represents the large white half-diamonds on the four sides. A block with `ee` in its descriptor always looks like this:

and the blocks formed from such patches always have a distinctive half-diamond component on each edge, like this:

(The stippled areas vary from block to block, but the blocks with `ee` in their descriptors always have the half-diamonds as shown.)

The blocks listed at http://hop.perl.plover.com/quilt/analysis/images/ee.html all have the `ee` component. There are many differences between them, but they all have the half-diamonds in common.

Other distinctive components have similar short descriptors. The two pinwheels I mentioned above are `gh` and `fi`, respectively; if you look at the list of `gh` blocks and the list of `fi` blocks you'll see all the blocks with each kind of pinwheel.

Descriptor `j` is an exception. It makes an interesting shape all by itself, because any block whose patches have `j` in their descriptor will have a distinctive-looking diamond component in the center. The four-patch looks like this:

so the full sixteen-patch looks like this:

where the stippled parts can vary. A look at the list of blocks with component `j` will confirm that they all have this basic similarity.

I had made a list of the descriptors for each of the 72 blocks, and from this I extracted a list of the descriptors for interesting component shapes. Then it was only a matter of finding the component descriptors in the block descriptors to know which blocks contained which components; if two blocks share two different distinctive components, they probably look somewhat similar.

Then I sorted the blocks into groups, where two blocks were in the same group if they shared two distinctive components. The resulting grouping lists, for each block, which other blocks have at least two shapes in common with it. Such blocks do indeed tend to look quite similar.
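The component-matching step is simple to sketch in code. Here is a Python toy version (the function names are mine, not from the program suite), using descriptors that appear above:

```python
def components(descriptor):
    """Interesting shapes in a block descriptor: any slash-separated group
    of two or more letters, plus the exceptional single letter 'j'."""
    groups = descriptor.split('/')
    shapes = {g for g in groups if len(g) > 1}
    if any('j' in g for g in groups):
        shapes.add('j')
    return shapes

def share_two_components(desc_a, desc_b):
    """Blocks belong in the same group if they share two distinctive shapes."""
    return len(components(desc_a) & components(desc_b)) >= 2

print(sorted(components('b/bf/ee/fi/i')))   # ['bf', 'ee', 'fi']
print(sorted(components('b/bffh/ee/j')))    # ['bffh', 'ee', 'j']
print(share_two_components('b/bf/ee/fi/i', 'b/bfi/ee/f/i'))   # False
```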

This strategy was actually the second thing I tried; the first thing didn't work out well. (I forget just what it was, but I think it involved finding polygons in each block that had white inside and black outside, or vice versa.) I was satisfied enough with this second attempt that I considered the project a success and stopped work on it.

The complete final results were:

- This tabulation of blocks that are somewhat similar
- This tabulation of blocks that are distinctly similar (This is *the* final product; I consider this a sufficiently definitive listing of “similar blocks”.)
- This tabulation of blocks that are extremely similar

And these tabulations of all the blocks with various distinctive components: bd bf bfh bfi cd cdd cdf cf cfi ee eg egh egi fgh fh fi gg ggh ggi gh gi j

It may also be interesting to browse the work directory.

Earlier this week I gave a talk about the Curry-Howard
isomorphism. Talks never go quite
the way you expect. The biggest sticking point was my assertion that
there is no function with the type *a* → *b*. I mentioned this as a
throwaway remark on slide 7, assuming that everyone would agree
instantly, and then we got totally hung up on it for about twenty
minutes.

Part of this was my surprise at discovering that most of the audience (members of the Philly Lambda functional programming group) was not familiar with the Haskell type system. I had assumed that most of the members of a functional programming interest group would be familiar with one of Haskell, ML, or Scala, all of which have the same basic type system. But this was not the case. (Many people are primarily interested in Scheme, for example.)

I think the main problem was that I did not make clear to the audience what Haskell means when it says that a function has type *a* → *b*. At the talk, and then later on Reddit, people asked: what about a function that takes an integer and returns a string? Doesn't it have type *a* → *b*?
If you know one of the HM languages, you know that of course it doesn't; it has type `Int → String`, which is not the same at all. But I wasn't prepared for this confusion and it took me a while to formulate the answer. I think I underestimated the degree to which I have internalized the behavior of Hindley-Milner type systems after twenty years. Next time, I will be better prepared, and will say something like the following:

A function which takes an integer and returns a string does not have the type *a* → *b*; it has the type `Int → String`. You must pass it an integer, and you may only use its return value in a place that makes sense for a string. If *f* has this type, then `3 + f 4` is a compile-time type error because Haskell knows that *f* returns a string, and strings do not work with `+`.

But if *f* had the type *a* → *b*, then `3 + f 4` would be legal, because context requires that *f* return a number, and the type *a* → *b* says that it *can* return a number, because a number is an instance of the completely general type *b*. The type *a* → *b*, in contrast to `Int → String`, means that *b* and *a* are completely unconstrained.

Say function *f* had type *a* → *b*. Then you would be able to use the expression `f x` in any context that was expecting any sort of return value; you could write any or all of:

```
3 + f x
head(f x)
"foo" ++ f x
True && f x
```

and they would all type check correctly, regardless of the type of *x*. In the first line, `f x` would return a number; in the second line *f* would return a list; in the third line it would return a string, and in the fourth line it would return a boolean. And in each case *f* would have to do what was required regardless of the type of *x*, so without even looking at *x*. But how could you possibly write such a function *f*? You can't; it's impossible.

Contrast this with the identity function `id`, which has type *a* → *a*. This says that `id` always returns a value whose type is the same as that of its argument. So you can write

```
3 + id x
```

as long as *x* has the right type for `+`, and you can write

```
head(id x)
```

as long as `x` has the right type for `head`, and so on. But for *f* to have the type *a* → *b*, all those would have to work regardless of the type of the argument to *f*. And there is no way to write such an *f*.

Actually I wonder now if part of the problem is that we like to write
*a* → *b* when what we really mean is the type ∀a.∀b.*a* → *b*. Perhaps making
the quantifiers explicit would clear things up? I suppose it probably
wouldn't have, at least in this case.

The issue is a bit complicated by the fact that the function

```
loop :: a -> b
loop x = loop x
```

*does* have the type *a* → *b*, and, in a language with exceptions, `throw` has that type also; or consider the Haskell function

```
foo :: a -> b
foo x = undefined
```

Unfortunately, just as I thought I was getting across the explanation
of why there can be no function with type *a* → *b*, someone brought up
exceptions and I had to mutter and look at my shoes. (You can also take
the view that these functions have type *a* → ⊥, but the logical
principle ⊥ → *b* is unexceptionable.)

In fact, experienced practitioners will realize, the instant the type *a* → *b* appears, that they have written a function that never returns. Such an example was directly responsible for my own initial interest in functional programming and type systems; I read a 1992 paper (“An anecdote about ML type inference”) by Andrew R. Koenig in which he described writing a merge sort function whose type was reported (by the SML type inferencer) as `[a] -> [b]`, and the reason was that it had a bug that would cause it to loop forever on any nonempty list. I came back from that conference convinced that I must learn ML, and *Higher-Order Perl* was a direct (although distant) outcome of that conviction.

Any discussion of the Curry-Howard isomorphism, using Haskell as an example, is somewhat fraught with trouble, because Haskell's type logic is utterly inconsistent. In addition to the examples above, in Haskell one can write

```
fix :: (a -> a) -> a
fix f = let x = fix f
        in f x
```

and, as a statement of logic, ((*a* → *a*) → *a*) is patently false. This might be an argument in favor of the Total Functional Programming suggested by D. A. Turner and others.


A few weeks ago I asked people to predict, without trying it first, what this would print:

```
perl -le 'print(two + two == five ? "true" : "false")'
```

(If you haven't seen this yet, I recommend that you guess, and then test your guess, before reading the rest of this article.)

People familiar with Perl guess that it will print `true`; that is what I guessed. The reasoning is as follows: Perl is willing to treat the unquoted strings `two` and `five` as strings, as if they had been quoted, and is also happy to use the `+` and `==` operators on them, converting the strings to numbers in its usual way. If the strings had looked like `"2"` and `"5"` Perl would have treated them as 2 and 5, but as they don't look like decimal numerals, Perl interprets them as zeroes. (Perl wants to issue a warning about this, but the warning is not enabled by default.) Since the `two` and `five` are treated as zeroes, the result of the `==` comparison is true, and the string `"true"` should be selected and printed.

So far this is a little bit odd, but not excessively odd; it's the sort of thing you expect from programming languages, all of which more or less suck. For example, Python's behavior, although different, is about equally peculiar. Although Python does require that the strings `two` and `five` be quoted, it is happy to do its own peculiar thing with `"two" + "two" == "five"`, which happens to be false: in Python the `+` operator is overloaded and has completely different behaviors on strings and numbers, so that while in Perl `"2" + "2"` is the number 4, in Python it is the string `"22"`, and `"two" + "two"` yields the string `"twotwo"`. Had the program above actually printed `true`, as I expected it would, or even `false`, I would not have found it remarkable.

However, this is not what the program does do. The explanation of two paragraphs earlier is totally wrong. Instead, the program prints nothing, and the reason is incredibly convoluted and bizarre.

First, you must know that `print` has an optional first argument. (I have plans for an article about how optional first arguments are almost always a bad move, but contrary to my usual practice I will not insert it here.) In Perl, the `print` function can be invoked in two ways:

```
print HANDLE $a, $b, $c, …;
print $a, $b, $c, …;
```

The former prints out the list `$a, $b, $c, …` to the filehandle `HANDLE`; the latter uses the default handle, which typically points at the terminal. How does Perl decide which of these forms is being used? Specifically, in the second form, how does it know that `$a` is one of the items to be printed, rather than a variable containing the filehandle to print to?

The answer to this question is further complicated by the fact that the `HANDLE` in the first form could be either an unquoted string, which is the name of the handle to print to, or it could be a variable containing a filehandle value. Both of these `print`s should do the same thing:

```
my $handle = \*STDERR;
print STDERR $a, $b, $c;
print $handle $a, $b, $c;
```

Perl's method to decide whether a particular `print` uses an explicit
or the default handle is a somewhat complicated heuristic. The basic
rule is that the filehandle, if present, can be distinguished because
its trailing comma is omitted. But if the filehandle were allowed to
be the result of an arbitrary expression, it might be difficult for
the parser to decide where there was a comma; consider the
hypothetical expression:

```
print $a += EXPRESSION, $b $c, $d, $e;
```

Here the intention is that the `$a += EXPRESSION, $b` expression
calculates the filehandle value (which is actually retrieved from `$b`, the
`$a += …` part being executed only for its side effect) and the
remaining `$c, $d, $e` are the values to be printed. To allow this
sort of thing would be way too confusing to both Perl and the
programmer. So there is the further rule that the filehandle
expression, if present, must be short: either a simple scalar
variable such as `$fh`, or a bare unquoted string that is in the right
format for a filehandle name, such as `HANDLE`. Then the parser need
only peek ahead a token or two to see if there is an upcoming comma.

So for example, in

```
print STDERR $a, $b, $c;
```

the `print` is immediately followed by `STDERR`, which could be a
filehandle name, and `STDERR` is not followed by a comma, so `STDERR`
is taken to be the name of the output handle. And in

```
print $x, $a, $b, $c;
```

the `print` is immediately followed by the simple scalar value `$x`,
but this `$x` is followed by a comma, so it is considered one of the
things to be printed, and the target of the `print` is the default
output handle.

In

```
print STDERR, $a, $b, $c;
```

Perl has a puzzle: `STDERR` looks like a filehandle, but it is
followed by a comma. This is a compile-time error; Perl complains “No
comma allowed after filehandle” and aborts. If you want to print the
literal string `STDERR`, you must quote it; if you want to print A, B,
and C to the standard error handle, you must omit the first comma.

Now we return to the original example.

```
perl -le 'print(two + two == five ? "true" : "false")'
```

Here Perl sees the unquoted string `two`, which could be a filehandle
name, and which is not followed by a comma. So it takes the first
`two` to be the output handle name. Then it evaluates the expression

```
+ two == five ? "true" : "false"
```

and obtains the value `true`. (The leading `+` is a unary plus
operator, which is a no-op. The bare `two` and `five` are taken to be
string constants, which, when compared with the numeric `==`
operator, are considered to be numerically zero, eliciting the same
warning that I mentioned earlier that I had not enabled. Thus the
comparison Perl actually does is 0 == 0, which is true, and the
resulting string is `true`.)

This value, the string `true`, is then printed to the filehandle named
`two`. Had we previously opened such a filehandle, say with

```
open two, ">", "output-file";
```

then the output would have been sent to the filehandle as usual.
Printing to a non-open filehandle elicits an optional warning from
Perl, but as I mentioned, I have not enabled warnings, so the `print`
silently fails, yielding a false value.

Had I enabled those optional warnings, we would have seen a plethora of them:

```
Unquoted string "two" may clash with future reserved word at -e line 1.
Unquoted string "two" may clash with future reserved word at -e line 1.
Unquoted string "five" may clash with future reserved word at -e line 1.
Name "main::two" used only once: possible typo at -e line 1.
Argument "five" isn't numeric in numeric eq (==) at -e line 1.
Argument "two" isn't numeric in numeric eq (==) at -e line 1.
print() on unopened filehandle two at -e line 1.
```

(The first four are compile-time warnings; the last three are issued
at execution time.) The crucial warning is the one at the end,
advising us that the output of `print` was directed to the filehandle
`two`, which was never opened for output.

[ Addendum 20140718: I keep thinking of the following remark of Edsger W. Dijkstra:

[This phenomenon] takes one of two different forms: one programmer places a one-line program on the desk of another and … says, "Guess what it does!" From this observation we must conclude that this language as a tool is an open invitation for clever tricks; and while exactly this may be the explanation for some of its appeal, viz., to those who like to show how clever they are, I am sorry, but I must regard this as one of the most damning things that can be said about a programming language.

But my intent is different from what Dijkstra describes. His programmer is proud, but I am disgusted. Incidentally, I believe that Dijkstra was discussing APL here. ]


[ Summary: I gave a talk Monday night on the Curry-Howard isomorphism; my talk slides are online. ]

I sent several proposals to !!con, a conference of ten-minute talks. One of my proposals was to explain the Curry-Howard isomorphism in ten minutes, but the conference people didn't accept it. They said that they had had four or five proposals for talks about the Curry-Howard isomorphism, and didn't want to accept more than one of them.

The CHI talk they did accept turned out to be very different from the one I had wanted to give; it discussed the Coq theorem-proving system. I had wanted to talk about the basic correspondence between pair types, union types, and function types on the one hand, and reasoning about logical conjunctions, disjunctions, and implications on the other hand, and the !!con speaker didn't touch on this at all.

But mathematical logic and programming language types turn out to be the same! A type in a language like Haskell can be understood as a statement of logic, and the statement will be true if and only if there is actually a value with the corresponding type. Moreover, if you have a proof of the statement, you can convert the proof into a value of the corresponding type, or conversely, if you have a value with a certain type you can convert it into a proof of the corresponding statement. The programming language features for constructing or using values with function types correspond exactly to the logical methods for proving or using statements with implications; similarly, pair types correspond to logical conjunction, union types to logical disjunction, and exceptions to logical negation. I think this is incredible. I was amazed the first time I heard of it (Chuck Liang told me sometime around 1993) and I'm still amazed.
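To make the correspondence concrete, here is a tiny example in Lean (my talk used other material; this is only an illustration): the term that extracts the first component of a pair is, read as a proof, exactly the proof that a conjunction implies its first conjunct.

```lean
-- Read as a proof: A ∧ B → A, proved by taking the left conjunct.
example (A B : Prop) : A ∧ B → A :=
  fun p => p.1

-- Read as a program: the same term is just the first projection
-- on a pair type.
def first (α β : Type) : α × β → α :=
  fun p => p.1
```

The two definitions are textually identical except for the types; that is the isomorphism in miniature.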

Happily Philly Lambda, a Philadelphia-area functional programming group, had recently come back to life, so I suggested that I give them a longer talk about the Curry-Howard isomorphism, and they agreed.

I gave the talk yesterday, and the materials are online. I'm not sure how easy they will be to understand without my commentary, but it might be worth a try.

If you're interested and want to look into it in more detail, I
suggest you check out Sørensen and Urzyczyn's *Lectures on the
Curry-Howard Isomorphism*. It was published as an expensive
yellow-cover book by Springer, but free copies of the draft are still
available.

Here's a Perl quiz that I confidently predict *nobody* will get right.
Without trying it first, what does the following program print?

```
perl -le 'print(two + two == five ? "true" : "false")'
```

Last night I gave a talk for the New York Perl Mongers, and got to see a number of people that I like but don't often see. Among these was Michael Fischer, who told me of a story about myself that I had completely forgotten, but I think will be of general interest.


The front end of the story is this: Michael first met me at some conference, shortly after the publication of *Higher-Order Perl*, and people were coming up to me and presenting me with copies of the book to sign. In many cases these were people who had helped me edit the book, or who had reported printing errors; for some of those people I would find the error in the text that they had reported, circle it, and write a thank-you note on the same page. Michael did not have a copy of my book, but for some reason he had with him a copy of *Oulipo Compendium*, and he presented this to me to sign instead.

Oulipo is a society of writers, founded in 1960, who pursue
“constrained writing”. Perhaps the best-known example is the
lipogrammatic novel *La Disparition*, written in 1969 by Oulipo
member Georges Perec, entirely without the use of the letter *e*.
Another possibly well-known example is the *Exercises in Style*
of Raymond Queneau, which retells the same vapid anecdote in 99
different styles. The book that Michael put in front of me to sign is
a compendium of anecdotes, examples of Oulipan work, and other
Oulipalia.

What Michael did not realize, however, was that the gods of fate were
handing me an opportunity. He says that I glared at him for a moment,
then flipped through the pages, *found the place in the book where I
was mentioned*, circled it, and signed that.

The other half of that story is how I happened to be mentioned in
*Oulipo Compendium*.

Back in the early 1990s I did a few text processing projects which would be trivial now, but which were unusual at the time, in a small way. For example, I constructed a concordance of the King James Bible, listing, for each word, the number of every verse in which it appeared. This was a significant effort at the time; the Bible was sufficiently large (around five megabytes) that I normally kept the files compressed to save space. This project was surprisingly popular, and I received frequent email from strangers asking for copies of the concordance.

Another project, less popular but still interesting, was an anagram
dictionary. The word list from Webster's Second International
dictionary was available, and it was an easy matter to locate all the
anagrams in it, and compile a file. Unlike the Bible concordance,
which I considered inferior to simply running `grep`, I still have the
anagram dictionary. It begins:

```
aal ala
aam ama
Aarhus (See `arusha')
Aaronic (See `Nicarao')
Aaronite aeration
Aaru aura
```

And ends:

```
zoosporic sporozoic
zootype ozotype
zyga gazy
zygal glazy
```

The cross-references are to save space. When two words are anagrams of one another, both are listed in both places. But when three or more words are anagrams, the words are listed in one place, with cross-references in the other places, so for example:

```
Ateles teasel stelae saltee sealet
saltee (See `Ateles')
sealet (See `Ateles')
stelae (See `Ateles')
teasel (See `Ateles')
```

saves 52 characters over the unabbreviated version. Even with this optimization, the complete anagram dictionary was around 750 kilobytes, a significant amount of space in 1991. A few years later I generated an improved version, which dispensed with the abbreviation, by that time unnecessary, and which attempted, successfully I thought, to score the anagrams according to interestingness. But I digress.

One day in August of 1994, I received a query about the anagram dictionary, including a question about whether it could be used in a certain way. I replied in detail, explaining what I had done, how it could be used, and what could be done instead, and the result was a reply from Harry Mathews, another well-known member of the Oulipo, of which I had not heard before. Mr. Mathews, correctly recognizing that I would be interested, explained what he was really after:

A poetic procedure created by the late Georges Perec falls into the latter category. According to this procedure, only the 11 commonest letters in the language can be used, and all have to be used before any of them can be used again. A poem therefore consists of a series of 11 multi-word anagrams of, in French, the letters e s a r t i n u l o c (a c e i l n o r s t). Perec discovered only one one-word anagram for the letter-group, "ulcerations", which was adopted as a generic name for the procedure.

Mathews wanted, not exactly an anagram dictionary, but a list of words acceptable for the English version of "ulcerations". They should contain only the letters a d e h i l n o r s t, at most once each. In particular, he wanted a word containing precisely these eleven letters, to use as the translation of "ulcerations".

Producing the requisite list was much easier than producing the anagram dictionary itself, so I quickly did it and sent it back; it looked like this:

```
a A a
d D d
e E e
h H h
i I i
l L l
n N n
o O o
r R r
s S s
t T t
ad ad da
ae ae ea
ah Ah ah ha
...
lost lost lots slot
nors sorn
nort torn tron
nost snot
orst sort
adehl heald
adehn henad
adehr derah
adehs Hades deash sadhe shade
...
deilnorst nostriled
ehilnorst nosethirl
adehilnort threnodial
adehilnrst disenthral
aehilnorst hortensial
```

The leftmost column is the alphabetical list of letters. This is so that if you find yourself needing to use the letters 'a d e h s' at some point in your poem, you can jump to that part of the list and immediately locate the words containing exactly those letters. (It provides somewhat less help for discovering the shorter words that contain only some of those letters, but there is a limit to how much can be done with static files.)

As can be seen at the end of the list, there were three words that each used ten of the eleven required letters: “hortensial”, “threnodial”, and “disenthral”, but none with all eleven. However, Mathews replied:

You have found the solution to my immediate problem: "threnodial" may only have 10 letters, but the 11th letter is "s". So, as an adjectival noun, "threnodials" becomes the one and only generic name for English "Ulcerations". It is not only less harsh a word than the French one but a sorrowfully appropriate one, since the form is naturally associated with Georges Perec, who died 12 years ago at 46 to the lasting consternation of us all.

(A threnody is a hymn of mourning.)

A few years later, the *Oulipo Compendium* appeared, edited by
Mathews, and the article on Threnodials mentions my assistance. And
so it was that when Michael Fischer handed me a copy, I was able to
open it up to the place where I was mentioned.

[ Addendum 20140428: Thanks to Philippe Bruhat for some corrections: neither Perec nor Mathews was a founding member of Oulipo. ]

Last night I gave a talk for the New York Perl Mongers, and got to see a number of people that I like but don't often see. Among these was Michael Fischer, who told me of a story about myself that I had completely forgotten, but I think will be of general interest.

Order Oulipo Compendium with kickback no kickback |

The front end of the story is this: Michael first met me at some conference, shortly after the publication of Higher-Order Perl, and people were coming up to me and presenting me with copies of the book to sign. In many cases these were people who had helped me edit the book, or who had reported printing errors; for some of those people I would find the error in the text that they had reported, circle it, and write a thank-you note on the same page. Michael did not have a copy of my book, but for some reason he had with him a copy of Oulipo Compendium, and he presented this to me to sign instead.

Oulipo is a society of writers, founded in 1960, who pursue
“constrained writing”. Perhaps the best-known example is the
lipogrammatic novel *La Disparition*, written in 1969 by Oulipo
member Georges Perec, entirely without the use of the letter *e*.
Another possibly well-known example is the *Exercises in Style*
of Raymond Queneau, which retells the same vapid anecdote in 99
different styles. The book that Michael put in front of me to sign is
a compendium of anecdotes, examples of Oulipan work, and other
Oulipalia.

What Michael did not realize, however, was that the gods of fate were
handing me an opportunity. He says that I glared at him for a moment,
then flipped through the pages, *found the place in the book where I
was mentioned*, circled it, and signed that.

The other half of that story is how I happened to be mentioned in
*Oulipo Compendium*.

Back in the early 1990s I did a few text processing projects which would be trivial now, but which were unusual at the time, in a small way. For example, I constructed a concordance of the King James Bible, listing, for each word, the number of every verse in which it appeared. This was a significant effort at the time; the Bible was sufficiently large (around five megabytes) that I normally kept the files compressed to save space. This project was surprisingly popular, and I received frequent email from strangers asking for copies of the concordance.

Another project, less popular but still interesting, was an anagram
dictionary. The word list from Webster's Second International
dictionary was available, and it was an easy matter to locate all the
anagrams in it, and compile a file. Unlike the Bible concordance,
which I considered inferior to simply running `grep`

, I still have the
anagram dictionary. It begins:

```
aal ala
aam ama
Aarhus (See `arusha')
Aaronic (See `Nicarao')
Aaronite aeration
Aaru aura
```

And ends:

```
zoosporic sporozoic
zootype ozotype
zyga gazy
zygal glazy
```

The crossreferences are to save space. When two words are anagrams of one another, both are listed in both places. But when three or more words are anagrams, the words are listed in one place, with cross-references in the other places, so for example:

```
Ateles teasel stelae saltee sealet
saltee (See `Ateles')
sealet (See `Ateles')
stelae (See `Ateles')
teasel (See `Ateles')
```

saves 52 characters over the unabbreviated version. Even with this optimization, the complete anagram dictionary was around 750 kilobytes, a significant amount of space in 1991. A few years later I generated an improved version, which dispensed with the abbreviation, by that time unnecessary, and which attempted, sucessfully I thought, to score the anagrams according to interestingness. But I digress.

One day in August of 1994, I received a query about the anagram dictionary, including a question about whether it could be used in a certain way. I replied in detail, explaining what I had done, how it could be used, and what could be done instead, and the result was a reply from Harry Mathews, another well-known member of the Oulipo, of which I had not heard before. Mr. Mathews, correctly recognizing that I would be interested, explained what he was really after:

A poetic procedure created by the late Georges Perec falls into the latter category. According to this procedure, only the 11 commonest letters in the language can be used, and all have to be used before any of them can be used again. A poem therefore consists of a series of 11 multi-word anagrams of, in French, the letters e s a r t i n u l o c (a c e i l n o r s t). Perec discovered only one one-word anagram for the letter-group, "ulcerations", which was adopted as a generic name for the procedure.

Mathews wanted, not exactly an anagram dictionary, but a list of words acceptable for the English version of "ulcerations". They should contain only the letters a d e h i l n o r s t, at most once each. In particular, he wanted a word containing precisely these eleven letters, to use as the translation of "ulcerations".

Producing the requisite list was much easier then producing the anagram dictionary iself, so I quickly did it and sent it back; it looked like this:

```
a A a
d D d
e E e
h H h
i I i
l L l
n N n
o O o
r R r
s S s
t T t
ad ad da
ae ae ea
ah Ah ah ha
...
lost lost lots slot
nors sorn
nort torn tron
nost snot
orst sort
adehl heald
adehn henad
adehr derah
adehs Hades deash sadhe shade
...
deilnorst nostriled
ehilnorst nosethirl
adehilnort threnodial
adehilnrst disenthral
aehilnorst hortensial
```

The leftmost column is the alphabetical list of letters. This is so that if you find yourself needing to use the letters 'a d e h s' at some point in your poem, you can jump to that part of the list and immediately locate the words containing exactly those letters. (It provides somewhat less help for discovering the shorter words that contain only some of those letters, but there is a limit to how much can be done with static files.)

As can be seen at the end of the list, there were three words that each used ten of the eleven required letters: “hortensial”, “threnodial”, “disenthral”, but none with all eleven. However, Mathews replied:

You have found the solution to my immediate problem: "threnodial" may only have 10 letters, but the 11th letter is "s". So, as an adjectival noun, "threnodials" becomes the one and only generic name for English "Ulcerations". It is not only less harsh a word than the French one but a sorrowfully appropriate one, since the form is naturally associated with Georges Perec, who died 12 years ago at 46 to the lasting consternation of us all.

(A threnody is a hymn of mourning.)

A few years later, the *Oulipo Compendium* appeared, edited by
Mathews, and the article on Threnodials mentions my assistance. And
so it was that when Michael Fischer handed me a copy, I was able to
open it up to the place where I was mentioned.

[ Addendum 20140428: Thanks to Philippe Bruhat for some corrections: neither Perec nor Mathews was a founding member of Oulipo. ]

Last night I gave a talk for the New York Perl Mongers, and got to see a number of people that I like but don't often see. Among these was Michael Fischer, who told me of a story about myself that I had completely forgotten, but I think will be of general interest.

Order Oulipo Compendium with kickback no kickback |

The front end of the story is this: Michael first met me at some conference, shortly after the publication of Higher-Order Perl, and people were coming up to me and presenting me with copies of the book to sign. In many cases these were people who had helped me edit the book, or who had reported printing errors; for some of those people I would find the error in the text that they had reported, circle it, and write a thank-you note on the same page. Michael did not have a copy of my book, but for some reason he had with him a copy of Oulipo Compendium, and he presented this to me to sign instead.

Oulipo is a society of writers, founded in 1960, who pursue
“constrained writing”. Perhaps the best-known example is the
lipogrammatic novel *La Disparition*, which is written entirely
without the use of the letter *e*. It was written in 1969 by Georges
Perec, a founding member of Oulipo. Another possible well-known
example is the *Exercises in Style* of Raymond Queneau. The book
that Michael put in front of me to sign is a compendium of
anecdotes, examples of Oulipan work, and other Oulipalia.

What Michael did not realize, however, was that the gods of fate were
handing me an opportunity. He says that I glared at him for a moment,
then flipped through the pages, *found the place in the book where I
was mentioned*, circled it, and signed that.

The other half of that story is how I happened to be mentioned in
*Oulipo Compendium*.

Back in the early 1990s I did a few text processing projects which would be trivial now, but which were unusual at the time, in a small way. For example, I constructed a concordance of the King James Bible, listing, for each word, the number of every verse in which it appeared. This was a significant effort at the time; the Bible was sufficiently large enough, around five megabytes, that I normally kept the files compressed to save space. This project was surprisingly popular, and I received frequent email from strangers asking for copies of the concordance.

Another project, less popular but still interesting, was an anagram
dictionary. The word list from Webster's Second International
dictionary was available, and it was an easy matter to locate all the
anagrams in it, and compile a file. Unlike the Bible concordance,
which I considered inferior to simply running `grep`

, I still have the
anagram dictionary. It begins:

```
aal ala
aam ama
Aarhus (See `arusha')
Aaronic (See `Nicarao')
Aaronite aeration
Aaru aura
```

And ends:

```
zoosporic sporozoic
zootype ozotype
zyga gazy
zygal glazy
```

The crossreferences are to save space. When two words are anagrams of one another, both are listed in both places. But when three or more words are anagrams, the words are listed in one place, with cross-references in the other places, so for example:

```
Ateles teasel stelae saltee sealet
saltee (See `Ateles')
sealet (See `Ateles')
stelae (See `Ateles')
teasel (See `Ateles')
```

saves 52 characters over the unabbreviated version. Even with this optimization, the complete anagram dictionary was around 750 kilobytes, a significant amount of space in 1991. A few years later I generated an improved version, which dispensed with the abbreviation, by that time unnecessary, and which attempted, sucessfully I thought, to score the anagrams according to interestingness. But I digress.

One day in August of 1994, I received a query about the anagram dictionary, including a question about whether it could be used in a certain way. I replied in detail, and the result was a reply from Harry Mathews, another founding member of the Oulipo, of which I had not heard before. Mr. Mathews, correctly recognizing that I would be interested, explained in detail what he was really after:

A poetic procedure created by the late Georges Perec falls into the latter category. According to this procedure, only the 11 commonest letters in the language can be used, and all have to be used before any of them can be used again. A poem therefore consists of a series of 11 multi-word anagrams of, in French, the letters e s a r t i n u l o c (a c e i l n o r s t). Perec discovered only one one-word anagram for the letter-group, "ulcerations", which was adopted as a generic name for the procedure.

Mathews wanted, not exactly an anagram dictionary, but a list of words acceptable for the English version of "ulcerations". They should contain only the letters a d e h i l n o r s t, at most once each. In particular, he wanted a word containing precisely these eleven letters, to use as the translation of "ulcerations".

Producing the requisite list was much easier than producing the anagram dictionary itself, so I quickly did it and sent it back; it looked like this:

```
a A a
d D d
e E e
h H h
i I i
l L l
n N n
o O o
r R r
s S s
t T t
ad ad da
ae ae ea
ah Ah ah ha
...
lost lost lots slot
nors sorn
nort torn tron
nost snot
orst sort
adehl heald
adehn henad
adehr derah
adehs Hades deash sadhe shade
...
deilnorst nostriled
ehilnorst nosethirl
adehilnort threnodial
adehilnrst disenthral
aehilnorst hortensial
```

The leftmost column is the alphabetical list of letters. This is so that if you find yourself needing to use the letters 'a d e h s' at some point in your poem, you can jump to that part of the list and immediately locate the words containing exactly those letters. (It provides somewhat less help for discovering the shorter words that contain only some of those letters, but there is a limit to how much can be done with static files.)
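The filter itself is simple to state: a word qualifies if its letters are a subset of a d e h i l n o r s t with no letter used twice, and the leftmost-column key is just the word's letters in sorted order. A sketch in Python (the word list here is illustrative, not the real dictionary):

```python
ALLOWED = set("adehilnorst")

def acceptable(word):
    # A word qualifies if it uses only the eleven permitted letters,
    # each at most once.
    w = word.lower()
    return set(w) <= ALLOWED and len(set(w)) == len(w)

def signature(word):
    # The leftmost-column key: the word's letters in alphabetical order.
    return "".join(sorted(word.lower()))

words = ["shade", "threnodial", "disenthral", "ulcerations", "hello", "torn"]
entries = [(signature(w), w) for w in words if acceptable(w)]
```

Sorting `entries` then groups words by letter-set, exactly as in the list above.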

As can be seen at the end of the list, there were three words that each used ten of the eleven required letters: “hortensial”, “threnodial”, “disenthral”, but none with all eleven. However, Mathews replied:

You have found the solution to my immediate problem: "threnodial" may only have 10 letters, but the 11th letter is "s". So, as an adjectival noun, "threnodials" becomes the one and only generic name for English "Ulcerations". It is not only less harsh a word than the French one but a sorrowfully appropriate one, since the form is naturally associated with Georges Perec, who died 12 years ago at 46 to the lasting consternation of us all.

(A threnody is a hymn of mourning.)

A few years later, the *Oulipo Compendium* appeared, edited by
Mathews, and the article on Threnodials mentions my assistance. And
so it was that when Michael Fischer handed me a copy, I was able to
open it up to the place where I was mentioned.

Intuitionistic logic is deeply misunderstood by people who have not studied it closely; such people often seem to think that the intuitionists were just a bunch of lunatics who rejected the law of the excluded middle for no reason. One often hears that intuitionistic logic rejects proof by contradiction. This is only half true. It arises from a typically classical misunderstanding of intuitionistic logic.

Intuitionists are perfectly happy to accept a reductio ad absurdum proof of the following form:

$$(P\to \bot)\to \lnot P$$

Here $\bot$ means an absurdity or a contradiction; $P\to\bot$ means that assuming $P$
leads to absurdity, and $(P\to\bot)\to\lnot P$ means that if assuming $P$
leads to absurdity, then you can conclude that $P$ is false. This
is a classic proof by contradiction, and it is intuitionistically
valid. In fact, in many formulations of intuitionistic logic, $\lnot P$ is
*defined* to mean $P\to\bot$.

What is rejected by intuitionistic logic is the similar-seeming claim that:

$$(\lnot P\to \bot)\to P$$

This says that if assuming $\lnot P$ leads to absurdity, you can conclude
that $P$ is true. This is *not* intuitionistically valid.

This is where people become puzzled if they only know classical logic. “But those are the same thing!” they cry. “You just have to replace $P$ with $\lnot P$ in the first one, and you get the second.”

Not quite. If you replace $P$ with $\lnot P$ in the first one, you do not get the second one; you get:

$$(\lnot P\to \bot)\to \lnot \lnot P$$

People familiar with classical logic are so used to shuffling the $\lnot$ signs around and treating $\lnot\lnot P$ the same as $P$ that they often don't notice when they are doing it. But in intuitionistic logic, $P$ and $\lnot\lnot P$ are not the same. $\lnot\lnot P$ is weaker than $P$, in the sense that from $P$ one can always conclude $\lnot\lnot P$, but not always vice versa. Intuitionistic logic is happy to agree that if $\lnot P$ leads to absurdity, then $\lnot\lnot P$ holds. But it does not agree that this is sufficient to conclude $P$.

As is often the case, it may be helpful to try to understand
intuitionistic logic as talking about provability instead of truth.
In classical logic, $P$ means that $P$ is true and $\lnot P$
means that $P$ is false. If $P$ is not false it is true, so $\lnot\lnot P$ and $P$
mean the same thing. But in intuitionistic logic $P$ means that $P$ is
*provable*, and $\lnot P$ means that $P$ is not provable. $\lnot\lnot P$ means that it is
impossible to prove that $P$ is not provable.

If $P$ is provable, it is certainly impossible to prove that $P$ is not provable. So $P$ implies $\lnot\lnot P$. But just because it is impossible to prove that there is no proof of $P$ does not mean that $P$ itself is provable, so $\lnot\lnot P$ does not imply $P$.

Similarly,

$$(P\to \bot)\to \lnot P $$

means that if a proof of $P$ would lead to absurdity, then we may conclude that there cannot be a proof of $P$. This is quite valid. But

$$(\lnot P\to \bot)\to P$$

means that if assuming that a proof of $P$ is impossible leads to absurdity, there must be a proof of $P$. But this itself isn't a proof of $P$, nor is it enough to prove $P$; it only shows that there is no proof that proofs of $P$ are impossible.
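Under the Curry–Howard reading, the valid direction $P\to\lnot\lnot P$ corresponds to a program one can actually write down; this is a standard observation, sketched here as a typed λ-term:

$$\lambda p.\,\lambda f.\; f\,p \;:\; P \to ((P\to\bot)\to\bot)$$

Given evidence $p$ for $P$ and a hypothetical refutation $f : P\to\bot$, applying $f$ to $p$ produces the absurdity. No corresponding term exists with the type $\lnot\lnot P\to P$, which is exactly why the converse fails.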

You sometimes hear people claim that there is no perfectly efficient machine, that every machine wastes some of its input energy in noise or friction.

However, there is a counterexample. An electric space heater is perfectly efficient. Its purpose is to heat the space around it, and 100% of the input energy is applied to this purpose. Even the electrical energy lost to resistance in the cord you use to plug it into the wall is converted to heat.

Wait, you say, the space heater does waste some of its energy. The coils heat up, and they emit not only heat, but also light, which is useless, being a dull orange color. Ah! But what happens when that light hits the wall? Most of it is absorbed, and heats up the wall. Some is reflected, and heats up a different wall instead.

Similarly, a small fraction of the energy is wasted in making a quiet humming noise—until the sound waves are absorbed by the objects in the room, heating them slightly.

Now it's true that some heat is lost when it's radiated from the
*outside* of the walls and ceiling. But some is also lost whenever
you open a window or a door, and you can't blame the space heater for
your lousy insulation. It heated the room as much as possible under
the circumstances.

So remember this when you hear someone complain that incandescent light bulbs are wasteful of energy. They're only wasteful in warm weather. In cold weather, they're free.

This week there has been an article floating around about “What happens when placeholder text doesn't get replaced.” This reminds me of the time I made this mistake myself.

In 1996 I was programming a web site for a large company which sold cosmetics and skin care products in hundreds of department stores and malls around the country. The technology to actually buy the stuff online wasn't really mature yet, and the web was new enough that the company was worried that selling online would anger the retail channels. They wanted a web page where you would put in your location and it would tell you where the nearby stores were.

The application was simple; it accepted a city and state, looked them up in an on-disk hash table, and then returned a status code to the page generator. The status code was for internal use only. For example, if you didn't fill in the form completely, the program would return the status code `MISSING`, which would trigger the templating engine to build a page with a suitable complaint message.

If the form was filled out correctly, but there was no match in the database, the program would return a status code that the front end translated to a suitably apologetic message. The status code I selected for this was `BACKWATER`.

Which was all very jolly, until one day there was a program bug and some user in Podunk, Iowa submitted the form and got back a page with `BACKWATER` in giant letters.

Anyone could have seen *that* coming; I have no excuse.

I wrote some time ago about Moonpig's use of GUIDs: every significant object was given a unique ID. I said that this was a useful strategy I had only learned from Rik, and I was surprised to see how many previously tricky programming problems became simpler once the GUIDs were available. Some of these tricky problems are artifacts of Perl's somewhat limited implementation of hashes; hash keys must be strings, and the GUID gives you an instantaneous answer to any question about what the keys should be.

But it reminds me of a similar maxim which I was thinking about just yesterday: Every table in a relational database should have a record ID field. It often happens that I am designing some table and there is no obvious need for such a field. I now always put one in anyway, having long ago learned that I will inevitably want it for something.

Most recently I was building a table to record which web pages were being currently visited by which users. A record in the table is naturally identified by the pair of user ID and page URL; it is not clear that it needs any further keys.

But I put in a record ID anyway, because my practice is to always put
in a record ID, and sure enough, within a few hours I was glad it
was there. The program I was writing has not yet needed to use the
record IDs. But to *test* the program I needed to insert and manipulate
some test records, and it was much easier to write this:

```
update table set ... where record_id = 113;
```

than this:

```
update table set ... where user_id = 97531 and url = 'http://hostname:port/long/path/that/is/hard/to/type';
```

If you ever end up with two objects in the program that represent record sets and you need to merge or intersect them synthetically, having the record ID numbers automatically attached to the records makes this quite trivial, whereas if you don't have them it is a pain in the butt. You should never be in such a situation, perhaps, but stranger things have happened. Just yesterday I found myself writing

```
function relativize(pathPat) {
    var dummyA = document.createElement('a');
    dummyA.href = document.URL;
    return "http://" + dummyA.host + pathPat;
}
```

which nobody should have to do either, and yet there I was. Sometimes programming can be a dirty business.

During the bootstrapping of the user-url table project some records with bad URLs were inserted by buggy code, and I needed to remove them. The URLs all ended in `%` signs, and there's probably some easy way to delete all the records where the URL ends in a `%` sign. But I couldn't remember the syntax offhand, and looking up the escape sequence for `LIKE` clauses would have taken a lot longer than what I did do, which was:

```
delete from table where record_id in (43, 47, 49)
```

So the rule is: giving things ID numbers should be the default, because they are generally useful, like handles you can use to pick things up with. You need a good reason to omit them.
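The rule can be sketched concretely with SQLite (the table and column names here are illustrative, not the actual schema): the natural key still gets a UNIQUE constraint, but a surrogate `record_id` comes along essentially for free.

```python
import sqlite3

# A table naturally keyed by (user_id, url), with a surrogate record_id
# added anyway; the natural key is still enforced with UNIQUE.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE visits (
        record_id INTEGER PRIMARY KEY,
        user_id   INTEGER NOT NULL,
        url       TEXT NOT NULL,
        UNIQUE (user_id, url)
    )
""")
conn.execute("INSERT INTO visits (user_id, url) VALUES (?, ?)",
             (97531, "http://hostname:8080/long/path/that/is/hard/to/type"))

# Test records can now be manipulated by the short key alone:
conn.execute("UPDATE visits SET url = ? WHERE record_id = ?",
             ("http://hostname:8080/other", 1))
row = conn.execute("SELECT url FROM visits WHERE record_id = 1").fetchone()
```

In SQLite, `INTEGER PRIMARY KEY` makes `record_id` an alias for the built-in rowid, so the handle costs nothing extra to store.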

Every week one of the founders of my company sends around a miscellaneous question, and collates the answers, which are shared with everyone on Monday. This week the question was “What's the best advice you've ever heard?”

My first draft went like this:

When I was a freshman in college, I skipped a bunch of physics labs that were part of my physics grade. Toward the end of the semester, with grades looming, I began to regret this and went to the TA, to ask if I could do them anyway. He took me to see the lab director, whose permission was required.

The lab director asked why I'd missed the labs the first time around. I said, truthfully, that I had no good excuse.

As soon as I had left the room with the TA, he turned to me and whispered fiercely, “You should have lied!”

That advice is probably very good, and I am very bad at taking it. I should have written a heartwarming little homily about how my uncle always told me to look for the good in people's hearts, or something uplifting like that.

So here I am, not taking that TA's advice, again.

I thought about that for a while and wondered if I could think of anything else to write down. I added:

If you don't like “You should have lied!”, I offer instead “Nothing is often a good thing to do, and always a clever thing to say.”

I thought about *that* for a while and decided that nothing was a much
cleverer thing to say, and I had better take my own advice. So I
scrubbed it all out.

I did finally find a good answer. I told everyone that when I was fifteen, my cousin Alex, who is a chemistry professor, told me never to go anywhere without a pen and paper. That may actually be the best advice I have ever received, and I do think it beats out the TA's.

My current employer uses an online quiz to pre-screen applicants for open positions. The first question on the quiz is a triviality, just to let the candidate get familiar with the submission and testing system. The question is to write a program that copies standard input to standard output. Candidates are allowed to answer the questions using whatever language they prefer.

Sometimes we get candidates who get a zero score on the test. When I see the report that they failed to answer even the trivial question, my first thought is that this should not reflect badly on the candidate. Clearly, the testing system itself is so hard to use that the candidate was unable to submit even a trivial program, and this is a failure of the testing system and not the candidate.

But it has happened more than once that when I look at the candidate's incomplete submissions I see that the problem, at least this time, is not necessarily in the testing system. There is another possible problem that had not even occurred to me. The candidate failed the trivial question because they tried to write the answer in Java.

I am reminded of Dijkstra's remark that the teaching of BASIC should be rated as a criminal offense. Seeing the hapless candidate get bowled over by a question that should be a mere formality makes me wonder if the same might be said of Java.

I'm not sure. It's possible that this is still a failure of the quiz. It's possible that the Java programmers have valuable skills that we could use, despite their inability to produce even a trivial working program in a short amount of time. I could be persuaded, but right now I have a doubtful feeling.

When you learn Perl, Python, Ruby, or Javascript, one of the things you learn is a body of technique for solving problems using hashes, which are an integral part of the language. When you learn Haskell, you similarly learn a body of technique for solving problems with lazy lists and monads. These kinds of powerful general-purpose tools are at the forefront of the language.

But when you learn Java, there aren't any powerful language features you can use to solve many problems. Instead, you spend your time learning a body of technique for solving problems *in the language*. Java has hashes, but if you are aware of them at all, they are just another piece of the immense `Collections` library, lost among the many other sorts of collections, and you have no particular reason to know about them or think about them. A good course of Java instruction might emphasize the more useful parts of the Collections, but since they're just another part of the library it may not be obvious that hashes are any more or less useful than, say, `AbstractAction` or `ZipOutputStream`.

I was a professional Java programmer for three years (in a different
organization), and I have meant for some time to write up my thoughts
about it. I am often very bitter and sarcastic, and I willingly admit
that I am relentlessly negative and disagreeable, so it can be hard to
tell when I am in earnest about liking something. I once tried to
write a complimentary article about
Blosxom, which has
generated my blog since 2006, and I completely failed; people thought
I was being critical, and I had to write a followup
article to clarify, and
people *still* thought I was dissing Blosxom. Because this article
about Java might be confused with sarcastic criticism, I must state
clearly that everything in this article about Java is in earnest, and
should be taken at face value. Including:

### I really like Java

I am glad to have had the experience of programming in Java. I liked programming in Java mainly because I found it very relaxing. With a bad language, like say Fortran or `csh`, you struggle to do anything at all, and the language fights with you every step of the way forward. With a good language there is a different kind of struggle, to take advantage of the language's strengths, to get the maximum amount of functionality, and to achieve the clearest possible expression.

Java is neither a good nor a bad language. It is a mediocre language, and there is no struggle. In Haskell or even in Perl you are always worrying about whether you are doing something in the cleanest and the best way. In Java, you can forget about doing it in the cleanest or the best way, because that is impossible. Whatever you do, however hard you try, the code will come out mediocre, verbose, redundant, and bloated, and the only thing you can do is relax and keep turning the crank until the necessary amount of code has come out of the spout. If it takes ten times as much code as it would to program in Haskell, that is all right, because the IDE will generate half of it for you, and you are still being paid to write the other half.

So you turn the crank, draw your paycheck, and you don't have to worry about the fact that it takes at least twice as long and the design is awful. You can't solve any really hard design problems, but there is a book you can use to solve some of the medium-hard ones, and solving those involves cranking out a lot more Java code, for which you will also be paid. You are a coder, your job is to write code, and you write a lot of code, so you are doing your job and everyone is happy.

You will not produce anything really brilliant, but you will probably
not produce anything too terrible either. The project might fail, but
if it does you can probably put the blame somewhere else. After all,
you produced 576 classes that contain 10,000 lines of Java code, all
of it seemingly essential, so you were doing *your* job. And nobody
can glare at you and demand to know why you used 576 classes when you
should have used 50, because in Java doing it with only 50 classes is
probably impossible.

(Different languages have different failure modes. With Perl, the project might fail because you designed and implemented a pile of shit, but there is a clever workaround for any problem, so you might be able to keep it going long enough to hand it off to someone else, and then when it fails it will be their fault, not yours. With Haskell someone probably should have been fired in the first month for choosing to do it in Haskell.)

So yes, I enjoyed programming in Java, and being relieved of the responsibility for producing a quality product. It was pleasant to not have to worry about whether I was doing a good job, or whether I might be writing something hard to understand or to maintain. The code was ridiculously verbose, of course, but that was not my fault. It was all out of my hands.

So I like Java. But it is not a language I would choose for answering test questions, unless maybe the grade was proportional to the number of lines of code written. On the test, you need to finish quickly, so you need to optimize for brevity and expressiveness. Java is many things, but it is neither brief nor expressive.

When I see that some hapless job candidate struggled for 15 minutes and 14 seconds to write a Java program for copying standard input to standard output, and finally gave up, without even getting to the real questions, it makes me sad that their education, which was probably expensive, has not equipped them with better tools or to do something other than grind out Java code.

(Like everything else in this section, these are notes for a project that was never completed.)

### Introduction

These incomplete notes from 1997-2001 are grappling with the problem of transforming data structures in a language like Perl, Python, Java, Javascript, or even Haskell. A typical problem is to take an input of this type:

```
[
  [Chi, Ill],
  [NY, NY],
  [Alb, NY],
  [Spr, Ill],
  [Tr, NJ],
  [Ev, Ill],
]
```

and to transform it to an output of this type:

```
{ Ill => [Chi, Ev, Spr],
  NY => [Alb, NY],
  NJ => [Tr],
}
```

One frequently writes code of this sort, and it should be possible to specify the transformation with some sort of high-level declarative syntax that is easier to read and write than the following gibberish:

```
my $out;
for my $pair (@$in) {
    push @{$out->{$pair->[1]}}, $pair->[0];
}
for my $k (keys %$out) {
    @{$out->{$k}} = sort @{$out->{$k}};
}
```

This is especially horrible in Perl, but it is bad in any language. Here it is in a hypothetical language with a much less crusty syntax:

```
for pair (in.items) :
    out[pair[1]].append(pair[0])
for list (out.values) :
    list.sort
```

You still can't see what is really going on without executing the code in your head. It is hard for a beginner to write, and hard for anyone to understand.
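For comparison, the same city-to-state grouping written directly in Python; it is shorter, but still operational rather than declarative:

```python
from collections import defaultdict

# Group each city under its state, then sort each group.
pairs = [("Chi", "Ill"), ("NY", "NY"), ("Alb", "NY"),
         ("Spr", "Ill"), ("Tr", "NJ"), ("Ev", "Ill")]

out = defaultdict(list)
for city, state in pairs:
    out[state].append(city)
for cities in out.values():
    cities.sort()
```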

### Original undated notes from around 1997–1998

Consider this data structure DS1:

```
[
[Chi, Ill],
[NY, NY],
[Alb, NY],
[Spr, Ill], DS1
[Tr, NJ],
[Ev, Ill],
]
```

This could be transformed several ways:

```
{
Chi => Ill,
NY => NY,
Alb => NY,
Spr => Ill, DS2
Tr => NJ,
Ev => Ill,
}
{ Ill => [Chi, Spr, Ev],
NY => [NY, Alb], DS3
NJ => Tr,
}
{ Ill => 3,
NY => 2,
NJ => 1,
}
[ Chi, Ill, NY, NY, Alb, NY, Spr, Ill, Tr, NJ, Ev, Ill] DS4
```

Basic idea: Transform original structure of nesting depth *N* into an
*N*-dimensional table. If *N*th nest is a hash, index table ranks by hash
keys; if an array, index by numbers. So for example, DS1 becomes

```
1 2
1 Chi Ill
2 NY NY
3 Alb NY
4 Spr Ill
5 Tr NJ
6 Ev Ill
```

Or maybe hashes should be handled a little differently? The original basic idea was more about DS2 and transformed it into

```
Ill NY NJ
Chi X
NY X
Alb X
Spr X
Tr X
Ev X
```

Maybe the rule is: For hashes, use a boolean table indexed by keys and values; for arrays, use a string table index by integers.
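The hash rule can be sketched in Python: DS2 becomes a boolean table whose rows are the hash keys and whose columns are the distinct values.

```python
# Flatten the hash DS2 into a boolean table: rows are keys, columns are
# the distinct values, and a cell is True where the key maps to that
# value (an X in the table above).
ds2 = {"Chi": "Ill", "NY": "NY", "Alb": "NY",
       "Spr": "Ill", "Tr": "NJ", "Ev": "Ill"}

columns = sorted(set(ds2.values()))
table = {k: {c: (v == c) for c in columns} for k, v in ds2.items()}
```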

Notation idea: Assign names to the dimensions of the table, say X and Y. Then denote transformations by:

```
[([X, Y])] (DS1)
{(X => Y)} (DS2)
{X => [Y]} (DS3)
[(X, Y)] (DS4)
```

The (...) are supposed to indicate a chaining of elements within the larger structure. But maybe this isn't right.

At the bottom: How do we say whether

```
X=>Y, X=>Z
```

turns into

```
[ X => Y, X => Z ] (chaining)
```

or [ X => [Y, Z] ] (accumulation)

Consider

```
A B C
D . . .
E . .
F . .
```

`<...>` means ITERATE over the thing inside and make a list of the
results. It's basically `map`.

Note that:

```
<X,Y> |= (D,A,D,B,D,C,E,A,E,B,F,A,F,C)
<X,[<Y>]> |= (D,[A,B,C],E,[A,B],F,[A,C])
```
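Operationally, the two iteration forms can be sketched in Python; the set `present` encodes the dots in the table above:

```python
from itertools import product

# The relation from the table: which (x, y) pairs are present.
present = {("D", "A"), ("D", "B"), ("D", "C"),
           ("E", "A"), ("E", "B"),
           ("F", "A"), ("F", "C")}
X, Y = ["D", "E", "F"], ["A", "B", "C"]

# <X,Y>: one flat list over the cartesian product.
flat = [item for x, y in product(X, Y) if (x, y) in present
        for item in (x, y)]

# <X,[<Y>]>: for each x, an inner list of its ys.
nested = [item for x in X
          for item in (x, [y for y in Y if (x, y) in present])]
```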

Brackets and braces just mean brackets and braces. Variables at the same level of nesting imply a loop over the cartesian join. Variables subnested imply a nested loop. So:

```
<X,Y> means
for x in X
for y in Y
push @result, (x,y) if present(x,y);
```

But

```
<X,<Y>> means
for x in X
for y in Y
push @yresult, (y) if present(x,y);
push @result, @yresult
```

Hmmm. Maybe there's a better syntax for this.

Well, with this plan:

```
DS1: [ <[X,Y]> ]
DS2: { <X=>Y> }
DS3: { <X => [<Y>]> }
DS4: [ <X, Y> ]
```

It seems pretty flexible. You could just as easily write

```
{ <X => max(<Y>) }
```

and you'd get

```
{ D => C, E => B, F => C }
```

If there's a `count` function, you can get

```
{ D => 3, E => 2, F => 2 }
```

or maybe we'll just overload `scalar` to mean `count`.

Question: How to invert this process? That's important so that you can ask it to convert one data structure to another. Also, then you could write something like

```
[ <city, state> ] |= { <state => [<city>] > }
```

and omit the X's and Y's.

Real example: From proddir. Given

```
ID / NAME / SHADE / PALETTE / DESC
```

For example:

```
A / AAA / red / pink / Aaa
B / BBB / yellow / tawny / Bbb
A / AAA / green / nude / Aaa
B / BBB / blue / violet / Bbb
C / CCC / black / nude / Ccc
```

Turn this into

```
{ A => [ AAA, [ [red, pink], [green, nude] ], Aaa],
B => [ BBB, [ [yellow, tawny], [blue, violet] ], Bbb],
C => [ CCC, [ [black, nude] ], CCC]
}
{ < ID => [
name,
[ <[shade, palette]> ]
desc
]>
}
```

Something interesting happened here. Suppose we have

```
[ [A, B]
[A, B]
]
```

And we ask for `<A, B>`. Do we get (A, B, A, B), or just (A, B)? Does
it remove duplicate items for us or not? We might want either.

In the example above, why didn't we get

```
{ A => [ AAA, [ [red, pink], [green, nude] ], Aaa],
A => [ AAA, [ [red, pink], [green, nude] ], Aaa],
B => [ BBB, [ [yellow, tawny], [blue, violet] ], Bbb],
B => [ BBB, [ [yellow, tawny], [blue, violet] ], Bbb],
C => [ CCC, [ [black, nude] ], CCC]
}
```

If the outer iteration was supposed to be over all id-name-desc triples? Maybe we need

```
<...> all triples
<!...!> unique triples only
```

Then you could say

```
<X> |= <!X!>
```

to indicate that you want to uniq a list.

But maybe the old notation already allowed this:

```
<X> |= keys %{< X => 1 >}
```

It's still unclear how to write the example above, which has unique key-triples. But it's in a hash, so it gets uniqed on ID anyway; maybe that's all we need.

### 1999-10-23

Rather than defining some bizarre metalanguage to describe the transformation, it might be easier all around if the user just enters a sample input, a sample desired output, and lets the twingler figure out what to do. Certainly the parser and internal representation will be simpler.

For example:

```
[ [ A, B ],
[ C, B ],
[ D, E ] ]
--------------
{ B => [A, C],
E => [D],
}
```

should be enough for it to figure out that the code is:

```
for my $a1 (@$input) {
    my ($e1, $e2) = @$a1;
    push @{$output{$e2}}, $e1;
}
```

Advantage: After generating the code, it can run it on the sample input to make sure that the output is correct; otherwise it has a bug.

Input grammar:

```
%token ELEMENT
expr: array | hash ;
array: '[' csl ']' ;
csl: ELEMENT | ELEMENT ',' csl | /* empty */ ;
hash: '{' cspl '}' ;
cspl: pair | pair ',' cspl | /* empty */ ;
pair: ELEMENT '=>' ELEMENT;
```

Simple enough. Note that (...) lines are not allowed. They are only useful at the top level. A later version can allow them. It can replace the outer (...) with [...] or {...} as appropriate when it sees the first top-level separator. (If there is a => at the top level, it is a hash; otherwise an array.)

Idea for code generation: Generate pseudocode first. Then translate to Perl. Then you can insert a peephole optimizer later. For example

```
foreachkey k (somehash) {
    push somearray, $somehash{k}
}
```

could be optimized to

```
somearray = values somehash;
```

add into hash: as key; add into value; replace value.
add into array: at end only.

How do we analyze something like:

```
[ [ A, B ],
[ C, B ],
[ D, E ] ]
--------------
{ B => [A, C],
E => [D],
}
```

Idea: Analyze structure of input. Analyze structure of output and figure out an expression to deposit each kind of output item. Iterate over input items. Collect all input items into variables. Deposit items into output in appropriate places.

For an input array, tag the items with index numbers. See where the indices go in the output. Try to discern a pattern. The above example:

```
Try #1:
A: 1
B: 2
C: 1
B: 2 -- consistent with B above
D: 1
E: 2
Output: 2 => [1, 1]
2 => [1]
```

OK—2s are keys, 1s are array elements.

A different try fails:

```
A: 1
B: 1
C: 2
B: 2 -- inconsistent, give up on this.
```

Now consider:

```
[ [ A, B ],
[ C, B ],
[ D, E ] ]
--------------
{ A => B,
C => B,
D => E,
}
```

A, C, D get 1; B, E get 2. This works again. 1s are keys, 2s are values.

I need a way of describing an element of a nested data structure as a simple descriptor so that I can figure out the mappings between descriptors. For arrays and nested arrays, it's pretty easy: Use the sequence of numeric indices. What about hashes? Just K/V? Or does V need to be qualified with the key perhaps?

Example above:

```
IN: A:11 B:12 22 C:21 D:31 E:32
OUT: A:K B:V C:K D:K E:V
```

Now try to find a mapping from the top set of labels to the bottom.
`x1 => K, x2 => V` works.

Problem with this:

```
[ [ A, B ],
[ B, C ],
]
------------
{ A => B,
B => C,
}
```

is unresolvable. Still, maybe this works well enough in most common cases.
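The index-tagging tries above can be sketched as a small Python routine; it returns a consistent item-to-column labeling, or gives up (returns None) when, as in the last example, no consistent labeling exists:

```python
def tag_columns(rows):
    # Tag each item with its 1-based column index; if any item ever
    # appears with two different tags, the labeling is inconsistent.
    tags = {}
    for row in rows:
        for i, item in enumerate(row, start=1):
            if tags.setdefault(item, i) != i:
                return None  # inconsistent -- give up on this labeling
    return tags
```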

Let's consider:

```
[[ A , AAA , red , pink , Aaa],
[ B , BBB , yellow , tawny , Bbb],
[ A , AAA , green , nude , Aaa],
[ B , BBB , blue , violet , Bbb],
[ C , CCC , black , nude , Ccc],
]
-------------------------------------------------------------
{ A => [ AAA, [ [red, pink], [green, nude] ], Aaa],
B => [ BBB, [ [yellow, tawny], [blue, violet] ], Bbb],
C => [ CCC, [ [black, nude] ], CCC]
}
A: 00,20 => K
AAA: 01,21 => V0
red: 02 => V100
pink: 03 => V101
Aaa: 04 => V2
B: 10,30 => K
C: 40 => K
```

etc.

Conclusion: `x0 => K; x1 => V0; x2 => V100; x3 => V101; x4 => V2`

How to reverse?

Simpler reverse example:

```
{ A => [ B, C ],
E => [ D ],
}
---------------------
[ [ A, B ],
[ A, C ],
[ E, D ],
]
A: K => 00, 10
B: V0 => 01
C: V1 => 11
D: V0 => 21
E: K => 20
```

Conclusion: `K => x0; V => x1`

This isn't enough information. Really, `V => k1`, where `k` is whatever
the key was!

What if V items have the associated key too?

```
A: K => 00, 10
B: V{A}0=> 01
C: V{A}1=> 11
D: V{E}0=> 21
E: K => 20
```

Now there's enough information to realize that B and C stay with the A, if we're smart enough to figure out how to use it.

### 2001-07-28

Sent to Nyk Cowham

### 2001-08-24

Sent to Timur Shtatland

### 2001-10-28

Here's a great example. The output from `HTML::LinkExtor` is a list
like

```
([img, src, URL1, losrc, URL2],
[a, href, URL3],
...
)
```

we want to transform this into

```
(URL1 => undef,
URL2 => undef,
URL3 => undef,
...
)
```
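A sketch of this transformation in Python (the tuple shapes mirror the `HTML::LinkExtor` output above; the URLs are placeholders):

```python
# Each input record is a tag followed by attribute/URL pairs;
# collect every URL as a key of the output hash.
links = [("img", "src", "URL1", "losrc", "URL2"),
         ("a", "href", "URL3")]

urls = {}
for record in links:
    rest = record[1:]         # drop the tag name
    for url in rest[1::2]:    # URLs sit after their attribute names
        urls[url] = None
```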
