
[This article was published last month on the math.stackexchange blog, which seems to have died young, despite many earnest-sounding promises beforehand from people who claimed they would contribute material. I am repatriating it here.]

A recent question on math.stackexchange asks for the smallest positive integer $N$ for which $2N$ has the same decimal digits in some other order.

Math geeks may immediately realize that $142857$ has this property, because it is the first 6 digits of the decimal expansion of $\frac17$, and the cyclic behavior of the decimal expansion of $\frac17$ is well-known. But is this the *minimal* solution? It is not. Brute-force enumeration of the solutions quickly reveals that there are 12 solutions of 6 digits each, all permutations of $125874$, and that larger solutions, such as 1025874 and 1257489, seem to follow a similar pattern. What is happening here?
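The brute-force enumeration mentioned above takes only a few lines in any language; here is a sketch in Python (mine, not part of the original article):

```python
def same_digits(n):
    # True when 2n uses exactly the same decimal digits as n
    return sorted(str(n)) == sorted(str(2 * n))

# all solutions of at most 6 digits
solutions = [n for n in range(1, 10**6) if same_digits(n)]
print(min(solutions))   # 125874, the minimal solution
print(len(solutions))   # 12 -- and all of them have 6 digits
```

Every one of the twelve turns out to be a rearrangement of the digits 1, 2, 4, 5, 7, 8, which is the phenomenon the rest of the article explains.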

Stuck in Dallas-Fort Worth airport one weekend, I did some work on the problem, and although I wasn't able to solve it completely, I made significant progress. I found a method that allows one to hand-calculate that there is no solution with fewer than six digits, and to enumerate all the solutions with 6 digits, including the minimal one. I found an explanation for the surprising behavior that solutions tend to be permutations of one another. The short form of the explanation is that there are fairly strict conditions on which *sets* of digits can appear in a solution of the problem. But once the set of digits is chosen, the conditions on the *order* of the digits in the solution are fairly lax.

So one typically sees, not only in base 10 but in other bases, that the solutions to this problem fall into a few classes that are all permutations of one another; this is exactly what happens in base 10 where all the 6-digit solutions are permutations of $125874$. As the number of digits is allowed to increase, the strict first set of conditions relaxes a little, and other digit groups appear as solutions.

### Notation

The property of interest, $P_R(N)$, is that the numbers $N$ and $2N$ have exactly the same base-$R$ digits. We would like to find numbers $N$ having property $P_R$ for various $R$, and we are most interested in $R=10$. Suppose $N$ is an $n$-digit numeral having property $P_R$; let the (base-$R$) digits of $N$ be $a_{n-1}\ldots a_1a_0$ and similarly the digits of $2N$ are $b_{n-1}\ldots b_1b_0$. The reader is encouraged to keep in mind the simple example of $N = 1042$ in base 8, which we will bring up from time to time.

Since the digits of $N$ and $2N$ are the same, in a different order, we may say that $b_i = a_{P(i)}$ for some permutation $P$. In general $P$ might have more than one cycle, but we will suppose that $P$ is a single cycle. All the following discussion of $P$ will apply to the individual cycles of $P$ in the case that $P$ is a product of two or more cycles. For our example of $1042_8$, we have $P = (0\,1\,2\,3)$ in cycle notation. We won't need to worry about the details of $P$, except to note that $i, P(i), P(P(i)), \ldots, P^{n-1}(i)$ completely exhaust the indices $0, 1, \ldots, n-1$, and that $P^n(i) = i$ because $P$ is an $n$-cycle.

### Conditions on the set of digits in a solution

For each $i$ we have $$a_{P(i)} = b_{i} \equiv 2a_{i} + c_i\pmod R $$ where the ‘carry bit’ $c_i$ is either 0 or 1 and depends on whether there was a carry when doubling $a_{i-1}$. (When $i=0$ we are in the rightmost position and there is never a carry, so $c_0 = 0$.) We can then write:

$$\begin{align} a_{P(P(i))} &= 2a_{P(i)} + c_{P(i)} \\ &= 2(2a_{i} + c_i) + c_{P(i)} &&= 4a_i + 2c_i + c_{P(i)}\\ a_{P(P(P(i)))} &= 2(4a_i + 2c_i + c_{P(i)}) + c_{P(P(i))} &&= 8a_i + 4c_i + 2c_{P(i)} + c_{P(P(i))}\\ &&&\vdots\\ a_{P^n(i)} &&&= 2^na_i + v \end{align} $$

all equations taken $\pmod R$. But since $P$ is an $n$-cycle, $P^n(i) = i$, so we have $$a_i \equiv 2^na_i + v\pmod R$$ or equivalently $$\big(2^n-1\big)a_i + v \equiv 0\pmod R\tag{$\star$}$$ where $v = 2^{n-1}c_i + 2^{n-2}c_{P(i)} + \cdots + c_{P^{n-1}(i)}$ depends only on the values of the carry bits; the $c_{P^k(i)}$ are precisely the binary digits of $v$.

Specifying a particular value of $a_0$ and $v$ that satisfy this equation completely determines all the $a_i$. For example, $a_0 = 2, v = 2$ is a solution when $R=8, n=4$, because $\big(2^4-1\big)\cdot 2 + 2 = 32 \equiv 0\pmod 8$, and this solution allows us to compute

$$\def\db#1{\color{darkblue}{#1}}\begin{align} a_0&&&=2\\ a_{P(0)} &= 2a_0 &+ \db0 &= 4\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 0 \\ a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 1\\ \hline a_{P^4(0)} &= 2a_{P^3(0)} &+ \db0 &= 2\\ \end{align}$$

where the carry bits are visible in the third column, and all the sums are taken $\pmod 8$. Note that $a_{P^4(0)} = a_0$ as promised. This derivation of the entire set of $a_i$ from a single one plus a choice of $v$ is crucial, so let's see one more example. Let's consider $R = 9, n = 3$. Then we want to choose $a_0$ and $v$ so that $\big(2^3-1\big)a_0 + v \equiv 0\pmod 9$ where $0\le v\lt 2^3$. One possible solution is $a_0 = 5, v = 1$. Then we can derive the other $a_i$ as follows:

$$\begin{align} a_0&&&=5\\ a_{P(0)} &= 2a_0 &+ \db0 &= 1\\ a_{P^2(0)} &= 2a_{P(0)} &+ \db0 &= 2 \\\hline a_{P^3(0)} &= 2a_{P^2(0)} &+ \db1 &= 5\\ \end{align}$$

And again we have $a_{P^3(0)} = a_0$ as required.
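This derivation is easy to mechanize. The following sketch (mine, not the author's program) recovers the whole digit cycle from a choice of $a_0$ and $v$, reading the bits of $v$ most-significant first, exactly as in the two tables above:

```python
def digit_cycle(a0, v, n, R):
    # bits of v, most significant first: c_0, c_P(0), c_{P^2(0)}, ...
    carries = [(v >> (n - 1 - k)) & 1 for k in range(n)]
    digits = [a0]
    for c in carries[:-1]:
        digits.append((2 * digits[-1] + c) % R)
    # the final doubling must wrap around to a0 again, per equation (*)
    assert (2 * digits[-1] + carries[-1]) % R == a0
    return digits

print(digit_cycle(2, 2, 4, 8))  # [2, 4, 0, 1], the first example above
print(digit_cycle(5, 1, 3, 9))  # [5, 1, 2], the second example
```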

Since the bits of $v$ are used cyclically, not every pair of $(a_0, v)$ will yield a different solution. Rotating the bits of $v$ and pairing them with different choices of $a_0$ will yield the same cycle of digits starting from a different place. In the first example above, we had $a_0 = 2, v = 2 = 0010_2$. If we were to take $a_0 = 4, v = 4 = 0100_2$ (which also solves $(\star)$) we would get the same cycle of values of the $a_i$ but starting from $4$ instead of from $2$, and similarly if we take $a_0 = 0, v = 8 = 1000_2$ or $a_0 = 1, v = 1 = 0001_2$. So we can narrow down the solution set of $(\star)$ by considering only the so-called bracelets of $v$ rather than all $2^n$ possible values. Two values of $v$ are considered equivalent as bracelets if one is a rotation of the other. When a set of $v$-values are equivalent as bracelets, we need only consider one of them; the others will give the same cyclic sequence of digits, but starting in a different place. For $n=4$, for example, the bracelets are $0000, 0001, 0011, 0101, 0111$, and $1111$; the sequences $0010, 0100$, and $1000$ being equivalent to $0001$, and so on.
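Enumerating the bracelet representatives is also mechanical; a small sketch (mine), taking the numerically smallest rotation as the representative of each class:

```python
def bracelets(n):
    # n-bit words, keeping one representative (the smallest) per rotation class
    mask = (1 << n) - 1
    reps = []
    for v in range(1 << n):
        rotations = [((v << k) | (v >> (n - k))) & mask for k in range(n)]
        if v == min(rotations):
            reps.append(v)
    return reps

print(bracelets(4))  # [0, 1, 3, 5, 7, 15]: 0000, 0001, 0011, 0101, 0111, 1111
```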

#### Example

Let us take $R = 9, n = 3$, so we want to find 3-digit numerals with property $P_9$. According to $(\star)$ we need $7a_i + v \equiv 0\pmod 9$ where $0\le v \lt 8$. There are 9 possible values for $a_i$; for each one there is at most one possible value of $v$ that makes the sum zero:


$$\begin{array}{rrr} a_i & 7a_i & v \\ \hline 0 & 0 & 0 \\ 1 & 7 & 2 \\ 2 & 14 & 4 \\ 3 & 21 & 6 \\ 4 & 28 & \\ 5 & 35 & 1 \\ 6 & 42 & 3 \\ 7 & 49 & 5 \\ 8 & 56 & 7 \\ \end{array} $$

(For $a_i = 4$ there is no solution.) We may disregard the non-bracelet values of $v$, as these will give us solutions that are the same as those given by bracelet values of $v$. The bracelets are:

$$\begin{array}{rl} 000 & 0 \\ 001 & 1 \\ 011 & 3 \\ 111 & 7 \end{array}$$

so we may disregard the solutions except when $v$ is one of $0$, $1$, $3$, or $7$. Calculating the digit sequences from these four values of $v$ and the corresponding $a_i$ we find:

$$\begin{array}{ccl} a_0 & v & \text{digits} \\ \hline 0 & 0 & 000 \\ 5 & 1 & 512 \\ 6 & 3 & 637 \\ 8 & 7 & 888 \\ \end{array} $$

(In the second line, for example, we have $a_0 = 5$ and $v = 1 = 001_2$, so $a_{P(0)} = 2\cdot 5 + 0 \equiv 1\pmod 9$ and $a_{P^2(0)} = 2\cdot 1 + 0 = 2$.)

Any number $N$ of three digits, for which $2N$ contains exactly the same three digits, in base 9, must therefore consist of exactly the digits $125$, $367$, or $888$.
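We can confirm this conclusion by brute force; a quick check in Python (not from the original article):

```python
def digits_in_base(n, R):
    ds = []
    while n:
        ds.append(n % R)
        n //= R
    return ds or [0]

# three-digit base-9 numerals N (no leading zero) whose double has the same digits
solutions = [n for n in range(9**2, 9**3)
             if sorted(digits_in_base(n, 9)) == sorted(digits_in_base(2 * n, 9))]
numerals = [''.join(str(d) for d in reversed(digits_in_base(n, 9))) for n in solutions]
print(numerals)  # ['125', '251', '376'], drawn from the digit sets {1,2,5} and {3,6,7}
```

The set $\{8,8,8\}$ satisfies the digit-set condition but, as the next section shows, admits no valid ordering.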

#### A warning

All the foregoing assumes that the permutation $P$ is *a single cycle*. In general, it may not be. Suppose we did an analysis like that above for $R = 10, n = 5$ and found that there was no possible digit set, other than the trivial set `00000`, that satisfied the governing equation $(\star)$. This would *not* completely rule out a base-10 solution with 5 digits, because the analysis only rules out a *cyclic* set of digits. There could still be a solution where $P$ was a product of a 2- and a 3-cycle, or a product of still smaller cycles.

Something like this occurs, for example, in the $R = 8, n = 4$ case. Solving the governing equation $(\star)$ yields only four nontrivial digit cycles, namely $\{0,1,2,4\}$, $\{1,3,4,6\}$, $\{2,5,2,5\}$, and $\{3,5,6,7\}$. But there are several additional solutions, such as $2500_8$, $2750_8$, and $2775_8$. These correspond to permutations with more than one cycle. In the case of $2750_8$, for example, $P$ exchanges the $2$ and the $5$, and leaves the $0$ and the $7$ fixed.

For this reason we cannot rule out the possibility of an $n$-digit solution without first considering all smaller $n$.

#### The Large Equals Odd rule

When $R$ is even there is a simple condition we can use to rule out certain sets of digits from being single-cycle solutions. Recall that the digits of $N$ are $a_{n-1}\ldots a_0$ and the digits of $2N$ are $b_{n-1}\ldots b_0$. Let us agree that a digit $d$ is *large* if $d \ge R/2$ and *small* otherwise. That is, $d$ is large if, upon doubling, it causes a carry into the next column to the left.

Since $b_i \equiv 2a_{i} + c_i\pmod R$, where the $c_i$ are carry bits, we see that, except for $b_0$, the digit $b_i$ is odd precisely when there is a carry from the next column to the right, which occurs precisely when $a_{i-1}$ is large. Thus the number of odd digits among $b_1, \ldots, b_{n-1}$ is equal to the number of large digits among $a_0, \ldots, a_{n-2}$.

This leaves the digits $b_0$ and $a_{n-1}$ uncounted. But $b_0$ is never odd, since there is never a carry in the rightmost position, and $a_{n-1}$ is always small (since otherwise $2N$ would have $n+1$ digits, which is not allowed). So the number of large digits in $a_{n-1}\ldots a_0$ is exactly equal to the number of odd digits in $b_{n-1}\ldots b_0$. And since $N$ and $2N$ have exactly the same digits, the number of large digits in $N$ is equal to the number of odd digits in $N$. Observe that this is the case for our running example $1042_8$: there is one odd digit (the 1) and one large digit (the 4).

When $R$ is odd the analogous condition is somewhat more complicated, but since the main case of interest is $R = 10$, we have the useful rule that:

For $R$ even, the number of odd digits in any solution $N$ is equal to the number of large digits in $N$.
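As a sanity check (mine, not the author's), we can verify the rule against every base-8 solution of up to four digits, and against the base-10 solution 142857:

```python
def digits_in_base(n, R):
    ds = []
    while n:
        ds.append(n % R)
        n //= R
    return ds or [0]

def odd_count(ds):
    return sum(1 for d in ds if d % 2)

def large_count(ds, R):
    return sum(1 for d in ds if 2 * d >= R)

def is_solution(n, R):
    return sorted(digits_in_base(n, R)) == sorted(digits_in_base(2 * n, R))

# every base-8 solution of up to 4 digits obeys the rule
solutions8 = [n for n in range(1, 8**4) if is_solution(n, 8)]
ok = all(odd_count(digits_in_base(n, 8)) == large_count(digits_in_base(n, 8), 8)
         for n in solutions8)
print(ok)  # True
print(odd_count(digits_in_base(142857, 10)) == large_count(digits_in_base(142857, 10), 10))  # True
```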

### Conditions on the order of digits in a solution

We have determined, using the above method, that the digits $\{1,2,5\}$ might form a base-9 numeral with property $P_9$. Now we would like to arrange them into a base-9 numeral that actually does have that property. Again let us write $N = a_2a_1a_0$ and $2N = b_2b_1b_0$, with $b_i \equiv 2a_i + c_i\pmod 9$. Note that if $a_i = 1$, then $b_i = 3$ (if there was a carry from the next column to the right) or $b_i = 2$ (if there was no carry), but since $3$ is impossible, we must have $b_i = 2$ and therefore $a_{i-1}$ must be small, since there is no carry into position $i$. But since $a_{i-1}$ is also one of $1, 2, 5$, and it cannot also be $1$, it must be $2$. This shows that the 1, unless it appears in the rightmost position, must be to the left of the $2$; it cannot be to the left of the $5$. Similarly, if $a_i = 2$ then $b_i = 5$, because $b_i = 4$ is impossible, so the $2$ must be to the left of a large digit, which must be the $5$. Similar reasoning produces no constraint on the position of the $5$; it could be to the left of a small digit (in which case it doubles to $1$) or a large digit (in which case it doubles to $2$). We can summarize these findings as follows:

$$\begin{array}{cl} \text{digit} & \text{to the left of} \\ \hline 1 & 1, 2, \text{end} \\ 2 & 5 \\ 5 & 1,2,5,\text{end} \end{array}$$

Here “end” means that the indicated digit could be the rightmost.

Furthermore, the left digit of $N$ must be small (or else there would be a carry in the leftmost place and $2N$ would have 4 digits instead of 3) so it must be either 1 or 2. It is not hard to see from this table that the digits must be in the order $125$ or $251$, and indeed, both of those numbers have the required property: $2\cdot 125_9 = 251_9$, and $2\cdot 251_9 = 512_9$.

This was a simple example, but in more complicated cases it is helpful to draw the order constraints as a graph. Suppose we draw a graph with one vertex for each digit, and one additional vertex to represent the end of the numeral. The graph has an edge from vertex $v$ to $v'$ whenever $v$ can appear to the left of $v'$. Then the graph drawn for the table above looks like this:

A 3-digit numeral with property $P_9$ corresponds to a path in this graph that starts at one of the nonzero small digits (marked in blue), ends at the red node marked ‘end’, and visits each node exactly once. Such a path is called *hamiltonian*. Obviously, self-loops never occur in a hamiltonian path, so we will omit them from future diagrams.

Now we will consider the digit set $\{3,6,7\}$, again in base 9. An analysis similar to the foregoing allows us to construct the following graph:

Here it is immediately clear that the only hamiltonian path is $3\to7\to6\to\text{end}$, and indeed, $2\cdot 376_9 = 763_9$.
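Instead of drawing the graph, one can simply test every ordering of a candidate digit set, since a hamiltonian path exists exactly when some ordering works. A small sketch (mine, not part of the original article):

```python
from itertools import permutations

def orderings(digit_set, R):
    # every arrangement of the digits (no leading zero) that actually
    # doubles to a permutation of itself in base R
    found = []
    for p in sorted(set(permutations(digit_set))):
        if p[0] == 0:
            continue
        n = 0
        for d in p:
            n = n * R + d
        m, doubled = 2 * n, []
        while m:
            doubled.append(m % R)
            m //= R
        if sorted(doubled) == sorted(p):
            found.append(''.join(map(str, p)))
    return found

print(orderings((3, 6, 7), 9))  # ['376']
print(orderings((1, 2, 5), 9))  # ['125', '251']
```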

In general there might be multiple instances of a digit, and so multiple nodes labeled with that digit. Analysis of the $000$ case produces a graph with no legal start nodes and so no solutions, unless leading zeroes are allowed, in which case $000$ is a perfectly valid solution. Analysis of the $888$ case produces a graph with no path to the end node and so no solutions. These two trivial patterns, all $0$s and all $(R-1)$s, appear for all $R$ and all $n$, and we will ignore them from now on.

Returning to our ongoing example, $1042$ in base 8, we see that the $1$ and the $2$ must double to $2$ and $4$, so must be to the left of small digits or the end, but the $0$ and the $4$ can double to either $0$ or $1$ and so could be to the left of anything. Here the constraints are so lax that the graph doesn't help us narrow them down much:

Observing that the only arrow into the 4 is from 0, so that the 4 must follow the 0, and that the entire number must begin with 1 or 2, we can enumerate the solutions:

1042 1204 2041 2104

If leading zeroes are allowed we have also:

0412 0421

All of these are solutions in base 8.
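A quick confirmation (not in the original) that these are precisely the base-8 solutions drawn from the digit set $\{0,1,2,4\}$; the leading-zero solutions cannot appear here, since integers drop leading zeroes:

```python
def octal(n):
    return format(n, 'o')

# all 4-digit base-8 solutions, then the ones using exactly the digits 0,1,2,4
sols = [octal(n) for n in range(8**3, 8**4)
        if sorted(octal(n)) == sorted(octal(2 * n))]
group = [s for s in sols if sorted(s) == list('0124')]
print(group)  # ['1042', '1204', '2041', '2104']
```

The full list `sols` also contains the multi-cycle solutions mentioned in the warning above, such as 2750.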

### The case of $R = 10$

Now we turn to our main problem, solutions in base 10.

To find *all* the solutions of length 6 requires an enumeration of smaller solutions, which, if they existed, might be concatenated into a solution of length 6. This is because our analysis of the digit sets that can appear in a solution assumes that the digits are permuted *cyclically*; that is, the permutations that we considered had only one cycle each. If we perform the analysis for the smaller lengths first, we can rule out these composite possibilities as we go.

There are no smaller solutions, but to prove that the length 6 solutions are minimal, we must analyze the cases for smaller $n$ and rule them out. We now produce a complete analysis of the base 10 case for $1\le n\le 6$. For $n = 1$ there is only the trivial solution of $N = 0$, which we disregard. (The question asked for a positive number anyway.)

For $n = 2$, we want to find solutions of $3a_i + v \equiv 0\pmod{10}$ where $v$ is a two-bit bracelet number, one of $00_2 = 0$, $01_2 = 1$, or $11_2 = 3$. Tabulating the values of $v$ and $a_i$ that solve this equation we get:

$$\begin{array}{cc} v& a_i \\ \hline 0 & 0 \\ 1& 3 \\ 3& 9 \\ \end{array}$$

We can disregard the $v = 0$ and $v = 3$ solutions because the former yields the trivial solution $00$ and the latter yields the nonsolution $99$. So the only possibility we need to investigate further is $a_i = 3, v = 1$, which corresponds to the digit sequence $36$: Doubling the $3$ gives us $6$ and doubling the $6$, plus a carry, gives us $3$ again.

But when we tabulate which digits must be left of which, we find that there is no solution with just $3$ and $6$, because the graph we get, once self-loops are eliminated, looks like this:

which obviously has no hamiltonian path. Thus there is no solution for $n = 2$.

For $n = 3$ we need to solve the equation $7a_i + v \equiv 0\pmod{10}$ where $v$ is a bracelet number in $\{0,\ldots,7\}$, specifically one of $0, 1, 3$, or $7$. Since $7$ and $10$ are relatively prime, for each $v$ there is a single $a_i$ that solves the equation. Tabulating the possible values of $a_i$ as before, and this time omitting rows with no solution, we have:

$$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0& 0 & 000\\ 1& 7 & 748 \\ 3& 1 & 125\\ 7&9 & 999\\ \end{array}$$

The digit sequences $000$ and $999$ yield trivial solutions or nonsolutions as usual, and we will omit them in the future. The other two lines suggest the digit sets $\{1,2,5\}$ and $\{4,7,8\}$, both of which fail the “odd equals large” rule.

This analysis rules out the possibility of a single-cycle digit set with $n = 3$, but it does not *completely* rule out a 3-digit solution, since one could be obtained by concatenating a one-digit and a two-digit solution, or three one-digit solutions. However, we know by now that no one- or two-digit solutions exist. Therefore there are no 3-digit solutions in base 10.

For $n = 4$ the governing equation is $15a_i + v \equiv 0\pmod{10}$ where $v$ is a 4-bit bracelet number, one of $0, 1, 3, 5, 7$, or $15$. This is a little more complicated because $\gcd(15, 10)\ne 1$. Tabulating the possible digit sets, we get:

$$\begin{array}{crrl} a_i & 15a_i& v & \text{digits}\\ \hline 0 & 0 & 0 & 0000\\ 1 & 5 & 5 & 1250\\ 1 & 5 & 15 & 1375\\ 2 & 0 & 0 & 2486\\ 3 & 5 & 5 & 3636\\ 3 & 5 & 15 & 3751\\ 4 & 0 & 0 & 4862\\ 5 & 5 & 5 & 5012\\ 5 & 5 & 15 & 5137\\ 6 & 0 & 0 & 6248\\ 7 & 5 & 5 & 7498\\ 7 & 5 & 15 & 7513\\ 8 & 0 & 0 & 8624 \\ 9 & 5 & 5 & 9874\\ 9 & 5 & 15 & 9999 \\ \end{array}$$

where the second column has been reduced mod $10$. Note that even restricting to bracelet numbers the table still contains duplicate digit sequences; the 15 entries on the right contain only the seven basic sequences $0000$, $1250$, $1375$, $2486$, $3636$, $7498$, and $9999$. Of these, only $0000$, $3636$, and $9999$ obey the odd equals large criterion, and we will disregard $0000$ and $9999$ as usual, leaving only $3636$. We construct the corresponding graph for this digit set as follows: each $3$ must double to $6$, not $7$, so must be left of a small digit or the end of the numeral. Each $6$ must double to $3$, not $2$, so must be left of a large digit, which can only be the other $6$. The corresponding graph is:

which evidently has no hamiltonian path: whichever $3$ we start at, we can move only to the other $3$ and then to the end node, and the $6$s are never visited. So there is no solution with $R = 10$ and $n = 4$.

The $n = 5$ case we leave as an exercise. There are 8 solutions to the governing equation, all of which are ruled out by the odd equals large rule.
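The conclusion so far, that there are no base-10 solutions of fewer than six digits, is easy to confirm by machine:

```python
# every positive integer of 1 to 5 digits whose double is a digit rearrangement
no_small = [n for n in range(1, 10**5)
            if sorted(str(n)) == sorted(str(2 * n))]
print(no_small)  # [] -- no solutions with fewer than 6 digits
```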

For $n = 6$ the possible solutions are given by the governing equation $63a_i + v \equiv 0\pmod{10}$ where $v$ is a 6-bit bracelet number, one of $0, 1, 3, 5, 7, 9, 11, 13, 15, 21, 23, 27, 31$, or $63$. Tabulating the possible digit sets, we get:

$$\begin{array}{rrl} v & a_i & \text{digits}\\ \hline 0 & 0 & 000000\\ 1 & 3 & 362486 \\ 3 & 9 & 986249 \\ 5 & 5 & 500012 \\ 7 & 1 & 124875 \\ 9 & 7 & 748748 \\ 11 & 3 & 362501 \\ 13 & 9 & 986374 \\ 15 & 5 & 500137 \\ 21 & 3 & 363636 \\ 23 & 9 & 987499 \\ 27 & 1 & 125125 \\ 31 & 3 & 363751 \\ 63 & 9 & 999999 \\ \end{array}$$

After ignoring $000000$ and $999999$ as usual, the large equals odd rule allows us to ignore all the other sequences except $124875$ and $363636$. The latter fails for the same reason that $36$ did when $n = 2$. But $124875$, the lone survivor, gives us a complicated derived graph containing many hamiltonian paths, every one of which is a solution to the problem:

It is not hard to pick out from this graph the minimal solution $125874$, for which $2\cdot 125874 = 251748$, and also our old friend $142857$, for which $2\cdot 142857 = 285714$.

We see here the reason why all the small numbers with property $P_{10}$ contain the digits $1, 2, 4, 5, 7, 8$. The constraints on *which* digits can appear in a solution are quite strict, and rule out all other sequences of six digits and all shorter sequences. But once a set of digits passes these stringent conditions, the constraints on its order are much looser, because $2N$ is only required to have the digits of $N$ in *some* order, and there are many possible orders, many of which will satisfy the rather loose conditions involving the distribution of the carry bits. This graph is typical: it has a set of small nodes and a set of large nodes, and each node is connected to either *all* the small nodes or *all* the large nodes, so that the graph has many edges, and, as in this case, a largish clique of small nodes and a largish clique of large nodes, and as a result many hamiltonian paths.

### Onward

This analysis is tedious but is simple enough to perform by hand in under an hour. As $n$ increases further, enumerating the solutions of the governing equation becomes very time-consuming. I wrote a simple computer program to perform the analysis for given $n$ and $R$, and to emit the possible digit sets that satisfied the large equals odd criterion. I had wondered if *every* base-10 solution contained equal numbers of the digits $1, 2, 4, 5, 7$, and $8$. This is the case for $n = 7$ (where there is only one admissible digit set), and for $n = 8$ and $n = 9$ (where there are only two each). But when we reach $n = 10$ the increasing number of bracelets has loosened up the requirements a little and there are 5 admissible digit sets. I picked two of the promising-seeming ones and quickly found two solutions by hand, both of which wreck any theory that the digits must all appear the same number of times.

### Acknowledgments

Thanks to Karl Kronenfeld for corrections and many helpful suggestions.

As I've discussed elsewhere, I once wrote a program to enumerate all the possible quilt blocks of a certain type. The quilt blocks in question are, in quilt jargon, sixteen-patch half-square triangles. A half-square triangle, also called a “patch”, is two triangles of fabric sewn together, like this:

Then you sew four of these patches into a four-patch, say like this:

Then to make a sixteen-patch block of the type I was considering, you take four identical four-patch blocks, and sew them together with rotational symmetry, like this:

It turns out that there are exactly 72 different ways to do this. (Blocks equivalent under a reflection are considered the same, as are blocks obtained by exchanging the roles of black and white, which are merely stand-ins for arbitrary colors to be chosen later.) Here is the complete set of 72:

It's immediately clear that some of these resemble one another, sometimes so strongly that it can be hard to tell how they differ, while others are very distinctive and unique-seeming. I wanted to make the computer classify the blocks on the basis of similarity.

My idea was to try to find a way to get the computer to notice which blocks have distinctive components of one color. For example, many blocks have a distinctive diamond shape in the center.

Some have a pinwheel like this:

which also has the diamond in the middle, while others have a different kind of pinwheel with no diamond:

I wanted to enumerate such components and ask the computer to list which blocks contained which shapes; then group them by similarity, the idea being that blocks with the same distinctive components are similar.

The program suite uses a compact notation of blocks and of shapes that makes it easy to figure out which blocks contain which distinctive components.

Since each block is made of four identical four-patches, it's enough just to examine the four-patches. Each of the half-square triangle patches can be oriented in two ways:

Here are two of the 12 ways to orient the patches in a four-patch:

Each 16-patch is made of four four-patches, and you must imagine that the four-patches shown above are in the *upper-left* position in the 16-patch. Then symmetry of the 16-patch block means that triangles with the same label are in positions that are symmetric with respect to the entire block. For example, the two triangles labeled `b` are on opposite sides of the block's northwest-southeast diagonal. But there is no symmetry of the full 16-patch block that carries triangle `d` to triangle `g`, because `d` is on the edge of the block, while `g` is in the interior.

Triangles must be colored opposite colors if they are part of the same patch, but other than that there are no constraints on the coloring.

A block might, of course, have patches in both orientations:

All the blocks with diagonals oriented this way are assigned descriptors made from the letters `bbdefgii`.

Once you have chosen one of the 12 ways to orient the diagonals in the four-patch, you still have to color the patches. A descriptor like `bbeeffii` describes the orientation of the diagonal lines in the squares, but it does not describe the way the four patches are colored; there are between 4 and 8 ways to color each sort of four-patch. For example, the `bbeeffii` four-patch shown earlier can be colored in six different ways:

In each case, all four diagonals run from northwest to southeast. (All other ways of coloring this four-patch are equivalent to one of these under one or more of rotation, reflection, and exchange of black and white.)

We can describe a patch by listing the descriptors of the eight triangles, grouped by which triangles form connected regions. For example, the first block above is:

`b/bf/ee/fi/i`

because there's an isolated white `b` triangle, then a black parallelogram made of a `b` and an `f` patch, then a white triangle made from the two white `e` triangles, then another parallelogram made from the black `f` and `i`, and finally in the middle, the white `i`. (The two white `e` triangles appear to be separated, but when four of these four-patches are joined into a 16-patch block, the two white `e` patches will be adjacent and will form a single large triangle.)

The other five `bbeeffii` four-patches are, in the same order they are shown above:

```
b/b/e/e/f/f/i/i
b/b/e/e/fi/fi
b/bfi/ee/f/i
bfi/bfi/e/e
bf/bf/e/e/i/i
```

All six have `bbeeffii`, but grouped differently depending on the colorings. The second one (`b/b/e/e/f/f/i/i`) has no regions with more than one triangle; the fifth (`bfi/bfi/e/e`) has two large regions of three triangles each, and two isolated triangles. In the latter four-patch, the `bfi` in the descriptor has three letters because the patch has a corresponding distinctive component made of three triangles.

I made up a list of the descriptors for all 72 blocks; I think I did this by hand. (The work directory contains a `blocks` file that maps blocks to their descriptors, but the `Makefile` does not say how to build it, suggesting that it was not automatically built.) From this list one can automatically extract a list of descriptors of interesting shapes: an interesting shape is two or more letters that appear together in some descriptor. (Or it can be the single letter `j`, which is exceptional; see below.) For example, `bffh` represents a distinctive component. It can only occur in a patch that has a `b`, two `f`s, and an `h`, like this one:

and it will only be significant if the `b`, the two `f`s, and the `h` are the same color:

in which case you get this distinctive and interesting-looking hook component.

There is only one block that includes this distinctive hook component; it has descriptor `b/bffh/ee/j`. But some of the distinctive components are more common. The `ee` component represents the large white half-diamonds on the four sides. A block with `ee` in its descriptor always looks like this:

and the blocks formed from such patches always have a distinctive half-diamond component on each edge, like this:

(The stippled areas vary from block to block, but the blocks with `ee` in their descriptors always have the half-diamonds as shown.)

The blocks listed at http://hop.perl.plover.com/quilt/analysis/images/ee.html all have the `ee` component. There are many differences between them, but they all have the half-diamonds in common.

Other distinctive components have similar short descriptors. The two pinwheels I mentioned above are `gh` and `fi`, respectively; if you look at the list of `gh` blocks and the list of `fi` blocks you'll see all the blocks with each kind of pinwheel.

Descriptor `j` is an exception. It makes an interesting shape all by itself, because any block whose patches have `j` in their descriptor will have a distinctive-looking diamond component in the center. The four-patch looks like this:

so the full sixteen-patch looks like this:

where the stippled parts can vary. A look at the list of blocks with component `j` will confirm that they all have this basic similarity.

I had made a list of the descriptors for each of the 72 blocks, and from this I extracted a list of the descriptors for interesting component shapes. Then it was only a matter of finding the component descriptors in the block descriptors to know which blocks contained which components; if two blocks share two different distinctive components, they probably look somewhat similar.

Then I sorted the blocks into groups, where two blocks were in the same group if they shared two distinctive components. The resulting grouping lists, for each block, which other blocks have at least two shapes in common with it. Such blocks do indeed tend to look quite similar.
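The grouping step can be sketched as follows. The block names and two of the descriptors here are invented for illustration (the real data lived in the author's `blocks` file); only `b/bf/ee/fi/i` is quoted from the text above.

```python
# Hypothetical miniature of the grouping scheme; block names and two
# descriptors are made up.
blocks = {
    'block-A': 'b/bf/ee/fi/i',     # descriptor quoted in the text
    'block-B': 'b/bf/ee/f/i/i',    # invented for illustration
    'block-C': 'b/b/e/e/f/f/i/i',  # no multi-triangle regions
}

def components(descriptor):
    # interesting components: regions of two or more triangles, plus the
    # exceptional single letter 'j'
    return {g for g in descriptor.split('/') if len(g) > 1 or g == 'j'}

def similar_pairs(blocks):
    # pairs of blocks sharing at least two distinctive components
    names = sorted(blocks)
    return [(a, b)
            for i, a in enumerate(names) for b in names[i + 1:]
            if len(components(blocks[a]) & components(blocks[b])) >= 2]

print(similar_pairs(blocks))  # [('block-A', 'block-B')]: they share 'bf' and 'ee'
```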

This strategy was actually the second thing I tried; the first thing didn't work out well. (I forget just what it was, but I think it involved finding polygons in each block that had white inside and black outside, or vice versa.) I was satisfied enough with this second attempt that I considered the project a success and stopped work on it.

The complete final results were:

- This tabulation of blocks that are somewhat similar
- This tabulation of blocks that are distinctly similar (This is *the* final product; I consider this a sufficiently definitive listing of “similar blocks”.)
- This tabulation of blocks that are extremely similar

And these tabulations of all the blocks with various distinctive components: bd bf bfh bfi cd cdd cdf cf cfi ee eg egh egi fgh fh fi gg ggh ggi gh gi j

It may also be interesting to browse the work directory.

A few weeks ago I asked people to predict, without trying it first, what this would print:

```
perl -le 'print(two + two == five ? "true" : "false")'
```

(If you haven't seen this yet, I recommend that you guess, and then test your guess, before reading the rest of this article.)

People familiar with Perl guess that it will print `true`; that is what I guessed. The reasoning is as follows: Perl is willing to treat the unquoted strings `two` and `five` as strings, as if they had been quoted, and is also happy to use the `+` and `==` operators on them, converting the strings to numbers in its usual way. If the strings had looked like `"2"` and `"5"` Perl would have treated them as 2 and 5, but as they don't look like decimal numerals, Perl interprets them as zeroes. (Perl wants to issue a warning about this, but the warning is not enabled by default.) Since the `two` and `five` are treated as zeroes, the result of the `==` comparison is true, and the string `"true"` should be selected and printed.

So far this is a little bit odd, but not excessively odd; it's the sort of thing you expect from programming languages, all of which more or less suck. For example, Python's behavior, although different, is about equally peculiar. Although Python does require that the strings `two` and `five` be quoted, it is happy to do its own peculiar thing with `"two" + "two" == "five"`, which happens to be false: in Python the `+` operator is overloaded and has completely different behaviors on strings and numbers, so that while in Perl `"2" + "2"` is the number 4, in Python it is the string `22`, and `"two" + "two"` yields the string `"twotwo"`. Had the program above actually printed `true`, as I expected it would, or even `false`, I would not have found it remarkable.
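The Python side of that comparison is easy to check:

```python
# In Python, + on strings concatenates, so the comparison is between
# "twotwo" and "five", which is False.
print("two" + "two")             # twotwo
print("two" + "two" == "five")   # False
print("2" + "2")                 # 22 (a string), not the number 4
```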

However, this is not what the program does do. The explanation of two paragraphs earlier is totally wrong. Instead, the program prints nothing, and the reason is incredibly convoluted and bizarre.

First, you must know that `print` has an optional first argument. (I have plans for an article about how optional first arguments are almost always a bad move, but contrary to my usual practice I will not insert it here.) In Perl, the `print` function can be invoked in two ways:

```
print HANDLE $a, $b, $c, …;
print $a, $b, $c, …;
```

The former prints out the list `$a, $b, $c, …` to the filehandle `HANDLE`; the latter uses the default handle, which typically points at the terminal. How does Perl decide which of these forms is being used? Specifically, in the second form, how does it know that `$a` is one of the items to be printed, rather than a variable containing the filehandle to print to?

The answer to this question is further complicated by the fact that the `HANDLE` in the first form could be either an unquoted string, which is the name of the handle to print to, or it could be a variable containing a filehandle value. Both of these `print`s should do the same thing:

```
my $handle = \*STDERR;
print STDERR $a, $b, $c;
print $handle $a, $b, $c;
```

Perl's method to decide whether a particular `print` uses an explicit or the default handle is a somewhat complicated heuristic. The basic rule is that the filehandle, if present, can be distinguished because its trailing comma is omitted. But if the filehandle were allowed to be the result of an arbitrary expression, it might be difficult for the parser to decide whether there was a comma; consider the hypothetical expression:

```
print $a += EXPRESSION, $b $c, $d, $e;
```

Here the intention is that the `$a += EXPRESSION, $b` expression calculates the filehandle value (which is actually retrieved from `$b`, the `$a += …` part being executed only for its side effect) and the remaining `$c, $d, $e` are the values to be printed. To allow this sort of thing would be way too confusing to both Perl and to the programmer. So there is the further rule that the filehandle expression, if present, must be short, either a simple scalar variable such as `$fh`, or a bare unquoted string that is in the right format for a filehandle name, such as `HANDLE`. Then the parser need only peek ahead a token or two to see if there is an upcoming comma.

So for example, in

```
print STDERR $a, $b, $c;
```

the `print` is immediately followed by `STDERR`, which could be a filehandle name, and `STDERR` is not followed by a comma, so `STDERR` is taken to be the name of the output handle. And in

```
print $x, $a, $b, $c;
```

the `print` is immediately followed by the simple scalar value `$x`, but this `$x` is followed by a comma, so it is considered one of the things to be printed, and the target of the `print` is the default output handle.

In

```
print STDERR, $a, $b, $c;
```

Perl has a puzzle: `STDERR` looks like a filehandle, but it is followed by a comma. This is a compile-time error; Perl complains “No comma allowed after filehandle” and aborts. If you want to print the literal string `STDERR`, you must quote it, and if you want to print A, B, and C to the standard error handle, you must omit the first comma.

Now we return to the original example.

```
perl -le 'print(two + two == five ? "true" : "false")'
```

Here Perl sees the unquoted string `two`, which could be a filehandle name, and which is not followed by a comma. So it takes the first `two` to be the output handle name. Then it evaluates the expression

```
+ two == five ? "true" : "false"
```

and obtains the value `true`. (The leading `+` is a unary plus operator, which is a no-op. The bare `two` and `five` are taken to be string constants, which, compared with the numeric `==` operator, are considered to be numerically zero, eliciting the same warning that I mentioned earlier that I had not enabled. Thus the comparison Perl actually does is 0 == 0, which is true, and the resulting string is `true`.)

This value, the string `true`, is then printed to the filehandle named `two`. Had we previously opened such a filehandle, say with

```
open two, ">", "output-file";
```

then the output would have been sent to the filehandle as usual. Printing to a non-open filehandle elicits an optional warning from Perl, but as I mentioned, I have not enabled warnings, so the `print` silently fails, yielding a false value.

Had I enabled those optional warnings, we would have seen a plethora of them:

```
Unquoted string "two" may clash with future reserved word at -e line 1.
Unquoted string "two" may clash with future reserved word at -e line 1.
Unquoted string "five" may clash with future reserved word at -e line 1.
Name "main::two" used only once: possible typo at -e line 1.
Argument "five" isn't numeric in numeric eq (==) at -e line 1.
Argument "two" isn't numeric in numeric eq (==) at -e line 1.
print() on unopened filehandle two at -e line 1.
```

(The first four are compile-time warnings; the last three are issued at execution time.) The crucial warning is the one at the end, advising us that the output of `print` was directed to the filehandle `two`, which was never opened for output.

[ Addendum 20140718: I keep thinking of the following remark of Edsger W. Dijkstra:

[This phenomenon] takes one of two different forms: one programmer places a one-line program on the desk of another and … says, "Guess what it does!" From this observation we must conclude that this language as a tool is an open invitation for clever tricks; and while exactly this may be the explanation for some of its appeal, viz., to those who like to show how clever they are, I am sorry, but I must regard this as one of the most damning things that can be said about a programming language.

But my intent is different from what Dijkstra describes. His programmer is proud, but I am disgusted. Incidentally, I believe that Dijkstra was discussing APL here. ]

As I've discussed elsewhere, I once wrote a program to enumerate all the possible quilt blocks of a certain type. The quilt blocks in question are, in quilt jargon, sixteen-patch half-square triangles. A half-square triangle, also called a “patch”, is two triangles of fabric sewn together, like this:

Then you sew four of these patches into a four-patch, say like this:

Then to make a sixteen-patch block of the type I was considering, you take four identical four-patch blocks, and sew them together with rotational symmetry, like this:

It turns out that there are exactly 72 different ways to do this. (Blocks equivalent under a reflection are considered the same, as are blocks obtained by exchanging the roles of black and white, which are merely stand-ins for arbitrary colors to be chosen later.) Here is the complete set of 72:

It's immediately clear that some of these resemble one another, sometimes so strongly that it can be hard to tell how they differ, while others are very distinctive and unique-seeming. I wanted to make the computer classify the blocks on the basis of similarity.

My idea was to try to find a way to get the computer to notice which blocks have distinctive components of one color. For example, many blocks have a distinctive diamond shape in the center.

Some have a pinwheel like this:

which also has the diamond in the middle, while others have a different kind of pinwheel with no diamond:

I wanted to enumerate such components and ask the computer to list which blocks contained which shapes; then group them by similarity, the idea being that that blocks with the same distinctive components are similar.

The program suite uses a compact notation of blocks and of shapes that makes it easy to figure out which blocks contain which distinctive components.

Since each block is made of four identical four-patches, it's enough just to examine the four-patches. Each of the half-square triangle patches can be oriented in two ways:

Here are two of the 12 ways to orient the patches in a four-patch:

Each 16-patch is made of four four-patches, and you must imagine that the four-patches shown above are in the *upper-left* position in the 16-patch. Then symmetry of the 16-patch block means that triangles with the same label are in positions that are symmetric with respect to the entire block. For example, the two triangles labeled `b` are on opposite sides of the block's northwest-southeast diagonal. But there is no symmetry of the full 16-patch block that carries triangle `d` to triangle `g`, because `d` is on the edge of the block, while `g` is in the interior.

Triangles must be colored opposite colors if they are part of the same patch, but other than that there are no constraints on the coloring.

A block might, of course, have patches in both orientations:

All the blocks with diagonals oriented this way are assigned descriptors made from the letters `bbdefgii`.

Once you have chosen one of the 12 ways to orient the diagonals in the four-patch, you still have to color the patches. A descriptor like `bbeeffii` describes the orientation of the diagonal lines in the squares, but it does not describe the way the four patches are colored; there are between 4 and 8 ways to color each sort of four-patch. For example, the `bbeeffii` four-patch shown earlier can be colored in six different ways:

In each case, all four diagonals run from northwest to southeast. (All other ways of coloring this four-patch are equivalent to one of these under one or more of rotation, reflection, and exchange of black and white.)

We can describe a patch by listing the descriptors of the eight triangles, grouped by which triangles form connected regions. For example, the first block above is `b/bf/ee/fi/i`, because there's an isolated white `b` triangle, then a black parallelogram made of a `b` and an `f` patch, then a white triangle made from the two white `e` triangles, then another parallelogram made from the black `f` and `i`, and finally, in the middle, the white `i`. (The two white `e` triangles appear to be separated, but when four of these four-patches are joined into a 16-patch block, the two white `e` patches will be adjacent and will form a single large triangle.)

The other five `bbeeffii` blocks are, in the same order they are shown above:

```
b/b/e/e/f/f/i/i
b/b/e/e/fi/fi
b/bfi/ee/f/i
bfi/bfi/e/e
bf/bf/e/e/i/i
```

All six have `bbeeffii`, but grouped differently depending on the colorings. The second one (`b/b/e/e/f/f/i/i`) has no regions with more than one triangle; the fifth (`bfi/bfi/e/e`) has two large regions of three triangles each, and two isolated triangles. In the latter four-patch, the `bfi` in the descriptor has three letters because the patch has a corresponding distinctive component made of three triangles.

I made up a list of the descriptors for all 72 blocks; I think I did this by hand. (The work directory contains a `blocks` file that maps blocks to their descriptors, but the `Makefile` does not say how to build it, suggesting that it was not automatically built.) From this list one can automatically extract a list of descriptors of interesting shapes: an interesting shape is two or more letters that appear together in some descriptor. (Or it can be the single letter `j`, which is exceptional; see below.) For example, `bffh` represents a distinctive component. It can only occur in a patch that has a `b`, two `f`s, and an `h`, like this one:

and it will only be significant if the `b`, the two `f`s, and the `h` are the same color:

in which case you get this distinctive and interesting-looking hook component.

There is only one block that includes this distinctive hook component; it has descriptor `b/bffh/ee/j`, and looks like this:

But some of the distinctive components are more common. The `ee` component represents the large white half-diamonds on the four sides. A block with `ee` in its descriptor always looks like this:

and the blocks formed from such patches always have a distinctive half-diamond component on each edge, like this:

(The stippled areas vary from block to block, but the blocks with `ee` in their descriptors always have the half-diamonds as shown.)

The blocks listed at http://hop.perl.plover.com/quilt/analysis/images/ee.html all have the `ee` component. There are many differences between them, but they all have the half-diamonds in common.

Other distinctive components have similar short descriptors. The two pinwheels I mentioned above are `gh` and `fi`, respectively; if you look at the list of `gh` blocks and the list of `fi` blocks you'll see all the blocks with each kind of pinwheel.

Descriptor `j` is an exception. It makes an interesting shape all by itself, because any block whose patches have `j` in their descriptor will have a distinctive-looking diamond component in the center. The four-patch looks like this:

so the full sixteen-patch looks like this:

where the stippled parts can vary. A look at the list of blocks with component `j` will confirm that they all have this basic similarity.

I had made a list of the descriptors for each of the 72 blocks, and from this I extracted a list of the descriptors for interesting component shapes. Then it was only a matter of finding the component descriptors in the block descriptors to know which blocks contained which components; if two blocks share two different distinctive components, they probably look somewhat similar.

Then I sorted the blocks into groups, where two blocks were in the same group if they shared two distinctive components. The resulting grouping lists, for each block, which other blocks have at least two shapes in common with it. Such blocks do indeed tend to look quite similar.

This strategy was actually the second thing I tried; the first thing didn't work out well. (I forget just what it was, but I think it involved calculating the number of differences between two blocks by comparing their patches pairwise.) I was satisfied enough with this that I considered the project a success and stopped work on it.

The complete final results were:

- This tabulation of blocks that are somewhat similar
- This tabulation of blocks that are distinctly similar (This is *the* final product; I consider this a sufficiently definitive listing of “similar blocks”.)
- This tabulation of blocks that are extremely similar

And these tabulations of all the blocks with various distinctive components: bd bf bfh bfi cd cdd cdf cf cfi ee eg egh egi fgh fh fi gg ggh ggi gh gi j

It may also be interesting to browse the work directory.

Earlier this week I gave a talk about the Curry-Howard
isomorphism. Talks never go quite
the way you expect. The biggest sticking point was my assertion that
there is no function with the type *a* → *b*. I mentioned this as a
throwaway remark on slide 7, assuming that everyone would agree
instantly, and then we got totally hung up on it for about twenty
minutes.

Part of this was my surprise at discovering that most of the audience (members of the Philly Lambda functional programming group) was not familiar with the Haskell type system. I had assumed that most of the members of a functional programming interest group would be familiar with one of Haskell, ML, or Scala, all of which have the same basic type system. But this was not the case. (Many people are primarily interested in Scheme, for example.)

I think the main problem was that I did not make clear to the audience what Haskell means when it says that a function has type *a* → *b*. At the talk, and then later on Reddit, people asked what about a function that takes an integer and returns a string: doesn't it have type *a* → *b*?

If you know one of the HM languages, you know that of course it doesn't; it has type `Int → String`, which is not the same at all. But I wasn't prepared for this confusion and it took me a while to formulate the answer. I think I underestimated the degree to which I have internalized the behavior of Hindley-Milner type systems after twenty years. Next time, I will be better prepared, and will say something like the following:

A function which takes an integer and returns a string does not have the type *a* → *b*; it has the type Int → String. You must pass it an integer, and you may only use its return value in a place that makes sense for a string. If *f* has this type, then `3 + f 4` is a compile-time type error because Haskell knows that *f* returns a string, and strings do not work with `+`.

But if *f* had the type *a* → *b*, then `3 + f 4` would be legal, because context requires that *f* return a number, and the type *a* → *b* says that it *can* return a number, because a number is an instance of the completely general type *b*. The type *a* → *b*, in contrast to Int → String, means that *b* and *a* are completely unconstrained.

Say function *f* had type *a* → *b*. Then you would be able to use the expression `f x` in any context that was expecting any sort of return value; you could write any or all of:

```
3 + f x
head(f x)
"foo" ++ f x
True && f x
```

and they would all type check correctly, regardless of the type of *x*. In the first line, *f x* would return a number; in the second line *f* would return a list; in the third line it would return a string, and in the fourth line it would return a boolean. And in each case *f* would have to do what was required regardless of the type of *x*, so without even looking at *x*. But how could you possibly write such a function *f*? You can't; it's impossible.

Contrast this with the identity function `id`, which has type *a* → *a*. This says that `id` always returns a value whose type is the same as that of its argument. So you can write

```
3 + id x
```

as long as *x* has the right type for `+`, and you can write

```
head(id x)
```

as long as `x` has the right type for `head`, and so on. But for *f* to have the type *a* → *b*, all those would have to work regardless of the type of the argument to *f*. And there is no way to write such an *f*.

Actually I wonder now if part of the problem is that we like to write
*a* → *b* when what we really mean is the type ∀a.∀b.*a* → *b*. Perhaps making
the quantifiers explicit would clear things up? I suppose it probably
wouldn't have, at least in this case.

The issue is a bit complicated by the fact that the function

```
loop :: a -> b
loop x = loop x
```

*does* have the type *a* → *b*, and, in a language with exceptions, `throw` has that type also; or consider Haskell

```
foo :: a -> b
foo x = undefined
```

Unfortunately, just as I thought I was getting across the explanation
of why there can be no function with type *a* → *b*, someone brought up
exceptions and I had to mutter and look at my shoes. (You can also take
the view that these functions have type *a* → ⊥, but the logical
principle ⊥ → *b* is unexceptionable.)

In fact, experienced practitioners will realize, the instant the type
*a* → *b* appears, that they have written a function that never returns.
Such an example was directly responsible for my own initial interest
in functional programming and type systems; I read a 1992 paper (“An anecdote about ML type inference”) by Andrew R. Koenig in which he described writing a merge sort function, whose type was reported (by the SML type inferencer) as `[a] -> [b]`, and the reason was that it had a bug that would cause it to loop forever on any nonempty list. I came back from that conference convinced that I must learn ML, and *Higher-Order Perl* was a direct (although distant) outcome of that conviction.

Any discussion of the Curry-Howard isomorphism, using Haskell as an example, is somewhat fraught with trouble, because Haskell's type logic is utterly inconsistent. In addition to the examples above, in Haskell one can write

```
fix :: (a -> a) -> a
fix f = let x = fix f
        in f x
```

which, as a statement of logic, is patently false. This might be an argument in favor of the Total Functional Programming suggested by D. A. Turner and others.

Here's a Perl quiz that I confidently predict *nobody* will get right.
Without trying it first, what does the following program print?

```
perl -le 'print(two + two == five ? "true" : "false")'
```

[ Summary: I gave a talk Monday night on the Curry-Howard isomorphism; my talk slides are online. ]

I sent several proposals to !!con, a conference of ten-minute talks. One of my proposals was to explain the Curry-Howard isomorphism in ten minutes, but the conference people didn't accept it. They said that they had had four or five proposals for talks about the Curry-Howard isomorphism, and didn't want to accept more than one of them.

The CHI talk they did accept turned out to be very different from the one I had wanted to give; it discussed the Coq theorem-proving system. I had wanted to talk about the basic correspondence between pair types, union types, and function types on the one hand, and reasoning about logical conjunctions, disjunctions, and implications on the other hand, and the !!con speaker didn't touch on this at all.

But mathematical logic and programming language types turn out to be the same! A type in a language like Haskell can be understood as a statement of logic, and the statement will be true if and only if there is actually a value with the corresponding type. Moreover, if you have a proof of the statement, you can convert the proof into a value of the corresponding type, or conversely if you have a value with a certain type you can convert it into a proof of the corresponding statement. The programming language features for constructing or using values with function types correspond exactly to the logical methods for proving or using statements with implications; similarly pair types correspond to logical conjunction, and union types to logical disjunction, and exceptions to logical negation. I think this is incredible. I was amazed the first time I heard of it (Chuck Liang told me sometime around 1993) and I'm still amazed.
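The correspondence just described can be tabulated. This summary is mine, not part of the original talk materials:

```latex
% Types on the left, logical statements on the right.
\begin{array}{ll}
\text{pair type } (a, b)                   & a \land b \\
\text{union type } \texttt{Either}\ a\ b   & a \lor b \\
\text{function type } a \to b              & a \Rightarrow b \\
\text{exception-raising type } a \to \bot  & \lnot a \\
\text{a value of type } T                  & \text{a proof of } T
\end{array}
```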

Happily Philly Lambda, a Philadelphia-area functional programming group, had recently come back to life, so I suggested that I give them a longer talk about the Curry-Howard isomorphism, and they agreed.

I gave the talk yesterday, and the materials are online. I'm not sure how easy they will be to understand without my commentary, but it might be worth a try.

If you're interested and want to look into it in more detail, I
suggest you check out Sørensen and Urzyczyn's *Lectures on the
Curry-Howard Isomorphism*. It was published as an expensive
yellow-cover book by Springer, but free copies of the draft are still
available.

A few weeks ago I asked people to predict, without trying it first, what this would print:

```
perl -le 'print(two + two == five ? "true" : "false")'
```

(If you haven't seen this yet, I recommend that you guess, and then test your guess, before reading the rest of this article.)

People familiar with Perl guess that it will print `true`; that is what I guessed. The reasoning is as follows: Perl is willing to treat the unquoted strings `two` and `five` as strings, as if they had been quoted, and is also happy to use the `+` and `==` operators on them, converting the strings to numbers in its usual way. If the strings had looked like `"2"` and `"5"` Perl would have treated them as 2 and 5, but as they don't look like decimal numerals, Perl interprets them as zeroes. (Perl wants to issue a warning about this, but the warning is not enabled by default.) Since the `two` and `five` are treated as zeroes, the result of the `==` comparison is true, and the string `"true"` should be selected and printed.

So far this is a little bit odd, but not excessively odd; it's the sort of thing you expect from programming languages, all of which more or less suck. For example, Python's behavior, although different, is about equally peculiar. Although Python does require that the strings `two` and `five` be quoted, it is happy to do its own peculiar thing with `"two" + "two" == "five"`, which happens to be false: in Python the `+` operator is overloaded and has completely different behaviors on strings and numbers, so that while in Perl `"2" + "2"` is the number 4, in Python it is the string `22`, and `"two" + "two"` yields the string `"twotwo"`. Had the program above actually printed `true`, as I expected it would, or even `false`, I would not have found it remarkable.
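The contrast is easy to check at a shell prompt (this assumes both `perl` and `python3` are on your PATH):

```shell
# Perl coerces strings to numbers for +; Python overloads + as concatenation.
perl -le 'print "2" + "2"'        # prints 4
python3 -c 'print("2" + "2")'     # prints 22
```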

However, this is not what the program does do. The explanation of two paragraphs earlier is totally wrong. Instead, the program prints nothing, and the reason is incredibly convoluted and bizarre.

First, you must know that `print` has an optional first argument. In Perl, the `print` function can be invoked in two ways:

```
print HANDLE A, B, C, ...;
print A, B, C, ...;
```

The former prints out the list `A, B, C, ...` to the filehandle `HANDLE`; the latter uses the default handle, which typically points at the terminal. How does Perl decide which of these forms is being used? Specifically, in the second form, how does it know that `A` is one of the items to be printed, rather than the name of the filehandle to print to?

The answer to this question is further complicated by the fact that the `HANDLE` in the first form could be either an unquoted string, which is the name of the handle to print to, or it could be a variable containing a filehandle value. Both of these `print`s should do the same thing:

```
my $handle = \*STDERR;
print STDERR A, B, C;
print $handle A, B, C;
```

Perl's method to decide whether a particular `print` uses an explicit or a default handle is a somewhat complicated heuristic. I *think* it is:

- Look at the first component after `print`.
- If it is either
  - a bare (unquoted) string that could be a filehandle name, or
  - a simple scalar variable (such as `$handle`),
- and it is not followed by a comma,
- then it is the optional filehandle.
The comma is the important thing: if there's no comma after the first item it is a filehandle; if there is a comma, the first item is the beginning of the output items. The restriction that the filehandle argument must be a bare string or a simple scalar variable is so that the parser doesn't have to search forward to infinity looking for a comma that may or may not be there; it can look forward a couple of tokens and if the comma hasn't appeared yet Perl gives up.

So for example, in

```
print STDERR A, B, C;
```

the `print` is immediately followed by `STDERR`, which could be a filehandle name, and `STDERR` is not followed by a comma, so `STDERR` is taken to be the name of the output handle. And in

```
print $x, A, B, C;
```

the `print` is immediately followed by the simple scalar value `$x`, but this `$x` is followed by a comma, so it is considered one of the things to be printed, and the target of the `print` is the default output handle.

In

```
print STDERR, A, B, C;
```

Perl has a puzzle: `STDERR` looks like a filehandle, but it is followed by a comma. This is a compile-time error; Perl complains “No comma allowed after filehandle” and aborts. If you want to print the literal string `STDERR`, you must quote it; if you want to print A, B, and C to the standard error handle, you must omit the first comma.

Now we return to the original example.

```
perl -le 'print(two + two == five ? "true" : "false")'
```

Here Perl sees the unquoted string `two`, which could be a filehandle name, and which is not followed by a comma. So it takes the first `two` to be the output handle name. Then it evaluates the expression

```
+ two == five ? "true" : "false"
```

and obtains the value `true`. (The leading `+` is a unary plus operator, which is a no-op. The bare `two` and `five` are taken to be string constants, which, compared with the numeric `==` operator, are considered to be numerically zero, eliciting the same warning that I mentioned earlier that I had not enabled. Thus the comparison Perl actually does is 0 == 0, which is true, and the resulting string is `true`.)

This value, the string `true`, is then printed to the filehandle named `two`. Had we previously opened such a filehandle, say with

```
open two, ">", "output-file";
```

then the output would have been sent to the filehandle as usual. Printing to a non-open filehandle elicits an optional warning from Perl, but as I mentioned, I have not enabled warnings, so the `print` silently fails, yielding a false value.

Had I enabled those optional warnings, we would have seen a plethora of them:

```
Unquoted string "two" may clash with future reserved word at -e line 1.
Unquoted string "two" may clash with future reserved word at -e line 1.
Unquoted string "five" may clash with future reserved word at -e line 1.
Name "main::two" used only once: possible typo at -e line 1.
Argument "five" isn't numeric in numeric eq (==) at -e line 1.
Argument "two" isn't numeric in numeric eq (==) at -e line 1.
print() on unopened filehandle two at -e line 1.
```

The crucial warning is the one at the end, advising us that the output of `print` was directed to the filehandle `two`, which was never opened for output.

Last night I gave a talk for the New York Perl Mongers, and got to see a number of people that I like but don't often see. Among these was Michael Fischer, who told me of a story about myself that I had completely forgotten, but I think will be of general interest.


The front end of the story is this: Michael first met me at some conference, shortly after the publication of Higher-Order Perl, and people were coming up to me and presenting me with copies of the book to sign. In many cases these were people who had helped me edit the book, or who had reported printing errors; for some of those people I would find the error in the text that they had reported, circle it, and write a thank-you note on the same page. Michael did not have a copy of my book, but for some reason he had with him a copy of Oulipo Compendium, and he presented this to me to sign instead.

Oulipo is a society of writers, founded in 1960, who pursue
“constrained writing”. Perhaps the best-known example is the
lipogrammatic novel *La Disparition*, written in 1969 by Oulipo
member Georges Perec, entirely without the use of the letter *e*.
Another possibly well-known example is the *Exercises in Style*
of Raymond Queneau, which retells the same vapid anecdote in 99
different styles. The book that Michael put in front of me to sign is
a compendium of anecdotes, examples of Oulipan work, and other
Oulipalia.

What Michael did not realize, however, was that the gods of fate were
handing me an opportunity. He says that I glared at him for a moment,
then flipped through the pages, *found the place in the book where I
was mentioned*, circled it, and signed that.

The other half of that story is how I happened to be mentioned in
*Oulipo Compendium*.

Back in the early 1990s I did a few text processing projects which would be trivial now, but which were unusual at the time, in a small way. For example, I constructed a concordance of the King James Bible, listing, for each word, the number of every verse in which it appeared. This was a significant effort at the time; the Bible was sufficiently large (around five megabytes) that I normally kept the files compressed to save space. This project was surprisingly popular, and I received frequent email from strangers asking for copies of the concordance.

Another project, less popular but still interesting, was an anagram dictionary. The word list from Webster's Second International dictionary was available, and it was an easy matter to locate all the anagrams in it, and compile a file. Unlike the Bible concordance, which I considered inferior to simply running `grep`, I still have the anagram dictionary. It begins:

```
aal ala
aam ama
Aarhus (See `arusha')
Aaronic (See `Nicarao')
Aaronite aeration
Aaru aura
```

And ends:

```
zoosporic sporozoic
zootype ozotype
zyga gazy
zygal glazy
```

The cross-references are to save space. When two words are anagrams of one another, both are listed in both places. But when three or more words are anagrams, the words are listed in one place, with cross-references in the other places, so for example:

```
Ateles teasel stelae saltee sealet
saltee (See `Ateles')
sealet (See `Ateles')
stelae (See `Ateles')
teasel (See `Ateles')
```

saves 52 characters over the unabbreviated version. Even with this optimization, the complete anagram dictionary was around 750 kilobytes, a significant amount of space in 1991. A few years later I generated an improved version, which dispensed with the abbreviation, by that time unnecessary, and which attempted, successfully I thought, to score the anagrams according to interestingness. But I digress.

One day in August of 1994, I received a query about the anagram dictionary, including a question about whether it could be used in a certain way. I replied in detail, explaining what I had done, how it could be used, and what could be done instead, and the result was a reply from Harry Mathews, another well-known member of the Oulipo, of which I had not heard before. Mr. Mathews, correctly recognizing that I would be interested, explained what he was really after:

A poetic procedure created by the late Georges Perec falls into the latter category. According to this procedure, only the 11 commonest letters in the language can be used, and all have to be used before any of them can be used again. A poem therefore consists of a series of 11 multi-word anagrams of, in French, the letters e s a r t i n u l o c (a c e i l n o r s t). Perec discovered only one one-word anagram for the letter-group, "ulcerations", which was adopted as a generic name for the procedure.

Mathews wanted, not exactly an anagram dictionary, but a list of words acceptable for the English version of "ulcerations". They should contain only the letters a d e h i l n o r s t, at most once each. In particular, he wanted a word containing precisely these eleven letters, to use as the translation of "ulcerations".

Producing the requisite list was much easier than producing the anagram dictionary itself, so I quickly did it and sent it back; it looked like this:

```
a A a
d D d
e E e
h H h
i I i
l L l
n N n
o O o
r R r
s S s
t T t
ad ad da
ae ae ea
ah Ah ah ha
...
lost lost lots slot
nors sorn
nort torn tron
nost snot
orst sort
adehl heald
adehn henad
adehr derah
adehs Hades deash sadhe shade
...
deilnorst nostriled
ehilnorst nosethirl
adehilnort threnodial
adehilnrst disenthral
aehilnorst hortensial
```

The leftmost column is the alphabetical list of letters. This is so that if you find yourself needing to use the letters 'a d e h s' at some point in your poem, you can jump to that part of the list and immediately locate the words containing exactly those letters. (It provides somewhat less help for discovering the shorter words that contain only some of those letters, but there is a limit to how much can be done with static files.)
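Those sorted-letter keys in the left column also show how such a list can be generated: keep only the words whose letters are drawn from a d e h i l n o r s t with no repeats, and index each one by its letters in alphabetical order. A quick sketch in Python (the original code, long gone, would presumably have been Perl):

```
ALLOWED = set("adehilnorst")

def acceptable(word):
    """True if word uses only the eleven allowed letters, at most once each."""
    letters = word.lower()
    return len(set(letters)) == len(letters) and set(letters) <= ALLOWED

def key(word):
    """The word's letters in alphabetical order -- the leftmost column above."""
    return "".join(sorted(word.lower()))
```

So `key("threnodial")` gives `adehilnort`, matching the entry in the list, and `acceptable("threnodials")` is true: all eleven letters, once each.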

As can be seen at the end of the list, there were three words that each used ten of the eleven required letters: “hortensial”, “threnodial”, and “disenthral”, but none with all eleven. However, Mathews replied:

You have found the solution to my immediate problem: "threnodial" may only have 10 letters, but the 11th letter is "s". So, as an adjectival noun, "threnodials" becomes the one and only generic name for English "Ulcerations". It is not only less harsh a word than the French one but a sorrowfully appropriate one, since the form is naturally associated with Georges Perec, who died 12 years ago at 46 to the lasting consternation of us all.

(A threnody is a hymn of mourning.)

A few years later, the *Oulipo Compendium* appeared, edited by
Mathews, and the article on Threnodials mentions my assistance. And
so it was that when Michael Fischer handed me a copy, I was able to
open it up to the place where I was mentioned.

[ Addendum 20140428: Thanks to Philippe Bruhat for some corrections: neither Perec nor Mathews was a founding member of Oulipo. ]



Intuitionistic logic is deeply misunderstood by people who have not studied it closely; such people often seem to think that the intuitionists were just a bunch of lunatics who rejected the law of the excluded middle for no reason. One often hears that intuitionistic logic rejects proof by contradiction. This is only half true, and the misconception arises from a typically classical misunderstanding of intuitionistic logic.

Intuitionists are perfectly happy to accept a reductio ad absurdum proof of the following form:

$$(P\to \bot)\to \lnot P$$

Here $\bot$ means an absurdity or a contradiction; $P\to \bot$ means that assuming
$P$ leads to absurdity, and $(P\to \bot)\to \lnot P$ means that if assuming $P$
leads to absurdity, then you can conclude that $P$ is false. This
is a classic proof by contradiction, and it is intuitionistically
valid. In fact, in many formulations of intuitionistic logic, $\lnot P$ is
*defined* to mean $P\to \bot$.

What is rejected by intuitionistic logic is the similar-seeming claim that:

$$(\lnot P\to \bot)\to P$$

This says that if assuming $\lnot P$ leads to absurdity, you can conclude
that $P$ is true. This is *not* intuitionistically valid.

This is where people become puzzled if they only know classical logic. “But those are the same thing!” they cry. “You just have to replace $P$ with $\lnot P$ in the first one, and you get the second.”

Not quite. If you replace $P$ with $\lnot P$ in the first one, you do not get the second one; you get:

$$(\lnot P\to \bot)\to \lnot \lnot P$$

People familiar with classical logic are so used to shuffling the $\lnot$ signs around and treating $\lnot\lnot P$ the same as $P$ that they often don't notice when they are doing it. But in intuitionistic logic, $P$ and $\lnot\lnot P$ are not the same. $\lnot\lnot P$ is weaker than $P$, in the sense that from $P$ one can always conclude $\lnot\lnot P$, but not always vice versa. Intuitionistic logic is happy to agree that if $\lnot P$ leads to absurdity, then $\lnot\lnot P$. But it does not agree that this is sufficient to conclude $P$.

As is often the case, it may be helpful to try to understand
intuitionistic logic as talking about provability instead of truth.
In classical logic, $P$ means that $P$ is true and $\lnot P$
means that $P$ is false. If $P$ is not false it is true, so $\lnot\lnot P$ and $P$
mean the same thing. But in intuitionistic logic $P$ means that $P$ is
*provable*, and $\lnot P$ means that $P$ is not provable. $\lnot\lnot P$ means that it is
impossible to prove that $P$ is not provable.

If $P$ is provable, it is certainly impossible to prove that $P$ is not provable. So $P$ implies $\lnot\lnot P$. But just because it is impossible to prove that there is no proof of $P$ does not mean that $P$ itself is provable, so $\lnot\lnot P$ does not imply $P$.

Similarly,

$$(P\to \bot)\to \lnot P $$

means that if a proof of $P$ would lead to absurdity, then we may conclude that there cannot be a proof of $P$. This is quite valid. But

$$(\lnot P\to \bot)\to P$$

means that if assuming that a proof of $P$ is impossible leads to absurdity, there must be a proof of $P$. But this itself isn't a proof of $P$, nor is it enough to prove $P$; it only shows that there is no proof that proofs of $P$ are impossible.
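The asymmetry is easy to machine-check. Here is a small sketch in Lean 4 (my illustration, not anything from the original discussion), where both intuitionistically valid directions go through without any classical axioms:

```
-- In Lean 4, ¬P is by definition P → False, so the intuitionistically
-- acceptable form of reductio is immediate:
example (P : Prop) (h : P → False) : ¬P := h

-- P implies ¬¬P: a proof of P refutes any alleged proof of ¬P.
example (P : Prop) (h : P) : ¬¬P := fun hn => hn h

-- The converse, ¬¬P → P, is exactly what intuitionistic logic refuses;
-- in Lean it is available only through a classical axiom such as
-- Classical.byContradiction.
```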


My current employer uses an online quiz to pre-screen applicants for open positions. The first question on the quiz is a triviality, just to let the candidate get familiar with the submission and testing system. The question is to write a program that copies standard input to standard output. Candidates are allowed to answer the questions using whatever language they prefer.

Sometimes we get candidates who get a zero score on the test. When I see the report that they failed to answer even the trivial question, my first thought is that this should not reflect badly on the candidate. Clearly, the testing system itself is so hard to use that the candidate was unable to submit even a trivial program, and this is a failure of the testing system and not the candidate.

But it has happened more than once that when I look at the candidate's incomplete submissions I see that the problem, at least this time, is not necessarily in the testing system. There is another possible problem that had not even occurred to me. The candidate failed the trivial question because they tried to write the answer in Java.

I am reminded of Dijkstra's remark that the teaching of BASIC should be rated as a criminal offense. Seeing the hapless candidate get bowled over by a question that should be a mere formality makes me wonder if the same might be said of Java.

I'm not sure. It's possible that this is still a failure of the quiz. It's possible that the Java programmers have valuable skills that we could use, despite their inability to produce even a trivial working program in a short amount of time. I could be persuaded, but right now I have a doubtful feeling.

When you learn Perl, Python, Ruby, or JavaScript, one of the things you learn is a body of technique for solving problems using hashes, which are an integral part of the language. When you learn Haskell, you similarly learn a body of technique for solving problems with lazy lists and monads. These kinds of powerful general-purpose tools are at the forefront of the language.

But when you learn Java, there aren't any powerful language features
you can use to solve many problems. Instead, you spend your time
learning a body of technique for solving problems *in the language*.
Java has hashes, but if you are aware of them at all, they are just
another piece of the immense `Collections` library, lost among the
many other sorts of collections, and you have no particular reason to
know about them or think about them. A good course of Java instruction
might emphasize the more useful parts of the Collections, but since
they're just another part of the library it may not be obvious that
hashes are any more or less useful than, say, `AbstractAction` or
`ZipOutputStream`.

I was a professional Java programmer for three years (in a different
organization), and I have meant for some time to write up my thoughts
about it. I am often very bitter and sarcastic, and I willingly admit
that I am relentlessly negative and disagreeable, so it can be hard to
tell when I am in earnest about liking something. I once tried to
write a complimentary article about
Blosxom, which has
generated my blog since 2006, and I completely failed; people thought
I was being critical, and I had to write a followup
article to clarify, and
people *still* thought I was dissing Blosxom. Because this article
about Java might be confused with sarcastic criticism, I must state
clearly that everything in this article about Java is in earnest, and
should be taken at face value. Including:

### I really like Java

I am glad to have had the experience of programming in Java. I liked
programming in Java mainly because I found it very relaxing. With a
bad language, like say Fortran or `csh`, you struggle to do anything
at all, and the language fights with you every step of the way
forward. With a good language there is a different kind of struggle,
to take advantage of the language's strengths, to get the maximum
amount of functionality, and to achieve the clearest possible
expression.

Java is neither a good nor a bad language. It is a mediocre language, and there is no struggle. In Haskell or even in Perl you are always worrying about whether you are doing something in the cleanest and the best way. In Java, you can forget about doing it in the cleanest or the best way, because that is impossible. Whatever you do, however hard you try, the code will come out mediocre, verbose, redundant, and bloated, and the only thing you can do is relax and keep turning the crank until the necessary amount of code has come out of the spout. If it takes ten times as much code as it would to program in Haskell, that is all right, because the IDE will generate half of it for you, and you are still being paid to write the other half.

So you turn the crank, draw your paycheck, and you don't have to worry about the fact that it takes at least twice as long and the design is awful. You can't solve any really hard design problems, but there is a book you can use to solve some of the medium-hard ones, and solving those involves cranking out a lot more Java code, for which you will also be paid. You are a coder, your job is to write code, and you write a lot of code, so you are doing your job and everyone is happy.

You will not produce anything really brilliant, but you will probably
not produce anything too terrible either. The project might fail, but
if it does you can probably put the blame somewhere else. After all,
you produced 576 classes that contain 10,000 lines of Java code, all
of it seemingly essential, so you were doing *your* job. And nobody
can glare at you and demand to know why you used 576 classes when you
should have used 50, because in Java doing it with only 50 classes is
probably impossible.

(Different languages have different failure modes. With Perl, the project might fail because you designed and implemented a pile of shit, but there is a clever workaround for any problem, so you might be able to keep it going long enough to hand it off to someone else, and then when it fails it will be their fault, not yours. With Haskell someone probably should have been fired in the first month for choosing to do it in Haskell.)

So yes, I enjoyed programming in Java, and being relieved of the responsibility for producing a quality product. It was pleasant to not have to worry about whether I was doing a good job, or whether I might be writing something hard to understand or to maintain. The code was ridiculously verbose, of course, but that was not my fault. It was all out of my hands.

So I like Java. But it is not a language I would choose for answering test questions, unless maybe the grade was proportional to the number of lines of code written. On the test, you need to finish quickly, so you need to optimize for brevity and expressiveness. Java is many things, but it is neither brief nor expressive.

When I see that some hapless job candidate struggled for 15 minutes and 14 seconds to write a Java program for copying standard input to standard output, and finally gave up, without even getting to the real questions, it makes me sad that their education, which was probably expensive, has not equipped them with better tools or to do something other than grind out Java code.

I was able to release a pretty nice piece of software today, courtesy
of my employer, ZipRecruiter. If you have a family of web pages, and
whenever you are looking at one you want to know when someone else is
looking at the same page, you can use my package. The package is
called `2banner`, because it pops up a banner on a page whenever two
people are looking at it. With permission from ZipRecruiter, I have
put it on GitHub, and you can download and use it for free.

A typical use case would be a customer service organization. Say your
users create requests for help, and the customer service reps have
to answer the requests. There is a web page with a list of all the
unserviced requests, and each one has a link to a page with details
about what is requested and how to contact the person who made the
request. But it might sometimes happen that Fred and Mary
independently decide to service the same request, which is at best a
waste of effort, and at worst confusing for the customer who gets
email from both Fred and Mary and doesn't know how to respond. With
`2banner`, when Mary arrives at the customer's page, she sees a banner
in the corner that says `Fred is already looking at this page!`, and
at the same time a banner pops up in Fred's browser that says `Mary
has started looking at this page!` Then Mary knows that Fred is
already on the case, and she can take over a different one, or Fred
and Mary can communicate and decide which of them will deal with this
particular request.

You can similarly trick out the menu page itself, to hide the menu items that someone is already looking at.

I wanted to use someone else's package for this, but I was not able to find one, so I wrote one myself. It was not as hard as I had expected. The system comprises three components:

1. The back-end database for recording who started looking at which pages and when. I assumed a SQL database and wrote a component that uses Perl's `DBIx::Class` module to access it, but it would be pretty easy to throw this away and use something else instead.

2. An API server that can propagate notifications like “user *X* is now looking at page *Y*” and “user *X* is no longer looking at page *Y*” into the database, and which can answer the question “who else is looking at page *Y* right now?”. I used Perl's Catalyst framework for this, because our web app already runs under it. It would not be too hard to throw this away and use something else instead. You could even write a standalone server using `HTTP::Server`, borrow most of the existing code, and probably have it working in under an hour.

3. A JavaScript thingy that lives in the web page, sends the appropriate notifications to the API server, and pops up the banner when necessary. I used jQuery for this. Probably there is something else you could use instead, but I have no idea what it is, because I know next to nothing about front-end web programming. I was happy to have the chance to learn a little about jQuery for this project.
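Conceptually, component 2 reduces to two operations. Here is a toy in-memory sketch of them in Python — the real thing is Perl, Catalyst, and a SQL table, and every name below is invented for illustration:

```
import time

# page -> {user: last-seen timestamp}; a stand-in for the SQL table.
viewers = {}
TIMEOUT = 30  # seconds without a re-announcement before a viewer is presumed gone

def looking(user, page, now=None):
    """Record that user is looking at page; return who else is there now."""
    now = time.time() if now is None else now
    seen = viewers.setdefault(page, {})
    seen[user] = now
    return sorted(u for u, t in seen.items() if u != user and now - t < TIMEOUT)

def gone(user, page):
    """Record that user has stopped looking at page."""
    viewers.get(page, {}).pop(user, None)
```

When Mary's browser announces her arrival, the answer to `looking("Mary", page)` is what drives her banner: it names Fred if he is already there.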

Often a project seems easy but the more I think about it the harder it seems. This project was the opposite. I thought it sounded hard, and I didn't want to do it. It had been an outstanding request of the CS people for some time, but I guess everyone else thought it sounded hard too, because nobody did anything about it. But the longer I let it percolate around in my head, the simpler it seemed. As I considered it, one difficulty after another evaporated.

Other than the jQuery stuff, which I had never touched before, the hardest part was deciding what to do about the API server. I could easily have written a standalone, special-purpose API server, but I felt like it was the wrong thing, and anyway, I didn't want to administer one. But eventually I remembered that our main web site is driven by Catalyst, which is itself a system for replying to HTTP requests, which already has access to our database, and which already knows which user is making each request.

So it was natural to say that the API was to send HTTP requests to certain URLs on our web site, and easy-peasy to add a couple of handlers to the existing Catalyst application to handle the API requests, query the database, and send the right replies.

I don't know why it took me so long to think of doing the API server
with Catalyst. If it had been someone else's suggestion I would
probably feel dumb for not having thought of it myself, because in
hindsight it is *so* obvious. Happily, I did think of it, because it
is clearly the right solution for us.

It was not too hard to debug. The three components are largely independent of one another. The draft version of the API server responded to GET requests, which are easier to generate from the browser than the POST requests that it should be getting. Since the responses are in JSON, it was easy to see if the API server was replying correctly.

I had to invent techniques for debugging the jQuery stuff. I didn't know the right way to get diagnostic messages out of jQuery, so I put a big text area on the web page, and had the jQuery routines write diagnostic messages into it. I don't know if that's what other people do, but I thought it worked pretty well. JavaScript is not my ideal language, but I program in Perl all day, so I am not about to complain. Programming the front end in JavaScript and watching stuff happen on the page is fun! I like writing programs that make things happen.

The package is in ZipRecruiter's GitHub repository, and is available under a very lenient free license. Check it out.

(By the way, I really like working for ZipRecruiter, and we are hiring Perl and Python developers. Please send me email if you want to ask what it is like to work for them.)

(This is a companion piece to my article about `DateTime::Moonpig` on
the Perl Advent Calendar today. One of the ways `DateTime::Moonpig`
differs from `DateTime` is by defaulting to UTC time instead of to
`DateTime`'s "floating" time zone. This article explains some of the
reasons why.)

Perl's `DateTime` module lets you create time values in a so-called
"floating" time zone. What this means really isn't clear. It would
be coherent for it to mean a time with an unknown or unspecified time
zone, but it isn't treated that way. If it were, you wouldn't be
allowed to compare "floating" times with regular times, or convert
"floating" times to epoch times. If "floating" meant "unspecified
time zone", the computer would have to honestly say that it didn't
know what to do in such cases. But it doesn't.

Unfortunately, this confused notion is the default.

Here are two demonstrations of why I don't like "floating" time zones.

### 1.

The behavior of the `set_time_zone` method may not be what you were
expecting, but it makes sense and it is useful:

```
my $a = DateTime->new( second => 0,
                       minute => 0,
                       hour => 5,
                       day => 23,
                       month => 12,
                       year => 2013,
                       time_zone => "America/New_York",
                     );
printf "The time in New York is %s.\n", $a->hms;
$a->set_time_zone("Asia/Seoul");
printf "The time in Seoul is %s.\n", $a->hms;
```

Here we have a time value and we change its time zone from New York to
Seoul. There are at least two reasonable ways to behave here. This
could simply change the time zone, leaving everything else the same,
so that the time changes from 05:00 New York time to 05:00 Seoul time.
*Or* changing the time zone could make other changes to the object so
that it represents the same absolute time as it did before: If I pick
up the phone at 05:00 in New York and call my Mother-in-Law in Seoul,
she answers the call at 19:00 in Seoul, so if I change the
object's time zone from New York to Seoul, it should change from 05:00
to 19:00.

`DateTime` chooses the second of these: setting the time zone retains
the absolute time stored by the object, so this program prints:

```
The time in New York is 05:00:00.
The time in Seoul is 19:00:00.
```

Very good. And we can get to Seoul by any route we want:

```
$a->set_time_zone("Europe/Berlin");
$a->set_time_zone("Chile/EasterIsland");
$a->set_time_zone("Asia/Seoul");
printf "The time in Seoul is still %s.\n", $a->hms;
```

This prints:

`The time in Seoul is still 19:00:00.`

We can hop all around the globe, but the object always represents 19:00 in Seoul, and when we get back to Seoul it's still 19:00.
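For comparison, Python's standard `datetime` module offers both behaviors explicitly, which makes the distinction easy to see. A quick sketch using the stdlib `zoneinfo` module (`astimezone` keeps the absolute time; `replace` keeps the wall-clock reading and swaps only the zone label):

```python
from datetime import datetime
from zoneinfo import ZoneInfo

ny = datetime(2013, 12, 23, 5, 0, 0, tzinfo=ZoneInfo("America/New_York"))

# Behavior 2 (what DateTime's set_time_zone does): keep the absolute
# time, change its representation.
seoul = ny.astimezone(ZoneInfo("Asia/Seoul"))
print(seoul.strftime("%H:%M:%S"))      # 19:00:00

# Behavior 1: keep the 05:00 wall-clock reading, change only the label.
relabeled = ny.replace(tzinfo=ZoneInfo("Asia/Seoul"))
print(relabeled.strftime("%H:%M:%S"))  # 05:00:00
```

Either behavior alone is defensible; the complaint below is about mixing the two.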

But now let's do the same thing with floating time zones:

```
my $b = DateTime->new( second => 0,
                       minute => 0,
                       hour => 5,
                       day => 23,
                       month => 12,
                       year => 2013,
                       time_zone => "America/New_York",
                     );
printf "The time in New York is %s.\n", $b->hms;
$b->set_time_zone("floating");
$b->set_time_zone("Asia/Seoul");
printf "The time in Seoul is %s.\n", $b->hms;
```

Here we take a hop through the imaginary "floating" time zone. The output is now:

```
The time in New York is 05:00:00.
The time in Seoul is 05:00:00.
```

The time has changed! I said there were at least two reasonable ways
to behave, and that `set_time_zone` behaves in the second reasonable
way. Which it does, except that conversions to the "floating" time
zone behave the first reasonable way. Put together, however, they are
unreasonable.

### 2.

```
use DateTime;

sub dec23 {
    my ($hour, $zone) = @_;
    return DateTime->new( second => 0,
                          minute => 0,
                          hour => $hour,
                          day => 23,
                          month => 12,
                          year => 2013,
                          time_zone => $zone,
                        );
}

my $a = dec23( 8, "Asia/Seoul" );
my $b = dec23( 6, "America/New_York" );
my $c = dec23( 7, "floating" );

printf "A is %s B\n", $a < $b ? "less than" : "not less than";
printf "B is %s C\n", $b < $c ? "less than" : "not less than";
printf "C is %s A\n", $c < $a ? "less than" : "not less than";
```

With DateTime 1.04, this prints:

```
A is less than B
B is less than C
C is less than A
```

There are non-transitive relations in the world, but comparison of
times is not among them. And if your relation is not transitive, you
have no business binding it to the `<` operator.
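It is instructive that Python's `datetime` takes the "honest" route argued for above: a zone-less ("naive") time, its nearest analogue of a floating time, simply cannot be ordered against a zoned ("aware") time. A small sketch:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

aware = datetime(2013, 12, 23, 8, 0, tzinfo=ZoneInfo("Asia/Seoul"))
naive = datetime(2013, 12, 23, 7, 0)  # no zone: a "floating" time

# Python refuses to guess what zone the naive time is in, so the
# comparison raises TypeError rather than returning a wrong answer.
try:
    print(aware < naive)
except TypeError:
    print("can't order naive and aware datetimes")
```

This is exactly the "computer honestly says it doesn't know" behavior that a coherent notion of unspecified time zone requires, and it rules out the non-transitive cycle above by construction.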

### However...

Rik Signes points out that the manual says:

If you are planning to use any objects with a real time zone, it is strongly recommended that you do not mix these with floating datetimes.

However, while a disclaimer in the manual can document incorrect
behavior, it does not annul it. A bug doesn't stop being a bug just
because you document it in the manual. I think it would have been
possible to implement floating times sanely, but `DateTime` didn't do
that.

[ Addendum: Rik has now brought to my attention that while the main
`->new` constructor defaults to the "floating" time zone, the `->now`
method always returns the current time in the UTC zone, which seems to
me to be a mockery of the advice not to mix the two. ]

Intuitionistic logic is deeply misunderstood by people who have not studied it closely; such people often seem to think that the intuitionists were just a bunch of lunatics who rejected the law of the excluded middle for no reason. One often hears that intuitionistic logic rejects proof by contradiction. This is only half true. It arises from a typically classical misunderstanding of intuitionistic logic.

Intuitionists are perfectly happy to accept a reductio ad absurdum proof of the following form:

$$(P→⟂)→ ¬P$$

Here ⟂ means an absurdity or a contradiction; P→⟂ means that assuming
P leads to absurdity, and (P→⟂)→¬P means that if assuming P
leads to absurdity, then you can conclude that P is false. This
is a classic proof by contradiction, and it is intuitionistically
valid. In fact, in many formulations of intuitionistic logic, ¬P is
*defined* to mean P→⟂.

What is rejected by intuitionistic logic is the similar-seeming claim that:

$$(¬P→⟂)→ P$$

This says that if assuming ¬P leads to absurdity, you can conclude
that P is true. This is *not* intuitionistically valid.

This is where people become puzzled if they only know classical logic. “But those are the same thing!” they cry. “You just have to replace P with ¬P in the first one, and you get the second.”

Not quite. If you replace P with ¬P in the first one, you do not get the second one; you get:

$$(¬P→⟂)→ ¬¬P$$

People familiar with classical logic are so used to shuffling the signs around and treating ¬¬P the same as P that they often don't notice when they are doing it. But in intuitionistic logic, P and ¬¬P are not the same. ¬¬P is weaker than P, in the sense that from P one can always conclude ¬¬P, but not always vice versa. Intuitionistic logic is happy to agree that if ¬P leads to absurdity, then ¬¬P. But it does not agree that this is sufficient to conclude P.

As is often the case, it may be helpful to try to understand
intuitionistic logic as talking about provability instead of truth.
In classical logic, P means that P is true and ¬P
means that P is false. If P is not false it is true, so ¬¬P and P
mean the same thing. But in intuitionistic logic P means that P is
*provable*, and ¬P means that P is not provable. ¬¬P means that it is
impossible to prove that P is not provable.

If P is provable, it is certainly impossible to prove that P is not provable. So P implies ¬¬P. But just because it is impossible to prove that there is no proof of P does not mean that P itself is provable, so ¬¬P does not imply P.

Similarly,

$$(P→⟂)→ ¬P $$

means that if a proof of P would lead to absurdity, then we may conclude that there cannot be a proof of P. This is quite valid. But

$$(¬P→⟂)→ P$$

means that if assuming that a proof of P is impossible leads to absurdity, there must be a proof of P. But this itself isn't a proof of P, nor is it enough to prove P; it only shows that there is no proof that proofs of P are impossible.
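The asymmetry also has a computational reading, via the Curry-Howard correspondence: an intuitionistic proof is a program, "not P" is a function from proofs of P to an (unconstructible) absurdity, and a proof of P → ¬¬P is the concrete one-line program below, while no general program of type ¬¬P → P can be written. A sketch, with "a proof of P" played by an ordinary value:

```python
# Curry-Howard sketch: "not P" is a function that consumes a proof of
# P and produces an absurdity.  The term  lambda p: lambda f: f(p)
# inhabits P -> ((P -> bottom) -> bottom), i.e. proves P -> not-not-P.
# No analogous closed term inhabits not-not-P -> P.
def not_not(p):
    # Given a proof p of P, refute any claimed refutation f of P
    # by handing f the very proof it says cannot exist.
    return lambda f: f(p)
```

The function never needs to inspect `p` or `f`; that is what makes the proof valid for every proposition P.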

There is a famous mistake of Augustin-Louis Cauchy, in which he is supposed to have "proved" a theorem that is false. I have seen this cited many times, often in very serious scholarly literature, and as often as not Cauchy's purported error is completely misunderstood, and replaced with a different and completely dumbass mistake that nobody could have made.

The claim is often made that Cauchy's *Cours d'analyse* of 1821
contains a "proof" of the following statement: a convergent sequence
of continuous functions has a continuous limit. For example, the
Wikipedia article on "uniform convergence" claims:

Some historians claim that Augustin Louis Cauchy in 1821 published a false statement, but with a purported proof, that the pointwise limit of a sequence of continuous functions is always continuous…

The nLab article on "Cauchy sum theorem" states:

Non-theorem (attributed to Cauchy, 1821). Let $f_1, f_2, \ldots$ be an infinite sequence of continuous functions from the real line to itself. Suppose that, for every real number $x$, the sequence $f_1(x), f_2(x), \ldots$ converges to some (necessarily unique) real number $f(x)$, defining a function $f$; in other words, the sequence converges pointwise to $f$. Then $f$ is also continuous.

Cauchy never claimed to have proved any such thing, and it beggars belief that Cauchy could have made such a claim, because the counterexamples are so many and so easily located. For example, the sequence $x^n$ on the interval $[0,1]$ is a sequence of continuous functions that converges everywhere on $[0,1]$ to a discontinuous limit. You would have to be a mathematical ignoramus to miss this, and Cauchy wasn't.
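The counterexample is easy to check numerically. A quick sketch with $f_n(x) = x^n$ on $[0,1]$:

```python
# Each f_n(x) = x**n is continuous on [0,1], but the pointwise limit
# is 0 for every x < 1 and 1 at x = 1, hence discontinuous at x = 1.
def f(x, n):
    return x ** n

print(f(0.9, 10_000))  # effectively 0
print(f(1.0, 10_000))  # exactly 1.0
```

However close to 1 you take x, the limit is still 0; only at x = 1 itself does the limit jump to 1.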

Another simple example, one that converges everywhere in $[0,1]$, is any sequence of functions $f_n$ that are everywhere zero, except that each has a (continuous) bump of height 1 at the origin, of width about $\frac1n$. As $n\to\infty$, the width of the bump narrows to zero, and the limit function is everywhere zero except that it is 1 at $x=0$. Anyone can think of this, and certainly Cauchy could have. A concrete example of this type is $e^{-nx^2}$, which converges to 0 everywhere except at $x=0$, where it converges to 1.

Cauchy's controversial theorem is not what Wikipedia or nLab claim.
It is that the pointwise limit of a convergent **series** of
continuous functions is always continuous. Cauchy is not claiming that
$\lim_{i\to\infty} f_i(x)$ must be continuous if the
limit exists and the $f_i$ are continuous. Rather, he claims that
$\sum_{i=1}^\infty f_i(x)$ must be continuous if
the sum converges and the $f_i$ are continuous. This is a
completely different claim. Its premise, that the sum converges, is
much stronger, and so the claim itself is much weaker, and so much
more plausible.

Here the counterexamples are not completely trivial. Probably the best-known counterexample is that a square wave (which has a jump discontinuity where the square part begins and ends) can be represented as a Fourier series.

(Cauchy was aware of this too, but it was new mathematics in 1821.
Lakatos and others have argued that the theorem, understood in the way
that continuity was understood in 1821, is not actually erroneous, but
that the idea of continuity has changed since then. One piece of
evidence strongly pointing to this conclusion is that nobody
complained about Cauchy's controversial theorem until 1847. But had
Cauchy somehow, against all probability, mistakenly claimed that a
*sequence* of continuous functions converges to a continuous limit,
you can be sure that it would not have taken the rest of the
mathematical world 26 years to think of the counterexample of
$x^n$.)

The confusion about Cauchy's controversial theorem arises from a
perennially confusing piece of mathematical terminology: a convergent
*sequence* is not at all the same as a convergent *series*. Cauchy
claimed that a convergent *series* of continuous functions has a
continuous limit. He did not ever claim that a convergent *sequence*
of continuous functions had a continuous limit. But I have often
encountered claims that he did, even though such claims are
extremely implausible.

The claim that Cauchy thought a sequence of continuous functions converges to a continuous limit is not only false but is manifestly so. Anyone making it has at best made a silly and careless error, and perhaps doesn't really understand what they are talking about, or hasn't thought about it.

[ I had originally planned to write about this controversial theorem in my series of articles about major screwups in mathematics, but the longer and more closely I looked at it the less clear it was that Cauchy had actually made a mistake. ]

(This post inaugurates a new section on my blog, for incomplete notes. It often happens that I have some idea, usually for software, and I write up a bunch of miscellaneous notes about it, and then never work on it. I'll use this section to post some of those notes, mainly just because I think they might be interesting, but also in the faint hope that someone might get interested and do something with it.)

### Why are simple SQL queries so verbose?

For example:

```
UPDATE batches b
join products p using (product_id)
join clients c using (client_id)
SET b.scheduled_date = NOW()
WHERE b.scheduled_date > NOW()
and b.batch_processor = 'batchco'
and c.login_name = 'mjd' ;
```

(This is 208 characters.)

I guess about two-thirds of this is unavoidable, but those join-using clauses ought to be omittable, or inferrable, or abbreviatable, or something.

`b.batch_processor` should be abbreviatable to at least
`batch_processor`, since that's the only field in those three tables
with that name. In fact it could probably be inferred from `b_p`.
Similarly `c.login_name` → `login_name` → `log` or `l_n`.

```
update batches set sch_d = NOW()
where sch_d > NOW()
and bp = 'batchco'
and cl.ln = 'mjd'
```

(Only 94 characters.)

`cl.ln` is inferrable: Only two tables begin with `cl`. None of the
field names in the `client_transaction_view` table look like `ln`.
So `cl.ln` unambiguously means `client.login_name`.

Then the question arises of how to join the batches to the clients. This is the only really interesting part of this project, and the basic rule is that it shouldn't do anything really clever. There is a graph, which the program can figure out from looking at the foreign key constraints. And the graph should clearly have a short path from batches through products to clients.
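That join inference can be sketched as a breadth-first search over the foreign-key graph. The graph below is hypothetical, using the table names from the example query; the real graph would be read from the schema's foreign-key constraints:

```python
from collections import deque

# Hypothetical foreign-key graph: an edge means the two tables share
# a key that can appear in a join-using clause.
FK_GRAPH = {
    "batches": ["products"],
    "products": ["batches", "clients"],
    "clients": ["products"],
}

def join_path(start, goal):
    # Breadth-first search, so the first path found is a shortest one,
    # matching the "short join paths are preferred" rule below.
    seen, queue = {start}, deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in FK_GRAPH.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(join_path("batches", "clients"))  # ['batches', 'products', 'clients']
```

Given the path, the program can emit the `join products using (product_id) join clients using (client_id)` clauses mechanically.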

`bp`

might be globally ambiguous, but it can be disambiguated by
assuming it's in one of the three tables involved.

If something *is* truly ambiguous, we can issue an intelligent request
for clarification:

```
"bp" is ambiguous. Did you mean:
1. batches.batch_processor
2. batches.bun_predictor
0. None of the above
which? _
```

### Overview

1. Debreviate table names
2. Figure out required joins and join fields
3. Debreviate field names

Can 1 and 2 really be separated? They can in the example above, but maybe not in general.

I think separating 3 and putting it at the end is a good idea: don't
try to use field name abbreviations to disambiguate and debreviate
table names. Only go the other way. But this means that we can't
debreviate `cl`, since it might be `client_transaction_view`.

What if something like `cl` were left as ambiguous after stage 1, and
disambiguated only in stage 3? Then information would be unavailable
to the join resolution, which is the part that I really want to work.

### About abbreviations

Abbreviations for `batch_processor`:

```
bp
b_p
ba_pr
batch_p
```

There is a tradeoff here: the more different kinds of abbreviations you accept, the more likely there are to be ambiguities.
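One possible matching rule, purely illustrative and not from the original notes: an abbreviation containing underscores must match the field name piece by piece, each piece by prefix; an abbreviation without underscores matches either as initials or as a plain prefix of the whole name.

```python
# Illustrative debreviation rule (an assumption, not the notes' spec):
#   "b_p", "ba_pr", "batch_p"  match "batch_processor" piecewise;
#   "bp"                       matches as initials;
#   "log"                      matches "login_name" as a plain prefix.
def matches(abbrev, name):
    parts = name.split("_")
    if "_" in abbrev:
        pieces = abbrev.split("_")
        return len(pieces) == len(parts) and all(
            part.startswith(piece) for part, piece in zip(parts, pieces)
        )
    initials = "".join(part[0] for part in parts)
    return abbrev == initials or name.startswith(abbrev)
```

Note that under this rule `bp` matches both `batch_processor` and `bun_predictor`, which is exactly the ambiguity the clarification prompt above is for.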

### About table inference

There could also be a preferences file that lists precedences for
tables and fields: if it lists `clients`, then anything that could
debreviate to `clients` or to `client_transaction_view`
automatically debreviates to `clients`. The first iteration could just
be a list of table names.

### About join inference

Short join paths are preferred to long join paths.

If it takes a long time to generate the join graph, cache it. Build it automatically on the first run, and then rebuild it on request later on.

### More examples

(this section blank)

### Implementation notes

Maybe convert the input to a `SQL::Abstract` first, then walk the
resulting structure, first debreviating names, then inserting joins,
then debreviating the rest of the names. Then you can output the text
version of the result if you want to.

Note that this requires that the input be valid SQL. Your original idea for the abbreviated SQL began with

```
update set batches.sch_d = NOW()
```

rather than

```
update batches set sch_d = NOW()
```

but the original version would probably be ruled out by this implementation. In this case that is not a big deal, but this choice of implementation might rule out more desirable abbreviations in the future.

Correcting dumb mistakes in the SQL language design might be in Quirky's purview. For example, suppose you do

```
select * from table where (something)
```

### Application notes

RJBS said he would be reluctant to use the abbreviated version of a query in a program. I agree: it would be very foolish to do so, because adding a table or a field might change the meaning of an abbreviated SQL query that was written into a program ten years ago and has worked ever since. This project was never intended to abbreviate queries in program source code.

Quirky is mainly intended for one-off queries. I picture it going into
an improved replacement for the MySQL command-line client. It *might*
also find use in throwaway programs. I also picture a command-line
utility that reads your abbreviated query and prints the debreviated
version for inserting into your program.

### Miscellaneous notes

(In the original document this section was blank. I have added here some notes I made in pen on a printout of the foregoing, on an unknown date.)

Maybe also abbreviate `update` => `u`, `where` => `w`, `and` => `&`.
This cuts the abbreviated query from 94 to 75 characters.

Since debreviation is easier [than join inference] do it first!

Another idea: "`id`" always means the main table's primary key field.

(This article was previously published at the Perl Advent Calendar on 2013-12-23.)

The DateTime suite is an impressive tour de force, but I hate its interface. The methods it provides are usually not the ones you want, and the things it makes easy are often things that are not useful.

### Mutators

The most obvious example is that it has too many mutators. I believe
that date-time values are a kind of number, and should be treated like
numbers. In particular they should be immutable. Rik Signes has
a hair-raising story about an accidental mutation that caused a
hard-to-diagnose bug, because the `add_duration` method modifies the
object on which it is called, instead of returning a new object.

### DateTime::Duration

But the most *severe* example, the one that drives me into a rage, is
that the `subtract_datetime` method returns a `DateTime::Duration`
object, and this object is never what you want, because it is
impossible to use it usefully.

For example, suppose you would like to know how much time elapses between 1969-04-02 02:38:17 EST and 2013-12-25 21:00:00 EST. You can set up the two DateTime objects for the time, and subtract them using the overloaded minus operator:

```
#!perl
my ($a) = DateTime->new( year => 1969, month => 04, day => 02,
                         hour => 2, minute => 38, second => 17,
                         time_zone => "America/New_York" ) ;
my ($b) = DateTime->new( year => 2013, month => 12, day => 25,
                         hour => 21, minute => 0, second => 0,
                         time_zone => "America/New_York" ) ;
my $diff = $b - $a;
```

Internally this invokes `subtract_datetime` to yield a
DateTime::Duration object for the difference. The
DateTime::Duration object `$diff` will contain the information
that this is a difference of 536 months, 23 days, 1101 minutes, and 43
seconds, a fact which seems to me to be of very limited usefulness.

You might want to know how long this interval is, so you can compare
it to similar intervals. So you might want to know how many seconds
this is. It happens that the two times are exactly 1,411,669,328
seconds apart, but there's no way to get the `$diff` object to
tell you this.

It *seems* like there are methods that will get you the actual
elapsed time in seconds, but none of them will do it. For example,
`$diff->in_units('seconds')` looks promising, but will
return 43, which is the 43 seconds left over after you've thrown away
the 536 months, 23 days, and 1101 minutes. I don't know what the use
case for this is supposed to be.

And indeed, no method can tell you how long the duration really is, because the subtraction has thrown away all the information about how long the days and months and years were—days, months and years vary in length—so it simply doesn't know how much time this object actually represents.

Similarly if you want to know how many days there are between the
two dates, the DateTime::Duration object won't tell you because it
can't tell you. If you had the elapsed seconds difference, you could
convert it to the correct number of days simply by dividing by 86400
and rounding off. This works because, even though days vary in
length, they don't vary by much, and the variations cancel out over
the course of a year. If you do this you find that the elapsed number
of days is approximately 16338.7653, which rounds off to 16338 or
16339 depending on how you want to treat the 18-hour time-of-day
difference. This result is not *quite* exact, but the error is on
the order of 0.000002%. So the elapsed seconds are useful, and you
can compute other useful values with them, and get useful answers. In
contrast, DateTime::Duration's answer of "536 months and 23 days"
is *completely* useless because months vary in length by nearly 10%
and DateTime has thrown away the information about how long the
months were. The best you can do to guess the number of days from
this is to multiply the 536 months by 30.4375, which is the average
number of days in a month, and add 23. This is clumsy, and gets you
16337.5 days—which is close, but wrong.
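The elapsed-seconds arithmetic is easy to reproduce. Here it is in Python, whose `datetime` subtraction is POSIX-style and ignores leap seconds, so its total comes out 25 seconds lower than the figure above; the discrepancy is presumably the 25 leap seconds between the two instants, which DateTime's absolute subtraction accounts for:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

NY = ZoneInfo("America/New_York")
a = datetime(1969, 4, 2, 2, 38, 17, tzinfo=NY)
b = datetime(2013, 12, 25, 21, 0, 0, tzinfo=NY)

# POSIX-style difference: leap seconds are not counted.
seconds = (b - a).total_seconds()
print(int(seconds))               # 1411669303 (25 less than 1,411,669,328)
print(round(seconds / 86400, 4))  # about 16338.765 days
```

Dividing the seconds by 86400 recovers the approximately 16338.765 elapsed days discussed above, which is the useful number that the DateTime::Duration object cannot provide.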

To get what I consider a useful answer out of the DateTime objects
you *must not* use the overloaded subtraction operator; instead you
must do this:

```
#!perl
$b->subtract_datetime_absolute($a)->in_units('seconds')
```

### What's DateTime::Moonpig for?

`DateTime::Moonpig` attempts to get rid of the part of DateTime I
don't like and keep the part I do like, by changing the interface and
leaving the internals alone. I developed it for the *Moonpig*
billing system that Rik Signes and I did; hence the name.

DateTime::Moonpig introduces five main changes to the interface of DateTime:

1. Most of the mutators are gone. They throw fatal exceptions if you try to call them.

2. The overloaded addition and subtraction operators have been changed to eliminate DateTime::Duration entirely. Subtracting two DateTime::Moonpig objects yields the difference in seconds, as an ordinary Perl number. This means that instead of

    ```
    #!perl
    $x = $b->subtract_datetime_absolute($a)->in_units('seconds')
    ```

    one can write

    ```
    #!perl
    $x = $b - $a
    ```

    From here it's easy to get the approximate number of days difference: just divide by 86400. Similarly, dividing this by 3600 gets the number of hours difference.

    There is nothing wrong with the way DateTime overloads `<` and `>`, so DateTime::Moonpig leaves those alone.

3. An integer number of seconds can be added to or subtracted from a DateTime::Moonpig object; this yields a new object representing a time that is that many seconds later or earlier. Writing `$date + 2` is much more convenient than writing `$date->clone->add( seconds => 2 )`. If you are not concerned with perfect exactness, you can write

    ```
    #!perl
    sub days { $_[0] * 86400 }
    my $tomorrow = $now + days(1);
    ```

    This might be off by an hour if there is an intervening DST change, or by a second if there is an intervening leap second, but in many cases one simply doesn't care.

4. The constructor is extended to accept an epoch time such as is returned by Perl's built-in `time()` or `stat()` functions. This means that one can abbreviate this:

    ```
    #!perl
    DateTime->from_epoch( epoch => $epoch )
    ```

    to this:

    ```
    #!perl
    DateTime::Moonpig->new( $epoch )
    ```

5. The default time zone has been changed from DateTime's "floating" time zone to UTC. I think the "floating" time zone is a mistake, and best avoided. It has bad interactions with `set_time_zone`, which `DateTime::Moonpig` does *not* disable, because it is not actually a mutator, unless you use the "floating" time zone. An earlier blog article discusses this.

I also added a few additional methods I found convenient. For example there is a `$date->st` that returns the date and time in the format `YYYY-MM-DD HH:MM:SS`, which is sometimes handy for quick debugging. (The `st` is for "string".)

Under the covers, it is all just DateTime objects, which seem to do
what one needs. Other than the mutators, all the many DateTime
methods work just the same; you are even free to use
`->subtract_datetime` to obtain a DateTime::Duration object if you
enjoy being trapped in an absurdist theatre production.

When I first started this module, I thought it was likely to be a
failed experiment. I expected that the Moonpig::DateTime objects
would break once in a while, or that some operation on them would return
a DateTime instead of a Moonpig::DateTime, which would cause
some later method call to fail. But to my surprise, it worked well.
It has been in regular use in *Moonpig* for several years.

I recently split it out of *Moonpig*, and released it to CPAN. I
will be interested to find out if it works well in other contexts. I
am worried that disabling the mutators has left a gap in functionality
that needs to be filled by something else. I will be interested to
hear reports from people who try.
