One of the loudest criticisms I hear about C++ is that it doesn't have garbage collection. In other words, whenever a program allocates memory, some other part of that program has to figure out when to free it.
Of course, that figuring out can often be automated. As an obvious example, every standard-library container class keeps track of its own memory and frees it as needed. Nevertheless, it is hard to resist the belief that automatically freeing memory would make life a lot simpler for C++ programmers.
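The container behavior mentioned above can be seen in a tiny sketch (the function name is hypothetical, invented for illustration): a vector allocates memory as it grows and frees that memory automatically when it goes out of scope, with no explicit delete anywhere.

```cpp
#include <string>
#include <vector>

// join_lines is a made-up example: the vector allocates memory as
// elements are appended, and its destructor frees all of that memory
// when v goes out of scope at the end of the function.
std::string join_lines() {
    std::vector<std::string> v;
    v.push_back("first");
    v.push_back("second");

    std::string result;
    for (const auto& s : v)
        result += s + "\n";
    return result;
}   // v's memory is released here, automatically
```

This is the sense in which "figuring out when to free memory" can be automated without a garbage collector: the container's destructor does the bookkeeping.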
When you find that your code's behavior doesn't match the documentation, which one do you change?
There is a school of thought that says that when you set out to write a program, you should write the tests first. Your goal is then to write the simplest program that passes the tests. Once you have done so, you're done.
This approach is certainly appealing. In particular, having automated tests that capture as much as possible of a program's desired behavior is an excellent idea. But what if a program has to have a characteristic that you don't know how to test?
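To make the test-first idea concrete, here is a minimal sketch (the function name and the specific tests are hypothetical): the assertions are written first and act as the specification, and the function is the simplest code that passes them.

```cpp
#include <cassert>

// A test-first sketch: my_abs is a made-up example function.
// The tests in test_my_abs were (notionally) written before the
// function itself, and together they serve as its specification.
int my_abs(int n) { return n < 0 ? -n : n; }

void test_my_abs() {
    assert(my_abs(5) == 5);
    assert(my_abs(-5) == 5);
    assert(my_abs(0) == 0);
}
```

The catch the paragraph above raises is exactly what such a sketch cannot show: properties like portability or responsiveness don't reduce to a handful of assertions.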
I was just reading an article on the website of a major newspaper that I'd rather not name. I finished the first page, clicked on the second, and got:
Sorry for the interruption.
Hello! You have been chosen to participate in
an important survey from <name deleted>, a
respected 3rd party research company.
This is for RESEARCH PURPOSES ONLY.
We are not selling anything. Your answers are grouped anonymously.
followed by "yes" and "no" buttons.
A strong influence on how I think about programming is a 1976 book by Edsger Dijkstra named A Discipline of Programming. Most of this book is a series of elegant solutions to simple problems, but parts of it are philosophical essays, some of which have stuck with me in the more than 35 years since I first read them.
A well-known software design principle is "Divide and conquer"--the notion that dividing a large problem into smaller problems can yield a solution even when the overall problem is too hard to attack directly. This strategy is often combined with recursion. For example, one widely used sorting algorithm divides the data to be sorted into two approximately equal pieces, sorts each piece, and finally merges the two sorted pieces.
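The sorting algorithm just described is merge sort; a minimal sketch of it follows (using the standard library's inplace_merge for the final step):

```cpp
#include <algorithm>
#include <vector>

// Merge sort, the divide-and-conquer sort described above:
// split the range [lo, hi) in half, sort each half recursively,
// then merge the two sorted halves.
void merge_sort(std::vector<int>& v, std::size_t lo, std::size_t hi) {
    if (hi - lo < 2)
        return;                              // 0 or 1 elements: already sorted
    std::size_t mid = lo + (hi - lo) / 2;
    merge_sort(v, lo, mid);                  // sort the first half
    merge_sort(v, mid, hi);                  // sort the second half
    std::inplace_merge(v.begin() + lo,       // merge the two sorted halves
                       v.begin() + mid,
                       v.begin() + hi);
}
```

Each recursive call solves a strictly smaller piece of the problem, which is what makes the strategy terminate.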
However, sometimes the division doesn't have quite the intended effect. Each individual piece of the problem might be solved correctly, but combining the solutions yields a surprise.
We have seen that it's not easy to collect several players into teams that have well-defined strength. Such surprises come up even when we're not dealing with teams. Indeed, they happen even in seemingly straightforward situations such as elections.
Now that we've seen how trying to combine plausible-sounding rules can surprise APL programmers, let's look at a more mundane example: a tournament to find out which of several teams is the strongest. We shall learn that it is hard to avoid surprising results, even in simple cases.
It's hard to study software design for long without coming across the principle of least surprise, sometimes called the principle of least astonishment. Both terms refer to the idea that a system should cause as little surprise as possible for someone who doesn't already know how it behaves.
This idea is a good one most of the time. However, sometimes there is a good and simple reason for behavior that is surprising at first glance. In such cases, following the principle of least surprise may introduce extra complexity into the system and make its behavior more surprising in the long run.
Experienced programmers usually think that warning messages from compilers are a Good Thing. After all, such messages let us know when we have written programs that might not do what we expected, and sometimes save us from hours of debugging.
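A classic example of the kind of mistake such warnings catch (a hypothetical snippet, not taken from any particular program): a typo that turns a comparison into an assignment. The code compiles, but most compilers will warn about an assignment used as a condition.

```cpp
// classify is a made-up function illustrating a common typo:
// the author meant x == 0, but wrote x = 0.  The condition
// assigns 0 to x, which is false, so the first branch never runs.
// Compilers typically warn here ("assignment used as a condition"),
// which can save hours of debugging.
int classify(int x) {
    if (x = 0)          // typo: should be x == 0
        return 1;
    return 2;
}
```

Because of the typo, classify returns 2 no matter what argument it is given--exactly the sort of silent misbehavior a warning flags before the program ever runs.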
My friends tell me that I'm good at making things break in surprising ways. Here's an example.
My last post generated a few comments, one of which made a suggestion for what should be the second rule of debugging. I agree with that suggestion, and will elaborate on it later. But there is another rule that is even more important--so important that I see little alternative to calling it the zeroth rule.
If you've been programming for more than, oh, a few minutes, you've discovered that your programs don't always work as you intended.
Most beginners' first reaction when they make such discoveries is to think that there is something wrong with the compiler. After being disabused of that notion, they will reluctantly go looking for the problem.
However, my experience is that many programmers, especially beginners, make a critical mistake when they go on their bug hunts.
To recap: I wanted to upgrade my computer's graphics card. First the computer manufacturer recommended a card that wouldn't fit, then they replaced it with a card that fit but required a power-supply upgrade, and the new power supply didn't fit.
Is the third time the charm?
In Part 1, I tried to upgrade my graphics card, only to be steered wrong by my computer manufacturer's website. The saga continues with my trying to swap the wrong card they had sold me for the right one.