Date: Thursday, 09 Oct 2014 13:55

In days of yore, Yacc and Lex were two of the most useful tools in a Unix hacker’s kit. The way they interfaced to client code was, however, pretty ugly – global variables and magic macros hanging out all over the place. Their modern descendants, Bison and Flex, have preserved that ugliness in order to be backward-compatible.

That rebarbative old interface generally broke a lot of rules about program structure and information hiding that we now accept as givens (to be fair, most of those had barely been invented at the time it was written in 1970 and were still pretty novel). It becomes a particular problem if you want to run multiple instances of your generated parser (or, heaven forfend, multiple parsers with different grammars) in the same binary without having them interfere with each other.

But it can be done. I’m going to describe how because (a) it’s difficult to extract from the documentation, and (b) right now (that is, using Bison 3.0.2 and Flex 2.5.35) the interface is in fact slightly broken and there’s a workaround you need to know.

First, motivation. I had to figure out how to do this for cvs-fast-export, which contains a Bison/Flex grammar that parses CVS master files – often thousands of them in a single run. In the never-ending quest for faster performance (because there are some very old, very gnarly, and very large CVS repositories out there that we would nevertheless like to be able to convert without grinding at them for days or weeks) I have recently been trying to parallelize the parsing stage. The goal is (a) to be able to spread the job across multiple processors so the work gets done faster, and (b) not to allow I/O waits for some masters being parsed to block compute-intensive operations on others.

In order to make this happen, cvs-fast-export has to manage a bunch of worker threads with a parser instance inside each one. And in order for that to work, the yyparse() and yylex() driver functions in the generated code have to be reentrant. No globals allowed; they have to keep their parsing and lexing state in purely stack- and thread-local storage, and deliver their results back the way ordinary reentrant C functions would do it (that is, through structures referenced by pointer arguments).

Stock Yacc and Lex couldn’t do this. A very long time ago I wrote a workaround – a tool that would hack the code they generated to encapsulate it. That hack is obsolete because (a) nobody uses those heirloom versions any more, and (b) Bison/Flex have built-in support for this. If you read the docs carefully. And it’s partly broken.

Here’s how you start. In your Bison grammar, you need to include something that begins with these options:

%define api.pure full
%lex-param {yyscan_t scanner}
%parse-param {yyscan_t scanner}

Here, yyscan_t is (in effect) a special private structure used to hold your scanner state. (That’s a slight fib, which I’ll rectify later; it will do for now.)

And your Flex specification must contain these options:

%option reentrant bison-bridge

These are the basics required to make your parser re-entrant. The signatures of the parser and lexer driver functions change from yyparse() and yylex() (no arguments) to these:

yyparse(yyscan_t scanner)
yylex(YYSTYPE *yylval_param, yyscan_t yyscanner)

A yyscan_t is, as promised, your scanner state – in reality a pointer (a typedef’d void *) to a private structure that holds it, which is the rectification of the earlier fib; yylval is where yylex() will put its token value when it’s called by yyparse().

You may be puzzled by the fact that the %lex-param declaration says ‘scanner’ but the scanner-state argument ends up being named ‘yyscanner’. That’s reasonable; I’m a bit puzzled by it myself. In the generated scanner code, if there is a scanner-state argument (forced by %option reentrant) it is always the first one and it is always named yyscanner regardless of what the first %lex-param declaration says – that first declaration seems to be a placeholder. In contrast, the first argument name in the %parse-param declaration actually gets used as is.

You must call yyparse() like this:

    yyscan_t myscanner;

    yylex_init(&myscanner);
    yyparse(myscanner);
    yylex_destroy(myscanner);

The yylex_init() call sets myscanner to hold the address of a private malloc’d block holding scanner state; that’s why you have to destroy it explicitly with yylex_destroy().

The old-style global variables, like yyin, become macros that reference members of the yyscan_t structure; for a more modern look you can use accessor functions instead. For yyin that is the pair yyget_in() and yyset_in().
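
For instance, a caller that wants to feed the scanner from a particular file can go through the accessors like this (a minimal sketch of my own, not code from cvs-fast-export; the file name is made up, while yyset_in() is the accessor Flex generates for reentrant scanners):

    yyscan_t myscanner;
    FILE *fp = fopen("somefile,v", "r");    /* hypothetical input file */

    yylex_init(&myscanner);
    yyset_in(fp, myscanner);    /* the reentrant replacement for "yyin = fp" */
    yyparse(myscanner);
    yylex_destroy(myscanner);
    fclose(fp);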

I hear your question coming. “But, Eric! How do I pass stuff out of the parser?” Good question. My Bison declarations actually look like this:

%define api.pure full
%lex-param {yyscan_t scanner} {cvs_file *cvsfile}
%parse-param {yyscan_t scanner} {cvs_file *cvsfile}

Notice there’s an additional argument. This should change the function signatures to look like this:

yyparse(yyscan_t scanner, cvs_file *cvs)
yylex(YYSTYPE *yylval_param, yyscan_t yyscanner, cvs_file *cvs)

Now you will call yyparse() like this:

    yyscan_t myscanner;

    yylex_init(&myscanner);
    yyparse(myscanner, mycvs);
    yylex_destroy(myscanner);

Voilà! The cvs argument will now be visible to the handler functions you write in your Bison grammar and your Lex specification. You can use it to pass data to the caller – typically, as in my code, the type of the second argument will be a structure of some sort. The documentation insists you can add more curly-brace-wrapped argument declarations to %lex-param and %parse-param to declare multiple extra arguments; I have not tested this.

Anyway, the ‘yyscanner’ argument will be visible to the helper code in your lex specification. The ‘scanner’ argument will be visible, under its right name, in your Bison grammar’s handler code. In both cases it is the handle you pass to accessors like yyget_in() and yyget_lineno(). The cvs argument, as noted before, will be visible in both places.

There is, however, one gotcha (and yes, I have filed a bug report about it). Bison should arrange things so that all the %lex-param information is automatically passed to the generated parser and scanner code via the header file Bison generates (which is typically included in the C preambles to your Bison grammar and Lex specification). But it does not.

You have to work around this, until it’s fixed, by defining a YY_DECL macro that expands to the correct prototype and is #included by both generated source code files. When those files are expanded by the C preprocessor, the payload of YY_DECL will be put in the correct places.

Mine, which corresponds to the second set of declarations above, looks like this:

#define YY_DECL int yylex \
	(YYSTYPE * yylval_param, yyscan_t yyscanner, cvs_file *cvs)

There you have it. Reentrancy, proper information hiding – it’s not yer father’s parser generator. For the fully worked example, see the following files in the cvs-fast-export sources: gram.y, lex.l, cvs.h, and import.c.

Mention should be made of Make a reentrant parser with Flex and Bison, another page on this topic. The author describes a different technique requiring an uglier macro hack. I wasn’t able to make it work, but it started me looking in approximately the right direction.

Author: "Eric Raymond" Tags: "Software"
Date: Thursday, 09 Oct 2014 06:59

A bit late, because I’ve been hammering on some code the last several days. But here it is: Time, Clock, and Calendar Programming In C.

Suggestions for 1.1 revisions and improvements will of course be cheerfully accepted. Comments here or email will be fine.

Author: "Eric Raymond" Tags: "Software"
Date: Monday, 06 Oct 2014 22:49

This landed in my mailbox yesterday. I reproduce it verbatim except for the sender’s name.

> Dear authors of the RFC 3092,
>
> I am writing this email on behalf of your Request For Comment “Etymology of
> ‘Foo’.” We are currently learning about the internet organizations that set
> the standards of the internet and our teacher tasked us with finding an RFC
> that was humorous. Me and my two friends have found the “Etymology of
> ‘Foo'” and have found it to be almost as ridiculous as the RFC about
> infinite monkeys; however, we then became quite curious as to why you wrote
> this. Obviously, it is wrote for humor as not everything in life can be
> serious, but did your manager task you to write this? Are you a part of an
> organization in charge of writing humorous RFC’s? Are you getting paid to
> write those? If so, where do you work, and how may we apply? Any comments
> on these inquiries would be greatly appreciated and thank you in advance.
>
> Sincerely,
>
> XXXXXXXXXXXXXX, confused Networking student

I felt as though this seriously demanded a ha-ha-only-serious answer – and next thing you know I was channeling Master Po from the old Kung Fu TV series. Reply follows…


Don may have his own answer, but I have one you may find helpful.

There is a long tradition of writing parody RFCs on April 1st. No
manager tasks us to write these; they arise as a form of folk art
among Internet hackers. I think my personal favorite is still RFC1149
"A Standard for the Transmission of IP Datagrams on Avian Carriers"
from 1 April 1990, universally considered a classic of the joke-RFC
form.

As to why we write these...ah, grasshopper, that is not for us to
explain but for you to experience. If and when you achieve the
hacker-nature, you will understand.

Sadly, odds are Confused Networking Student is too young to get the “grasshopper” reference. (Unless Kung Fu is still in reruns out there, which I wouldn’t know because I basically gave up on TV decades ago.) One hopes the Zen-master schtick will be recognizable anyway.

Update: There is a relevant compilation from the show on YouTube.

Author: "Eric Raymond" Tags: "Hacker Culture"
Date: Friday, 03 Oct 2014 17:15

In the process of working on my Time, Clock, and Calendar Programming In C document, I have learned something sad but important: the standard Unix calendar API is irremediably broken.

The document lists a lot of consequences of the breakage, but here I want to zero in on what I think are the primary causes. That is: the standard struct tm (a) fails to be an unambiguous representation of time, and (b) violates the SPOT (Single Point of Truth) design rule. It has some other more historically contingent problems as well, but these problems (and especially (a)) are the core of its numerous failure modes.

These problems cannot be solved in a backwards-compatible way. I think it’s time for a clean-sheet redesign. In the remainder of this post I’ll develop what I think the premises of the design ought to be, and some consequences.

The functions we are talking about here are tzset(), localtime(3), gmtime(3), mktime(3), strftime(3), and strptime() – everything (ignoring some obsolete entry points) that takes a struct tm argument and/or has timezone issues.

The central problem with this group of functions is the fact that the standard struct tm (what manual pages hilariously call “broken-down-time”) was designed to hold a local time/date without an offset from UTC time. The consequences of this omission cascade through the entire API in unfortunate ways.

Here are the standard members:

struct tm 
{
    int    tm_sec;   /* seconds [0,60] (60 for + leap second) */
    int    tm_min;   /* minutes [0,59] */
    int    tm_hour;  /* hour [0,23] */
    int    tm_mday;  /* day of month [1,31] */
    int    tm_mon;   /* month of year [0,11] */
    int    tm_year;  /* years since 1900 */
    int    tm_wday;  /* day of week [0,6] (Sunday = 0) */
    int    tm_yday;  /* day of year [0,365] */
    int    tm_isdst; /* daylight saving flag */
};

The presence of the day of year and day of week members violates SPOT. This leads to some strange behaviors – mktime(3) “normalizes” its input structure by fixing up these members. This can produce subtle gotchas.
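
Here’s a minimal sketch (mine, not from the post) of that gotcha: hand mktime(3) a nonsense date and it silently rewrites the structure, recomputing the redundant members behind your back.

    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        struct tm t = {0};
        t.tm_year  = 2014 - 1900;   /* years since 1900 */
        t.tm_mon   = 0;             /* January (0-origin) */
        t.tm_mday  = 32;            /* deliberately out of range */
        t.tm_isdst = -1;            /* let the library work out DST */

        (void)mktime(&t);           /* "normalizes" the structure in place */

        /* Prints 2014-02-01, wday=6, yday=31 – the bogus date has been
           rewritten and the SPOT-violating members recomputed. */
        printf("%04d-%02d-%02d, wday=%d, yday=%d\n",
               t.tm_year + 1900, t.tm_mon + 1, t.tm_mday, t.tm_wday, t.tm_yday);
        return 0;
    }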

Also, note that there is no way to represent dates with subsecond precision in this structure. Therefore strftime(3) cannot format them and strptime(3) cannot parse them.

The GNU C library takes a swing at the most serious problem by adding a GMT offset member, but only half-heartedly. Because it is concerned with maintaining backward compatibility, that member is underused.

Here’s what I think it ought to look like instead:

struct gregorian 
{
    float  sec;     /* seconds [0,60] (60 for + leap second) */
    int    min;     /* minutes [0,59] */
    int    hour;    /* hour [0,23] */
    int    mday;    /* day of month [1,31] */
    int    mon;     /* month of year [1,12] */
    int    year;    /* years Gregorian */
    int    zoffset; /* zone offset, seconds east of Greenwich */
    char   *zone;   /* zone name or NULL */
    int    dst;     /* daylight saving offset, seconds */
};

Some of you, I know, are looking at the float seconds member and bridling. What about roundoff errors? What about comparisons? Here’s where I introduce another basic premise of the redesign: integral floats are safe to play with.

That wasn’t true when the Unix calendar API was designed, but IEEE754 solved the problem. Most modern FPUs are well-behaved on integral quantities. There is not in fact a fuzziness risk if you stick to integral seconds values.
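
A quick demonstration of the premise (my own sketch, not from the post): IEEE 754 single-precision floats represent whole numbers exactly up to 2^24 and doubles up to 2^53, so arithmetic and comparison on integral seconds values stay exact.

    #include <assert.h>

    int main(void)
    {
        float sec_a = 59.0f, sec_b = 60.0f;   /* leap-second slot included */
        assert(sec_b - sec_a == 1.0f);        /* exact; no epsilon needed */

        double t = 1412859300.0;              /* whole seconds since the epoch */
        assert(t + 60.0 - t == 60.0);         /* still exact at double precision */
        return 0;
    }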

The other way to handle this – the classic Unix way – would have been to add a decimal subseconds member in some unit, probably nanoseconds in 2014. The problem with this is that it’s not future-proof. Who’s to say we won’t want finer resolution in a century?

Yes, this does mean decimal subsecond times will have round-off issues when you do certain kinds of arithmetic on them. I think this is tolerable in calendar dates, where subsecond arithmetic is an unusual thing to do.

The above structure fixes some quirks and inconsistencies. The silly 1900 offset for years is gone. Time divisions of a day or larger are consistently 1-origin as humans expect; this will reduce problems when writing and reading debug messages. SPOT is restored for the calendar portion of dates.

The zoffset/zone/dst group do not have the SPOT property – zone can be inconsistent with the other two members. This is, alas, unavoidable if we’re going to have a zone member at all, which is pretty much a requirement in order for the analogs of strftime(3) and strptime() to have good behavior.

Now I need to revisit another basic assumption of the Unix time API: that the basic time type is integral seconds since the epoch. In the HOWTO I pointed out that this assumption made sense in a world of 32-bit registers and expensive floating point, but no longer in a world of 64-bit machines and cheap floating point.

So here’s the other basic decision: the time scalar for this library is quad-precision seconds since the epoch in IEEE 754 (that is, 112 bits of mantissa).

Now we can begin to sketch some function calls. Here are the basic two:

struct gregorian *unix_to_gregorian(double time, struct gregorian *date, char *zone)

Float seconds since epoch to broken-down time. A NULL zone argument means UTC, not local time. This is important because we want to be able to build a version of this code that doesn’t do lookups through the IANA zone database for embedded applications.

double gregorian_to_unix(struct gregorian *date)

Broken-down time to float seconds. No zone argument because it’s contained in the structure. Actually this function wouldn’t use the zone member but just the zoffset member; this is significant because we want to limit lookups to the timezone database for performance reasons.

struct gregorian *gregorian_to_local(struct gregorian *date, char *zone)

Broken-down time to broken-down time normalized for the specified zone. In this case a null zone just means normalize so there are no out-of-range structure elements (e.g. day 32 wraps to the 1st, 2nd, or 3rd of the next month) without applying any zone change. (Again, this is so the IANA timezone database is not a hard dependency).

Notice that these functions are re-entrant and can take constant arguments.

An auxiliary function we’ll need is:

char *local_timezone(void)

so we can say this:

unix_to_gregorian(time, datebuffer, local_timezone())

We only need two other functions: gregorian_strf() and gregorian_strp(), patterned after strftime() and strptime(). These present no great difficulties. Various strange bugs and glitches in the existing functions would disappear because zone offset and name are part of the structures they operate on.
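
Pulling that together, the proposed API might be declared roughly like this (my transcription of the signatures above; the gregorian_strf()/gregorian_strp() prototypes are guesses by analogy with strftime(3) and strptime(3), not part of the post):

    struct gregorian *unix_to_gregorian(double time, struct gregorian *date,
                                        char *zone);
    double gregorian_to_unix(struct gregorian *date);
    struct gregorian *gregorian_to_local(struct gregorian *date, char *zone);
    char *local_timezone(void);

    /* Guessed by analogy with strftime(3)/strptime(3): */
    size_t gregorian_strf(char *buf, size_t max, const char *format,
                          const struct gregorian *date);
    char *gregorian_strp(const char *buf, const char *format,
                         struct gregorian *date);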

Am I missing anything here? This seems like it would be a large improvement and not very difficult to write.

Author: "Eric Raymond" Tags: "General"
Date: Friday, 03 Oct 2014 13:35

I’ve been gifted with a lot of help on my draft of Time, Clock, and Calendar Programming In C. I think it’s almost time to ship 1.0, and plan to do so this weekend. Get your last-minute fixes in now!

I will of course continue to accept corrections and additions after 1.0. Thanks to everyone who contributed. My blog and G+ followers were very diligent in spotting typos, helping fill in and correct standards history, and pointing out the more obscure gotchas in the API.

What I’ve discovered is that the Unix calendar-related API is a pretty wretched shambles. Which leads directly to the topic of my next blog entry…

Author: "Eric Raymond" Tags: "Software"
Date: Thursday, 02 Oct 2014 17:53

A provocative article at the conservative blog Hot Air comments on a pattern in American coverage of violent interracial crimes. When the perps are white and the victims are black, we can expect the press coverage to be explicit about it, with predictable assumption of racist motivations. On the other hand, when the perps are black and the victims are white, the races of all parties are normally suppressed and no one dares speak the r-word.

If I were a conservative, or a racist, I’d go off on some aggrieved semi-conspiratorial rant here. Instead I’ll observe what Hot Air did not: that the race of violent black criminals is routinely suppressed in news coverage even in the much more common case that their victims are also black. Hot Air is over-focusing here.

That said, Hot Air seems to have a separate and valid point when it notes that white victims are most likely to have their race suppressed from the reporting when the criminals are black – especially if there was any hint of racist motivation. There is an effective taboo against truthfully reporting incidents in which black criminals yell racial epithets and threats at white victims during the commission of street crimes. If not for webbed security-camera footage we’d have no idea how depressingly common this seems to be – the press certainly won’t cop to it in their print stories.

No conspiracy theory is required to explain the silence here. Reporters and editors are nervous about being thought racist, or (worse) having “anti-racist” pressure groups demonstrating on their doorsteps. The easy route to avoiding this is a bit of suppressio veri – not lying, exactly, but not uttering facts that might be thought racially inflammatory.

The pattern of suppression is neatly explained by the following premises: Any association of black people with criminality is inflammatory. Any suggestion that black criminals are motivated by racism to prey on white victims is super-inflammatory. And above all, we must not inflame. Better to be silent.

I believe this silence is a dangerous mistake with long-term consequences that are bad for everyone, and perhaps worst of all for black people.

Journalistic silence has become a kind of black privilege. The gravamen of the Hot Air article is that gangs of black teenagers and twentysomethings can racially taunt whites and assault both whites and nonwhites confident in the knowledge that media coverage will describe them as neutrally as “youths” (ah, what an anodyne term that is!).

I’m here to say what that article could have but did not: suppressio veri, when performed systematically enough, itself becomes a code that can be read. What the press is teaching Americans to assume, story after story, is that if “youths” commit public violence and they are not specified to be white, or hispanic, or asian — then it’s yet another black street gang on a wilding.

Here is my advice to anti-racists and their media allies: it is in your interests to lift the veil of silence a little. You need to introduce some noise into the correlation – come clean about race in at least some small percentage of these incidents to create reasonable doubt about the implications of silence in others. You do not want your readers trained to assume that “youths” invariably decodes to “thug-life blacks on a casual rampage”. I don’t want this either, and I don’t think anyone should.

I am not mocking or satirizing or being sarcastic when I say this. I don’t like where I think the well-meant suppressio veri is taking us. I think it’s bound to empower people who are genuinely and viciously bigoted by giving them an exclusive on truthful reporting. I don’t think it’s good for anyone of any color for bigots to have that power.

Nor is it any good thing that “youths” now behave as though they think they’re operating with a kind of immunity. We saw this in Ferguson, when Michael Brown apparently believed he could beat up a Pakistani shopkeeper and then assault a cop without fearing consequences. (“What are you going to do, shoot me?” he sneered, just before he was shot.) As he found out, eventually that shit’ll get you killed; it would have been much better for everybody if he hadn’t been encouraged to believe that his skin color gave him a free pass.

I have no doubt that studiously blind press coverage was a significant enabler of that belief. The Michael Browns of the world may not be very bright, but they too can read the code. Every instance of suppressio veri told Brown that if he committed even a violent crime in public a lot of white people in newsrooms and elsewhere would avert their eyes. The press’s attempted canonization of Trayvon Martin only put the cherry on top of this.

It’s not clear to me that this kind of indulgence is any better – even for blacks themselves – than the old racist arrangement in which blacks “knew their place” and were systematically cowed into submission to the law. After all – if it needs pointing out again – the victims of black crime and trash culture are mainly other blacks. Press silence is empowering thugs.

Are we ever going to stop doing this? Anyone looking for a way that the system keeps black people down need look no further than the way we feed them fantasies of victimization and entitlement. For this our press bears much of the blame.

Author: "Eric Raymond" Tags: "Politics, racism"
Date: Tuesday, 30 Sep 2014 12:50

Neil Gaiman writes On Terry Pratchett, he is not a jolly old elf at all.. It’s worth reading.

I know that what Neil Gaiman says here is true, because I’ve known Terry, a little. Not as well as Neil does; we’re not that close, though he has been known to answer my email. But I did have one experience back in 2003 that would have forever dispelled any notion of Terry as a mere jolly elf, assuming I’d been foolish enough to entertain it.

I taught Terry Pratchett how to shoot a pistol.

(We were being co-guests of honor at Penguicon I at the time. This was at the first Penguicon Geeks with Guns event, at a shooting range west of Detroit. It was something Terry had wanted to do for a long time, but opportunities in Britain are quite limited.)

This is actually a very revealing thing to do with anyone. You learn a great deal about how the person handles stress and adrenalin. You learn a lot about their ability to concentrate. If the student has fears about violence, or self-doubt, or masculinity/femininity issues, that stuff is going to tend to come out in the student’s reactions in ways that are not difficult to read.

Terry was rock-steady. He was a good shot from the first three minutes. He listened, he followed directions intelligently, he always played safe, and he developed impressive competence at anything he was shown very quickly. To this day he’s one of the three or four best shooting students I’ve ever had.

That is not the profile of anyone you can safely trivialize as a jolly old elf. I wasn’t inclined to do that anyway; I’d known him on and off since 1991, which was long enough that I believe I got a bit of look-in before he fully developed his Famous Author charm defense.

But it was teaching Terry pistol that brought home to me how natively tough-minded he really is. After that, the realism and courage with which he faced his Alzheimer’s diagnosis came as no surprise to me whatsoever.

Author: "Eric Raymond" Tags: "General"
Date: Tuesday, 30 Sep 2014 04:14

The C/UNIX library support for time and calendar programming is a nasty mess of historical contingency. I have grown tired of having to re-learn its quirks every time I’ve had to deal with it, so I’m doing something about that.

Announcing Time, Clock, and Calendar Programming In C, a document which attempts to chart the historical clutter (so you can ignore it once you know why it’s there) and explain the mysteries.

What I’ve released is an 0.9 beta version. My hope is that it will rapidly attract some thoroughgoing reviews so I can release a 1.0 in a week or so. More than that, I would welcome a subject matter expert as a collaborator.

Author: "Eric Raymond" Tags: "Software"
Date: Monday, 29 Sep 2014 17:45

In a recent discussion on G+, a friend of mine made a conservative argument for textual over binary interchange protocols on the grounds that programs always need to be debugged, and thus readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.

I agree with this argument; I’ve made it often enough myself, notably in The Art of Unix Programming. But it was something his opponent said that nudged at me. “Provable programs are the future,” he declaimed, pointing at sel4 and CompCert as recent examples of formal verification of real-world software systems. His implication was clear: we’re soon going to get so much better at turning specifications into provably correct implementations that debuggability will soon cease to be a strong argument for protocols that can be parsed by a Mark I Eyeball.

Oh foolish, foolish child, that wots not of the Rule of Technical Greed.

Now, to be fair, the Rule of Technical Greed is a name I just made up. But the underlying pattern is a well-established one from the earliest beginnings of computing.

In the beginning there was assembler. And programming was hard. The semantic gap between how humans think about problems and what we knew how to tell computers to do was vast; our ability to manage complexity was deficient. And in the gap software defects did flourish, multiplying in direct proportion to the size of the programs we wrote.

And the lives of programmers were hard, and the case of their end-users miserable; for, strive as the programmers might, perfection was achieved only in toy programs while in real-world systems the defect rate was nigh-intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the designers and advocates of higher-level languages. And they said: “With these tools we bring you, the semantic gap will lessen, and your ability to write systems of demonstrable correctness will increase. Truly, if we apply this discipline properly to our present programming challenges, shall we achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly resolved the debate despite a few curmudgeons who muttered that it would all end in tears. And compilers were adopted, and for a brief while it seemed that peace and harmony would reign.

But it was not to be. For instead of applying compilers only to the scale of software engineering that had been accustomed in the days of hand-coded assembler, programmers were made to use these tools to design and implement ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the advocates of structured programming. And they said: “There is a better way. With some modification of our languages and trained discipline exerted in the use of them, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly resolved the debate despite a few curmudgeons who muttered that it would all end in tears. And languages which supported structured programming and its discipline came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, sweet birdsong beneath rainbows, etc.

But it was not to be. For instead of applying structured programming only to the scale of software engineering that had been accustomed in the days when poorly-organized spaghetti code was the state of the art, programmers were made to use these tools to design ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the advocates of systematic software modularity. And they said: “There is a better way. By systematic separation of concerns and information hiding, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly resolved the debate despite a few curmudgeons who muttered that it would all end in tears. And languages which supported modularity came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, the lion lie down with the lamb, technical people and marketeers actually get along, etc.

But it was not to be. For instead of applying systematic modularity and information hiding only to the scale of software engineering that had been accustomed in the days of single huge code blobs, programmers were made to use these tools to design ever more complex modularized systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though now greatly improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Are we beginning to see a pattern here? I mean, I could almost write a text macro that would generate the next couple of iterations. Every narrowing of the semantic gap, every advance in our ability to manage software complexity, every improvement in automated verification, is sold to us as a way to push down defect rates. But how each tool actually gets used is to scale up the complexity of design and implementation to the bleeding edge of tolerable defect rates.

This is what I call the Rule of Technical Greed: As our ability to manage software complexity increases, ambition expands so that defect rates and expected levels of technical debt are constant.

The application of this rule to automated verification and proofs of correctness is clear. I have little doubt these will be valuable tools in the relatively near future; I follow developments there with some interest and look forward to using them myself.

But anyone who says “This time it’ll be different!” earns a hearty horse-laugh. Been there, done that, still have the T-shirts. The semantic gap is a stubborn thing; until we become as gods and can will perfect software into existence as an extension of our thoughts, somebody’s still going to have to grovel through the protocol dumps. Design for debuggability will never be a waste of effort, because otherwise, even if we believe our tools are perfect, proceeding from ideal specification to flawless implementation…how else will an actual human being actually know?

UPDATE: Having learned that “risk homeostasis” is an actual term of art in road engineering and health risk analysis, I now think this would be better tagged the “Law of Software Risk Homeostasis”.

Author: "Eric Raymond" Tags: "Software"
Date: Monday, 29 Sep 2014 15:04

In the wake of the Shellshock bug, I guess I need to repeat in public some things I said at the time of the Heartbleed bug.

The first thing to notice here is that these bugs were found – and were findable – because of open-source scrutiny.

There’s a “things seen versus things unseen” fallacy here that gives bugs like Heartbleed and Shellshock false prominence. We don’t know – and can’t know – how many far worse exploits lurk in proprietary code known only to crackers or the NSA.

What we can project based on other measures of differential defect rates suggests that, however imperfect “many eyeballs” scrutiny is, “few eyeballs” or “no eyeballs” is far worse.

I’m not handwaving when I say this; we have statistics from places like Coverity that do defect-rate measurements on both open-source and proprietary closed source products, we have academic research like the UMich fuzz papers, we have CVE lists for Internet-exposed programs, we have multiple lines of evidence.

Everything we know tells us that while open source’s security failures may be conspicuous its successes, though invisible, are far larger.

Author: "Eric Raymond" Tags: "General"
Date: Sunday, 28 Sep 2014 19:19

The patent-troll industry is in full panic over the consequences of the Alice vs. CLS Bank decision. While reading up on the matter, I ran across the following claim by a software patent attorney:

“As Sun Microsystems proved, the quickest way to turn a $5 billion company into a $600 million company is to go open source.”

I’m not going to feed this troll traffic by linking to him, but he’s promulgating a myth that must be dispelled. Trying to go open source didn’t kill Sun; hardware commoditization killed Sun. I know this because I was at ground zero when it killed a company that was aiming to succeed Sun – and, until the dot-com bust, looked about to manage it.

It is certainly the case that the rise of Linux helped put pressure on Sun Microsystems. But the rise of Linux itself was contingent on the plunging prices of the Intel 386 family and the surrounding ecology of support chips. What these did was make it possible to build hardware approaching the capacity of Sun workstations much less expensively.

It was a classic case of technology disruption. As in most such cases, Sun blew it strategically by being unwilling to cannibalize its higher-margin products. There was an i386 port of their operating system before 1990, but it was an orphan within the company. Sun could have pushed it hard and owned the emerging i386 Unix market, slowing down Linux and possibly relegating it to niche plays for a good long time.

Sun didn’t; instead, they did what companies often try in response to these disruptions – they tried to squeeze the last dollar out of their existing designs, then retreated upmarket to where they thought commodity hardware couldn’t reach.

Enter VA Linux, briefly the darling of the tech industry – and where I was on the Board of Directors during the dotcom boom and the bust. VA aimed to be the next Sun, building powerful and inexpensive Sun-class workstations using Linux and commodity 386 hardware.

And, until the dot com bust, VA ate Sun’s lunch in the low and middle range of Sun’s market. Silicon Valley companies queued up to buy VA’s product. There was a running joke in those days that if you wanted to do a startup in the Valley the standard first two steps were (1) raise $15M on Sand Hill Road, and then (2) spend a lot of it buying kit at VA Linux. And everyone was happy until the boom busted.

Two thirds of VA’s customer list went down the tubes within a month. But that’s not what really forced VA out of the hardware business. What really did it was that VA’s hardware value proposition proved as unstable as Sun’s, and for exactly the same reason. Commoditization. By the year 2000 building a Unix box got too easy; there was no magic in the systems integration, anyone could do it.

Had VA stayed in hardware, it would have been in the exact same losing position as Sun – trying to defend a nameplate premium against disruption from below.

So, where was open source in all this? Of course, Linux was a key part in helping VA (and the white-box PC vendors positioned to disrupt VA after 2000) exploit hardware commoditization. By the time Sun tried to open-source its own software the handwriting was already on the wall; giving up proprietary control of their OS couldn’t make their situation any worse.

If anything, OpenSolaris probably staved off the end of Sun by a couple of years by adding value to Sun’s hardware/software combination. Enough people inside Sun understood that open source was a net win to prevail in the political battle.

Note carefully here the distinction between “adding value” and “extracting secrecy rent”. Companies that sell software think they’ve added value when they can collect more secrecy rent, but customers don’t see it that way. To customers, open source adds value precisely because they are less dependent on the vendor. By open-sourcing Solaris, Sun partway closed the value-for-dollar gap with commodity Linux systems.

Open source wasn’t enough. But that doesn’t mean it wasn’t the best move. It was necessary, but not sufficient.

The correct lesson here is “the quickest way to turn a $5 billion company into a $600 million company is to be on the wrong end of a technology disruption and fail to adapt”. In truth, I don’t think anything was going to save Sun in the long term. But I do think that given a willingness to cannibalize their own business and go full-bore on 386 hardware they might have gotten another five to eight years.

Author: "Eric Raymond" Tags: "Technology"
Date: Saturday, 27 Sep 2014 14:49

Last night, my wife Cathy and I passed our level 5 test in kuntao. That’s a halfway point to level 10, which is the first “guro” level, roughly equivalent to black belt in a Japanese or Korean art. Ranks aren’t the big deal in kuntao that they are in most Americanized martial arts, but this is still a good point to pause for reflection.

Kuntao is, for those of you new here or who haven’t been paying attention, the martial art my wife and I have been training in for two years this month. It’s a fusion of traditional wing chun kung fu (which is officially now Southern Shaolin, though I retain some doubts about the historical links even after the Shaolin Abbot’s pronouncement) with Philippine kali and some elements of Renaissance Spanish sword arts.

It’s a demanding style. Only a moderate workout physically, but the techniques require a high level of precision and concentration. Sifu Yeager has some trouble keeping students because of this, but those of us who have hung in there are learning techniques more commercial schools have given up on trying to teach. The knife work alone is more of a toolkit than some other entire styles provide.

Sifu made a bit of a public speech after the test about my having to work to overcome unusual difficulties due to my cerebral palsy. I understand what he was telling the other students and prospective students: if Eric can be good at this and rise to a high skill level you can too, and you should be ashamed if you don’t. He expressed some scorn for former students who quit because the training was too hard, and I said, loudly enough to be heard: “Sifu, I’d be gone if it were too easy.”

It’s true, the challenge level suits me a lot better than strip-mall karate ever could. Why train in a martial art at all if you’re not going to test your limits and break past them? That struggle is as much of the meaning of martial arts as the combat techniques are, and more.

Sifu called me “a fighter”. It’s true, and I free-sparred with some of the senior students testing last night and enjoyed the hell out of every second, and didn’t do half-badly either. But the real fight is always the one for self-mastery, awareness, and control; perfection in the moment, and calm at the heart of furious action. Victory in the outer struggle proceeds from victory in the inner one.

These are no longer strange ideas to Americans after a half-century of Asian martial arts seeping gradually into our folk culture. But they bear repeating nevertheless, lest we forget that the inward way of the warrior is more than a trope for cheesy movies. That cliche functions because there is a powerful truth behind it. It’s a truth I’m reminded of every class, and the reason I keep going back.

Though…I might keep going back for the effect on Cathy. She is thriving in this art in a way she hasn’t under any of the others we’ve studied together. She’s more fit and muscular than she’s ever been in her life – I can feel it when I hold her, and she complains good-naturedly that the new muscle mass is making her clothes fit badly. There are much worse problems for a woman over fifty to have, and we both know that the training is a significant part of the reason people tend to underestimate her age by a helluvalot.

Sifu calls her “the Assassin”. I’m “the Mighty Oak”. Well, it fits; I lack physical flexibility and agility, but I also shrug off hits that would stagger most other people and I punch like a jackhammer when I need to. The contrast between my agile, fluid, fast-on-the-uptake mental style and my physical predisposition to fight like a monster slugger amuses me more than a little. Both are themselves surprising in a man over fifty. The training, I think, is helping me not to slow down.

I have lots of other good reasons that I expect to be training in a martial art until I die, but a sufficient one is this: staying active and challenged, on both physical and mental levels, seems to stave off the degenerative effects of aging as well as anything else humans know how to do. Even though I’m biologically rather younger than my calendar age (thank you, good genes!), I am reaching the span of years at which physical and mental senescence is something I have to be concerned about even though I can’t yet detect any signs of either. And most other forms of exercise bore the shit out of me.

So: another five levels to Guro. Two, perhaps two and a half years. The journey doesn’t end there, of course; there are more master levels in kali. The kuntao training doesn’t take us all the way up the traditional-wing-chun skill ladder; I’ll probably do that. Much of the point will be that the skills are fun and valuable in themselves. Part of the point will be having a destination, rather than stopping and waiting to die. Anti-senescence strategy.

It’s of a piece with the fact that I try to learn at least one major technical skill every year, and am shipping software releases almost every week (new project yesterday!) at an age when a lot of engineers would be resting on their laurels. It’s not just that I love my work, it’s that I believe ossifying is a long step towards death and – lacking the biological invincibility of youth – I feel I have to actively seek out ways to keep my brain limber.

My other recreational choices are conditioned by this as well. Strategy gaming is great for it – new games requiring new thought patterns coming out every month. New mountains to climb, always.

I have a hope no previous generation could – that if I can stave off senescence long enough I’ll live to take advantage of serious life-extension technology. When I first started tracking progress in this area thirty years ago my evaluation was that I was right smack on the dividing age for this – people a few years younger than me would almost certainly live to see that, and people a few years older almost certainly would not. Today, with lots of progress and the first clinical trials of antisenescence drugs soon to begin, that still seems to me to be exactly the case.

Lots of bad luck could intervene. There could be a time-bomb in my genes – cancer, heart disease, stroke. That’s no reason not to maximize my odds. Halfway up the mountain; if I keep climbing, the reward could be much more than a few years of healthspan, it could be time to do everything.

Author: "Eric Raymond" Tags: "Martial Arts"
Date: Thursday, 25 Sep 2014 20:48

GPSD has a serious bug somewhere in its error modeling. What it affects is the position-error estimates GPSD computes for GPSes that don’t compute them internally and report them on the wire themselves. The code produces plausible-looking error estimates, but they lack a symmetry property that they should have to be correct.

I need a couple of hours of help from an applied statistician who can read C and has experience using covariance-matrix methods for error estimation. Direct interest in GPS and geodesy would be a plus.

I don’t think this is a large problem, but it’s just a little beyond my competence. I probably know enough statistics and matrix algebra to understand the fix, but I don’t know enough to find it myself.

Hundreds of millions of Google Maps users might have reason to be grateful to anyone who helps out here.

UPDATE: Problem solved, see next post.

Author: "Eric Raymond" Tags: "Software"
Date: Thursday, 25 Sep 2014 20:18

If you’ve ever wanted a JSON parser that can unpack directly to fixed-extent C storage (look, ma, no malloc!) I’ve got the code for you.

The microjson parser is tiny (less than 700LOC), fast, and very sparing of memory. It is suitable for use in small-memory embedded environments and deployments where malloc() is forbidden in order to prevent leaked-memory issues.

This project is a spin-out of code used heavily in GPSD; thus, the code has been tested on dozens of different platforms in hundreds of millions of deployments.

It has two restrictions relative to standard JSON: the special JSON “null” value is not handled, and object array elements must be homogeneous in type.

A programmer’s guide to building parsers with microjson is included in the distribution.
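
To give the flavor, here is a hedged sketch of the declare-a-table-and-parse style of use, based on my reading of the programmer’s guide; treat the header name and the exact field spellings as assumptions rather than authoritative documentation.

    /* Sketch only: binds JSON members to fixed-extent storage, no malloc. */
    #include <stdio.h>
    #include "mjson.h"                /* assumed header name */

    struct fix {
        int    mode;
        double lat, lon;
        char   tag[16];               /* fixed-extent string target */
    };

    int main(void)
    {
        struct fix f;
        /* Attribute table: each row binds a JSON member to a C lvalue. */
        const struct json_attr_t fix_attrs[] = {
            {"mode", t_integer, .addr.integer = &f.mode},
            {"lat",  t_real,    .addr.real    = &f.lat},
            {"lon",  t_real,    .addr.real    = &f.lon},
            {"tag",  t_string,  .addr.string  = f.tag, .len = sizeof(f.tag)},
            {NULL},
        };
        const char *text =
            "{\"mode\":3,\"lat\":40.035,\"lon\":-75.519,\"tag\":\"MID2\"}";

        if (json_read_object(text, fix_attrs, NULL) != 0) {
            fprintf(stderr, "parse failed\n");
            return 1;
        }
        printf("mode=%d lat=%.3f lon=%.3f tag=%s\n", f.mode, f.lat, f.lon, f.tag);
        return 0;
    }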

Author: "Eric Raymond" Tags: "Software, GPSD"
Date: Tuesday, 23 Sep 2014 13:22

I’ve been blog-silent the last couple of days because I’ve been chasing down the bug I mentioned in Request for help – I need a statistician.

I have since found and fixed it. Thereby hangs a tale, and a cautionary lesson.

Going in, my guess was that the problem was in the covariance-matrix algebra used to compute the DOP (dilution-of-precision) figures from the geometry of the satellite skyview.

(I was originally going to write a longer description than that sentence – but I ruefully concluded that if that sentence was a meaningless noise to you the longer explanation would be too. All you mathematical illiterates out there can feel free to go off and have a life or something.)

My suspicion particularly fell on a function that did partial matrix inversion. Because I only need the diagonal elements of the inverted matrix, the most economical way to compute them seemed to be by minor subdeterminants rather than a whole-matrix method like Gauss-Jordan elimination. My guess was that I’d fucked that up in some fiendishly subtle way.

The one clue I had was a broken symmetry. The results of the computation should be invariant under permutations of the rows of the matrix – or, less abstractly, it shouldn’t matter which order you list the satellites in. But it did.

How did I notice this? Um. I was refactoring some code – actually, refactoring the data structure the skyview was kept in. For hysterical raisins (er, historical reasons) the azimuth/elevation and signal-strength figures for the sats had been kept in parallel integer arrays. There was a persistent bad smell about the code that managed these arrays that I thought might be cured if I morphed them into an array of structs, one struct per satellite.

Yeeup, sure enough. I flushed two minor bugs out of cover. Then I rebuilt the interface to the matrix-algebra routines. And the sats got fed to them in a different order than previously. And the regression tests broke loudly, oh shit.

There are already a couple of lessons here. First, have a freakin’ regression test. Had I not, I might have sailed on in blissful ignorance that the code was broken.

Second, though “If it ain’t broke, don’t fix it” is generally good advice, it is overridden by this: If you don’t know that it’s broken, but it smells bad, trust your nose and refactor the living hell out of it. Odds are good that something will shake loose and fall on the floor.

This is the point at which I thought I needed a statistician. And I found one – but, I thought, to constrain the problem nicely before I dropped it on him, it would be a good idea to isolate out the suspicious matrix-inversion routine and write a unit test for it. Which I did. And it passed with flying colors.

While it was nice to know I had not actually screwed the pooch in that particular orifice, this left me without a clue where the actual bug was. So I started instrumenting, testing for the point in the computational pipeline where row-symmetry broke down.

Aaand I found it. It was a stupid little subscript error in the function that filled the covariance matrix from the satellite list – k in two places where i should have been. Easy mistake to make, impossible for any of the four static code checkers I use to see, and damnably difficult to spot with the Mark 1 eyeball even if you know that the bug has to be in those six lines somewhere. Particularly because the wrong code didn’t produce crazy numbers; they looked plausible, though the shape of the error volume was distorted.

Now let’s review my mistakes. There were two, a little one and a big one. The little one was making a wrong guess about the nature of the bug and thinking I needed a kind of help I didn’t. But I don’t feel bad about that one; ex ante it was still the most reasonable guess. The highest-complexity code in a computation is generally the most plausible place to suspect a bug, especially when you know you don’t grok the algorithm.

The big mistake was poor test coverage. I should have written a unit test for the specialized matrix inverter when I first coded it – and I should have tested for satellite order invariance.

The general rule here is: to constrain defects as much as possible, never let an invariant go untested.
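
For what it’s worth, here is a hypothetical sketch (not GPSD’s actual test code; the satellite record and the compute_hdop() entry point are invented for illustration) of what testing that particular invariant might look like:

    #include <assert.h>
    #include <math.h>

    /* Hypothetical stand-ins for the real satellite record and DOP routine. */
    struct sat_t { double azimuth, elevation; };
    extern double compute_hdop(const struct sat_t *sats, int count);

    static void test_permutation_invariance(struct sat_t *sats, int count)
    {
        double before = compute_hdop(sats, count);

        /* Reverse the satellite list; any permutation would do. */
        for (int i = 0, j = count - 1; i < j; i++, j--) {
            struct sat_t tmp = sats[i];
            sats[i] = sats[j];
            sats[j] = tmp;
        }

        /* The invariant: satellite order must not change the answer. */
        assert(fabs(compute_hdop(sats, count) - before) < 1e-9);
    }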

Author: "Eric Raymond" Tags: "Software, GPSD"
Date: Sunday, 21 Sep 2014 11:45

In a blog post on Computational Knowledge and the Future of Pure Mathematics Stephen Wolfram lays out a vision that is in many ways exciting and challenging. What if all of mathematics could be expressed in a common formal notation, stored in computers so it is searchable and amenable to computer-assisted discovery and proof of new theorems?

As a former mathematician who is now a programmer, it is I think inevitable that I have had similar dreams for a very long time; anyone with that common background would imagine broadly the same things. Like Dr. Wolfram, I have thought carefully not merely about the knowledge representation and UI issues in such a project, but also the difficulties in staffing and funding it. So it was with a feeling more of recognition than anything else that I received much of the essay.

To his great credit, Dr. Wolfram has done much – more than anyone else – to bring this vision towards reality. Mathematica and Wolfram Alpha are concrete steps towards it, and far from trivial ones. They show, I think, that the vision is possible and could be achieved with relatively modest funding – less than (say) the budget of a typical summer-blockbuster movie.

But there is one question that looms unanswered in Dr. Wolfram’s call to action. Let us suppose that we think we have all of the world’s mathematics formalized in a huge database of linked theorems and proof sequences, diligently being crawled by search agents and inference engines. In tribute to Wolfram Alpha, let us call this system “Omega”. How, and why, would we trust Omega?

There are at least three levels of possible error in such a system. One would be human error in entering mathematics into it (a true theorem is entered incorrectly). Another would be errors in human mathematics (a false theorem is entered correctly). A third would be errors in the search and inference engines used to trawl the database and generate new proofs to be added to it.

Errors of the first two kinds would eventually be discovered by using inference engines to consistency-check the entire database (unless the assertions in it separate into disconnected cliques, which seems unlikely). It was already clear to me thirty years ago when I first started thinking seriously about this problem that sanity-checking would have to be run as a continuing background process responding to every new mathematical assertion entered: I am sure this requirement has not escaped Dr. Wolfram.

The possibility of errors of the third kind – bugs in the inference engine(s) – is more troubling. Such bugs could mask errors of the first two kinds, lead to the generation of incorrect mathematics, and corrupt the database. So we have a difficult verification problem here; we can trust the database (eventually) if we trust the inference engines, but how do we know we can trust the inference engines?

Mathematical thinking cannot solve this problem, because the most likely kind of bug is not a bad inference algorithm but an incorrect implementation of a good one. Notice what has happened here, though; the verification problem for Omega no longer lives in the rarefied realm of pure mathematics but the more concrete province of software engineering.

As such, there are things that experience can teach us. We don’t know how to do perfect software engineering, but we do know what the best practices are. And this is the point at which Dr. Wolfram’s proposal to build Omega on Mathematica and Wolfram Alpha begins to be troubling. These are amazing tools, but they’re closed source. They cannot be meaningfully audited for correctness by anyone outside Wolfram Research. Experience teaches us that this is a danger sign, a fragile single point of failure, and simply not tolerable in any project with the ambitions of Omega.

I think Dr. Wolfram is far too intelligent not to understand this, which makes his failure to address the issue the more troubling. For Omega to be trusted, the entire system will need to be transparent top to bottom. The design, the data representations, and the implementation code for its software must all be freely auditable by third-party mathematical topic experts and mathematically literate software engineers.

I would go so far as to say that any mathematician or software engineer asked to participate in this project is ethically required to insist on complete auditability and open source. Otherwise, what has the tradition of peer review and process transparency in science taught us?

I hope that Dr. Wolfram will address this issue in a future blog post. And I hope he understands that, for all his brilliance and impressive accomplishments, “Trust my secret code” will not – and cannot – be an answer that satisfies.

Author: "Eric Raymond" Tags: "General"
Date: Friday, 12 Sep 2014 16:45

A Call To Duty (David Weber, Timothy Zahn; Baen Books) is a passable extension of Baen Books’ tent-pole Honorverse franchise. Though billed as by David Weber, it resembles almost all of Baen’s double-billed “collaborations” in that most of the actual writing was clearly done by the guy on the second line, with the first line there as a marketing hook.

Zahn has a bit of fun subverting at least one major trope of the subgenre; Travis Long is definitely not the kind of personality one expects as a protagonist. Otherwise all the usual ingredients are present in much the expected combinations. Teenager longing for structure in his life joins the Navy, goes to boot camp, struggles in his first assignment, has something special to contribute when the shit hits the fan. Also, space pirates!

Baen knows its business; there may not be much very original about this, but Honorverse fans will enjoy this book well enough. And for all its cliched quality, it’s more engaging than Zahn’s rather sour last outing, Soulminder, which I previously reviewed.

The knack for careful worldbuilding within a franchise’s canonical constraints that Zahn exhibited in his Star Wars tie-ins is deployed here, where details of the architecture of Honorverse warships become significant plot elements. Also we get a look at Manticore in its very early years, with some characters making the decisions that will grow it into the powerful star kingdom of Honor Harrington’s lifetime.

For these reasons, if no others, Honorverse completists will want to read this one too.

Author: "Eric Raymond" Tags: "Review, Science Fiction"
Date: Thursday, 11 Sep 2014 21:39

Infinite Science Fiction One (edited by Dany G. Zuwen and Joanna Jackson; Infinite Acacia) starts out rather oddly, with Zuwen’s introduction in which, though he says he’s not religious, he connects his love of SF with having read the Bible as a child. The leap from faith narratives to a literature that celebrates rational knowability seems jarring and a bit implausible.

That said, the selection of stories here is not bad. Higher-profile editors have done worse, sometimes in anthologies I’ve reviewed.

Janka Hobbs’s Real is a dark, affecting little tale of a future in which people who don’t want the mess and bother of real children buy robotic child surrogates, and what happens when a grifter invents a novel scam.

Tim Majors’s By The Numbers is a less successful exploration of the idea of the quantified self – a failure, really, because it contains an impossible oracle-machine in what is clearly intended to be an SF story.

Elizabeth Bannon’s Tin Soul is a sort of counterpoint to Real in which a man’s anti-robot prejudices destroy his ability to relate to his prosthetically-equipped son.

P. Anthony Ramanauskas’s Six Minutes is a prison-break story told from the point of view of a monster, an immortal mind predator who steals the bodies of humans to maintain existence. It’s well written, but diminished by the author’s failure to actually end it and by dangling references to a larger setting that we are never shown. Possibly a section from a larger work in progress?

John Walters’s Matchmaker works a familiar theme – the time traveler at a crisis, forbidden to interfere or form attachments – unfortunately, to no other effect than an emotional tone painting. Competent writing does not save it from becoming maudlin and trivial.

Nick Holburn’s The Wedding is a creepy tale of a wedding disrupted by an undead spouse. Not bad on its own terms, but I question what it’s doing in an SF anthology.

Jay Wilburn’s Slow is a gripping tale of an astronaut fighting off being consumed by a symbiote that has at least temporarily saved his life. Definitely SF; not for the squeamish.

Rebecca Ann Jordan’s Gospel Of is strange and gripping. An exile with a bomb strapped to her chest, a future spin on the sacrificed year-king, and a satisfying twist in the ending.

Dan Devine’s The Silent Dead is old-school in the best way – could have been an Astounding story in the 1950s. The mass suicide of a planetary colony has horrifying implications the reader may guess before the ending…

Matthew S. Dent’s Nothing Besides Remains carries forward another old-school tradition – a robot come to sentience yearning for its lost makers. No great surprises here, but a good exploration of the theme.

William Ledbetter’s The Night With Stars is very clever, a sort of anthropological reply to Larry Niven’s classic The Magic Goes Away. What if Stone-Age humans relied on electromagnetic features of their environment – and then, due to a shift in the geomagnetic field, lost them? Well done.

Doug Tidwell’s Butterflies is, alas, a textbook example of what not to do in an SF story. At best it’s a trivial finger exercise about an astronaut going mad. There’s no reveal anywhere, and it contradicts the actual facts of history without explanation; no astronaut did this during Kennedy’s term.

Michaele Jordan’s Message of War is a well-executed tale of weapons that can wipe a people from history, and how they might be used. Subtly horrifying even if we are supposed to think of the wielders as the good guys.

Liam Nicolas Pezzano’s Rolling By in the Moonlight starts well, but turns out to be all imagery with no point. The author has an English degree; that figures: this piece smells of literary status envy, a disease the anthology is otherwise largely and blessedly free of.

J.B. Rockwell’s Midnight also starts well and ends badly. An AI on a terminally damaged warship struggling to get its cryopreserved crew launched to somewhere they might live again – that’s a good premise. Too bad it’s wasted on empty sentimentality about cute robots.

This anthology is only about 50% good, but the good stuff is quite original and the less good is mostly just defective SF rather than being anti-SF infected with literary status envy. On balance, better value than some higher-profile anthologies with more pretensions.

Author: "Eric Raymond" Tags: "General"
Date: Thursday, 11 Sep 2014 09:11

Collision of Empires (Prit Buttar; Osprey Publishing) is a clear and accessible history that attempts to address a common lack in accounts of the Great War that began a century ago this year: they tend to be centered on the Western Front and the staggering meat-grinder that static trench warfare became as outmoded tactics collided with the reality of machine guns and indirect-fire artillery.

Concentration on the Western Front is understandable in the U.S. and England; the successor states of the Western Front’s victors have maintained good records, and nationals of the English-speaking countries were directly involved there. But in many ways the Eastern Front story is more interesting, especially in the first year that Buttar chooses to cover – less static, and with a sometimes bewilderingly varied cast. And, arguably, larger consequences. The war in the east eventually destroyed three empires and put Lenin’s Communists in power in Russia.

Prit Buttar does a really admirable job of illuminating the thinking of the German, Austrian, and Russian leadership in the run-up to the war – not just at the diplomatic level but in the ways that their militaries were struggling to come to grips with the implications of new technology. The extensive discussion of internecine disputes over military doctrine in the three officer corps involved is better than anything similar I’ve seen elsewhere.

Alas, the author’s gift for lucid exposition falters a bit when it comes to describing actual battles. Ted Raicer did a better job of this in 2010’s Crowns In The Gutter, supported by a lot of rather fine-grained movement maps. Without these, Buttar’s narrative tends to bog down in a confusing mess of similar unit designations and vaguely comic-operatic Russo-German names.

Still, the effort to follow it is worthwhile. Buttar is very clear on the ways that flawed leadership, confused objectives and wishful thinking on all sides engendered a war in which there could be no clear-cut victory short of the utter exhaustion and collapse of one of the alliances.

On the Eastern Front, as on the Western, soldiers fought with remarkable courage for generals and politicians who – even on the victorious side – seriously failed them.

Author: "Eric Raymond" Tags: "Review"
Date: Monday, 08 Sep 2014 04:06

The Abyss Beyond Dreams (Peter F. Hamilton, Random House/Del Rey) is a sequel set in the author’s Commonwealth universe, which earlier included one duology (Pandora’s Star, Judas Unchained) and a trilogy (The Dreaming Void, The Temporal Void, The Evolutionary Void). It brings back one of the major characters (the scientist/leader Nigel Sheldon) on a mission to discover the true nature of the Void at the heart of the Galaxy.

The Void is a pocket universe which threatens to enter an expansion phase that would destroy everything. It is a gigantic artifact of some kind, but neither its builders nor its purpose is known. Castaway cultures of humans live inside it, gifted with psionic powers in life and harvested by the enigmatic Skylords in death. And Nigel Sheldon wants to know why.

This is space opera and planetary romance pulled off with almost all of Hamilton’s usual flair. I say “almost” because the opening sequence, though action-packed, comes off as curiously listless. Nigel Sheldon’s appearance rescues the show, and we are shortly afterwards pitched into an entertaining tale of courage and revolution on a Void world. But things are not as they seem, and the revolutionaries are being manipulated for purposes they cannot guess…

The strongest parts of this book show off Hamilton’s worldbuilding imagination and knack for the telling detail. Yes, we get some insight into what the Void actually is, and an astute reader can guess more. But the final reveal will await the second book of this duology.

Author: "Eric Raymond" Tags: "Review, Science Fiction"