Date: Tuesday, 30 Sep 2014 12:50

Neil Gaiman writes about Terry Pratchett: he is not a jolly old elf at all. It’s worth reading.

I know that what Neil Gaiman says here is true, because I’ve known Terry, a little. Not as well as Neil does; we’re not that close, though he has been known to answer my email. But I did have one experience back in 2003 that would have forever dispelled any notion of Terry as a mere jolly elf, assuming I’d been foolish enough to entertain it.

I taught Terry Pratchett how to shoot a pistol.

(We were being co-guests of honor at Penguicon I at the time. This was at the first Penguicon Geeks with Guns event, at a shooting range west of Detroit. It was something Terry had wanted to do for a long time, but opportunities in Britain are quite limited.)

This is actually a very revealing thing to do with anyone. You learn a great deal about how the person handles stress and adrenalin. You learn a lot about their ability to concentrate. If the student has fears about violence, or self-doubt, or masculinity/femininity issues, that stuff is going to tend to come out in the student’s reactions in ways that are not difficult to read.

Terry was rock-steady. He was a good shot from the first three minutes. He listened, he followed directions intelligently, he always played safe, and he developed impressive competence at anything he was shown very quickly. To this day he’s one of the three or four best shooting students I’ve ever had.

That is not the profile of anyone you can safely trivialize as a jolly old elf. I wasn’t inclined to do that anyway; I’d known him on and off since 1991, which was long enough that I believe I got a bit of a look-in before he fully developed his Famous Author charm defense.

But it was teaching Terry pistol that brought home to me how natively tough-minded he really is. After that, the realism and courage with which he faced his Alzheimer’s diagnosis came as no surprise to me whatsoever.

Author: "Eric Raymond" Tags: "General"
Date: Tuesday, 30 Sep 2014 04:14

The C/UNIX library support for time and calendar programming is a nasty mess of historical contingency. I have grown tired of having to re-learn its quirks every time I’ve had to deal with it, so I’m doing something about that.

Announcing Time, Clock, and Calendar Programming In C, a document which attempts to chart the historical clutter (so you can ignore it once you know why it’s there) and explain the mysteries.
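By way of a taste, here is a minimal standalone sketch (standard C, not an excerpt from the document) showing two of the oldest quirks it charts: struct tm counts months from zero and years from 1900, and mktime() normalizes its argument in place as a side effect.

```c
#include <stdio.h>
#include <time.h>

int main(void)
{
    struct tm when = {0};
    when.tm_year = 2014 - 1900;   /* tm_year is years since 1900, not the year */
    when.tm_mon  = 8;             /* tm_mon runs 0..11, so 8 is September */
    when.tm_mday = 30;
    when.tm_isdst = -1;           /* let the library decide whether DST applies */

    /* mktime() interprets the struct as local time, returns a time_t,
     * and normalizes any out-of-range fields in place. */
    time_t t = mktime(&when);
    if (t == (time_t)-1)
        return 1;
    printf("%s", asctime(&when));
    return 0;
}
```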

What I’ve released is an 0.9 beta version. My hope is that it will rapidly attract some thoroughgoing reviews so I can release a 1.0 in a week or so. More than that, I would welcome a subject matter expert as a collaborator.

Author: "Eric Raymond" Tags: "Software"
Date: Monday, 29 Sep 2014 17:45

In a recent discussion on G+, a friend of mine made a conservative argument for textual over binary interchange protocols on the grounds that programs always need to be debugged, and thus readability of the protocol streams by humans trumps the minor efficiency gains from binary packing.

I agree with this argument; I’ve made it often enough myself, notably in The Art of Unix Programming. But it was something his opponent said that nudged at me. “Provable programs are the future,” he declaimed, pointing at sel4 and CompCert as recent examples of formal verification of real-world software systems. His implication was clear: we’re soon going to get so much better at turning specifications into provably correct implementations that debuggability will soon cease to be a strong argument for protocols that can be parsed by a Mark I Eyeball.

Oh foolish, foolish child, that wots not of the Rule of Technical Greed.

Now, to be fair, the Rule of Technical Greed is a name I just made up. But the underlying pattern is a well-established one from the earliest beginnings of computing.

In the beginning there was assembler. And programming was hard. The semantic gap between how humans think about problems and what we knew how to tell computers to do was vast; our ability to manage complexity was deficient. And in the gap software defects did flourish, multiplying in direct proportion to the size of the programs we wrote.

And the lives of programmers were hard, and the case of their end-users miserable; for, strive as the programmers might, perfection was achieved only in toy programs while in real-world systems the defect rate was nigh-intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the designers and advocates of higher-level languages. And they said: “With these tools we bring you, the semantic gap will lessen, and your ability to write systems of demonstrable correctness will increase. Truly, if we apply this discipline properly to our present programming challenges, shall we achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly resolved the debate despite a few curmudgeons who muttered that it would all end in tears. And compilers were adopted, and for a brief while it seemed that peace and harmony would reign.

But it was not to be. For instead of applying compilers only to the scale of software engineering that had been accustomed in the days of hand-coded assembler, programmers were made to use these tools to design and implement ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the advocates of structured programming. And they said: “There is a better way. With some modification of our languages and trained discipline exerted in the use of them, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly resolved the debate despite a few curmudgeons who muttered that it would all end in tears. And languages which supported structured programming and its discipline came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, sweet birdsong beneath rainbows, etc.

But it was not to be. For instead of applying structured programming only to the scale of software engineering that had been accustomed in the days when poorly-organized spaghetti code was the state of the art, programmers were made to use these tools to design ever more complex systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Then, lo, there appeared the advocates of systematic software modularity. And they said: “There is a better way. By systematic separation of concerns and information hiding, we can achieve the Nirvana of defect rates tending asymptotically towards zero!”

Great was the rejoicing at this prospect, and swiftly resolved the debate despite a few curmudgeons who muttered that it would all end in tears. And languages which supported modularity came to be widely adopted, and these did indeed have a strong positive effect on defect rates. Once again it seemed that peace and harmony might prevail, the lion lie down with the lamb, technical people and marketeers actually get along, etc.

But it was not to be. For instead of applying systematic modularity and information hiding only to the scale of software engineering that had been accustomed in the days of single huge code blobs, programmers were made to use these tools to design ever more complex modularized systems. The semantic gap, though less vast than it had been, remained large; our ability to manage complexity, though now greatly improved, was not what it could be. Commercial and reputational victory oft went to those most willing to accrue technical debt. Defect rates rose once again to just shy of intolerable. And there was much wailing and gnashing of teeth.

Are we beginning to see a pattern here? I mean, I could almost write a text macro that would generate the next couple of iterations. Every narrowing of the semantic gap, every advance in our ability to manage software complexity, every improvement in automated verification, is sold to us as a way to push down defect rates. But how each tool actually gets used is to scale up the complexity of design and implementation to the bleeding edge of tolerable defect rates.

This is what I call the Rule of Technical Greed: As our ability to manage software complexity increases, ambition expands so that defect rates and expected levels of technical debt are constant.

The application of this rule to automated verification and proofs of correctness is clear. I have little doubt these will be valuable tools in the relatively near future; I follow developments there with some interest and look forward to using them myself.

But anyone who says “This time it’ll be different!” earns a hearty horse-laugh. Been there, done that, still have the T-shirts. The semantic gap is a stubborn thing; until we become as gods and can will perfect software into existence as an extension of our thoughts, somebody’s still going to have to grovel through the protocol dumps. Design for debuggability will never be a waste of effort, because even if we believe our tools are perfect and proceed from ideal specification to flawless implementation, how else will an actual human being actually know?

UPDATE: Having learned that “risk homeostasis” is an actual term of art in road engineering and health risk analysis, I now think this would be better tagged the “Law of Software Risk Homeostasis”.

Author: "Eric Raymond" Tags: "Software"
Date: Monday, 29 Sep 2014 15:04

In the wake of the Shellshock bug, I guess I need to repeat in public some things I said at the time of the Heartbleed bug.

The first thing to notice here is that these bugs were found – and were findable – because of open-source scrutiny.

There’s a “things seen versus things unseen” fallacy here that gives bugs like Heartbleed and Shellshock false prominence. We don’t know – and can’t know – how many far worse exploits lurk in proprietary code known only to crackers or the NSA.

What we can project based on other measures of differential defect rates suggests that, however imperfect “many eyeballs” scrutiny is, “few eyeballs” or “no eyeballs” is far worse.

I’m not handwaving when I say this; we have statistics from places like Coverity that do defect-rate measurements on both open-source and proprietary closed source products, we have academic research like the UMich fuzz papers, we have CVE lists for Internet-exposed programs, we have multiple lines of evidence.

Everything we know tells us that while open source’s security failures may be conspicuous, its successes, though invisible, are far larger.

Author: "Eric Raymond" Tags: "General"
Date: Sunday, 28 Sep 2014 19:19

The patent-troll industry is in full panic over the consequences of the Alice vs. CLS Bank decision. While reading up on the matter, I ran across the following claim by a software patent attorney:

“As Sun Microsystems proved, the quickest way to turn a $5 billion company into a $600 million company is to go open source.”

I’m not going to feed this troll traffic by linking to him, but he’s promulgating a myth that must be dispelled. Trying to go open source didn’t kill Sun; hardware commoditization killed Sun. I know this because I was at ground zero when it killed a company that was aiming to succeed Sun – and, until the dot-com bust, looked about to manage it.

It is certainly the case that the rise of Linux helped put pressure on Sun Microsystems. But the rise of Linux itself was contingent on the plunging prices of the Intel 386 family and the surrounding ecology of support chips. What these did was make it possible to build hardware approaching the capacity of Sun workstations much less expensively.

It was a classic case of technology disruption. As in most such cases, Sun blew it strategically by being unwilling to cannibalize its higher-margin products. There was an i386 port of their operating system before 1990, but it was an orphan within the company. Sun could have pushed it hard and owned the emerging i386 Unix market, slowing down Linux and possibly relegating it to niche plays for a good long time.

Sun didn’t; instead, they did what companies often try in response to these disruptions – they tried to squeeze the last dollar out of their existing designs, then retreated upmarket to where they thought commodity hardware couldn’t reach.

Enter VA Linux, briefly the darling of the tech industry – and where I was on the Board of Directors during the dotcom boom and the bust. VA aimed to be the next Sun, building powerful and inexpensive Sun-class workstations using Linux and commodity 386 hardware.

And, until the dot com bust, VA ate Sun’s lunch in the low and middle range of Sun’s market. Silicon Valley companies queued up to buy VA’s product. There was a running joke in those days that if you wanted to do a startup in the Valley the standard first two steps were (1) raise $15M on Sand Hill Road, and then (2) spend a lot of it buying kit at VA Linux. And everyone was happy until the boom busted.

Two thirds of VA’s customer list went down the tubes within a month. But that’s not what really forced VA out of the hardware business. What really did it was that VA’s hardware value proposition proved as unstable as Sun’s, and for exactly the same reason. Commoditization. By the year 2000 building a Unix box got too easy; there was no magic in the systems integration, anyone could do it.

Had VA stayed in hardware, it would have been in the exact same losing position as Sun – trying to defend a nameplate premium against disruption from below.

So, where was open source in all this? Of course, Linux was a key part in helping VA (and the white-box PC vendors positioned to disrupt VA after 2000) exploit hardware commoditization. By the time Sun tried to open-source its own software the handwriting was already on the wall; giving up proprietary control of their OS couldn’t make their situation any worse.

If anything, OpenSolaris probably staved off the end of Sun by a couple of years by adding value to Sun’s hardware/software combination. Enough people inside Sun understood that open source was a net win to prevail in the political battle.

Note carefully here the distinction between “adding value” and “extracting secrecy rent”. Companies that sell software think they’ve added value when they can collect more secrecy rent, but customers don’t see it that way. To customers, open source adds value precisely because they are less dependent on the vendor. By open-sourcing Solaris, Sun partway closed the value-for-dollar gap with commodity Linux systems.

Open source wasn’t enough. But that doesn’t mean it wasn’t the best move. It was necessary, but not sufficient.

The correct lesson here is “the quickest way to turn a $5 billion company into a $600 million company is to be on the wrong end of a technology disruption and fail to adapt”. In truth, I don’t think anything was going to save Sun in the long term. But I do think that given a willingness to cannibalize their own business and go full-bore on 386 hardware they might have gotten another five to eight years.

Author: "Eric Raymond" Tags: "Technology"
Date: Saturday, 27 Sep 2014 14:49

Last night, my wife Cathy and I passed our level 5 test in kuntao. That’s a halfway point to level 10, which is the first “guro” level, roughly equivalent to black belt in a Japanese or Korean art. Ranks aren’t the big deal in kuntao that they are in most Americanized martial arts, but this is still a good point to pause for reflection.

Kuntao is, for those of you new here or who haven’t been paying attention, the martial art my wife and I have been training in for two years this month. It’s a fusion of traditional wing chun kung fu (which is officially now Southern Shaolin, though I retain some doubts about the historical links even after the Shaolin Abbot’s pronouncement) with Philippine kali and some elements of Renaissance Spanish sword arts.

It’s a demanding style. Only a moderate workout physically, but the techniques require a high level of precision and concentration. Sifu Yeager has some trouble keeping students because of this, but those of us who have hung in there are learning techniques more commercial schools have given up on trying to teach. The knife work alone is more of a toolkit than some other entire styles provide.

Sifu made a bit of a public speech after the test about my having to work to overcome unusual difficulties due to my cerebral palsy. I understand what he was telling the other students and prospective students: if Eric can be good at this and rise to a high skill level you can too, and you should be ashamed if you don’t. He expressed some scorn for former students who quit because the training was too hard, and I said, loudly enough to be heard: “Sifu, I’d be gone if it were too easy.”

It’s true, the challenge level suits me a lot better than strip-mall karate ever could. Why train in a martial art at all if you’re not going to test your limits and break past them? That struggle is as much of the meaning of martial arts as the combat techniques are, and more.

Sifu called me “a fighter”. It’s true, and I free-sparred with some of the senior students testing last night and enjoyed the hell out of every second, and didn’t do half-badly either. But the real fight is always the one for self-mastery, awareness, and control; perfection in the moment, and calm at the heart of furious action. Victory in the outer struggle proceeds from victory in the inner one.

These are no longer strange ideas to Americans after a half-century of Asian martial arts seeping gradually into our folk culture. But they bear repeating nevertheless, lest we forget that the inward way of the warrior is more than a trope for cheesy movies. That cliche functions because there is a powerful truth behind it. It’s a truth I’m reminded of every class, and the reason I keep going back.

Though…I might keep going back for the effect on Cathy. She is thriving in this art in a way she hasn’t under any of the others we’ve studied together. She’s more fit and muscular than she’s ever been in her life – I can feel it when I hold her, and she complains good-naturedly that the new muscle mass is making her clothes fit badly. There are much worse problems for a woman over fifty to have, and we both know that the training is a significant part of the reason people tend to underestimate her age by a helluvalot.

Sifu calls her “the Assassin”. I’m “the Mighty Oak”. Well, it fits; I lack physical flexibility and agility, but I also shrug off hits that would stagger most other people and I punch like a jackhammer when I need to. The contrast between my agile, fluid, fast-on-the-uptake mental style and my physical predisposition to fight like a monster slugger amuses me more than a little. Both are themselves surprising in a man over fifty. The training, I think, is helping me not to slow down.

I have lots of other good reasons that I expect to be training in a martial art until I die, but a sufficient one is this: staying active and challenged, on both physical and mental levels, seems to stave off the degenerative effects of aging as well as anything else humans know how to do. Even though I’m biologically rather younger than my calendar age (thank you, good genes!), I am reaching the span of years at which physical and mental senescence is something I have to be concerned about even though I can’t yet detect any signs of either. And most other forms of exercise bore the shit out of me.

So: another five levels to Guro. Two, perhaps two and a half, years. The journey doesn’t end there, of course; there are more master levels in kali. The kuntao training doesn’t take us all the way up the traditional-wing-chun skill ladder; I’ll probably do that. Much of the point will be that the skills are fun and valuable in themselves. Part of the point will be having a destination, rather than stopping and waiting to die. Anti-senescence strategy.

It’s of a piece with the fact that I try to learn at least one major technical skill every year, and am shipping software releases almost every week (new project yesterday!) at an age when a lot of engineers would be resting on their laurels. It’s not just that I love my work, it’s that I believe ossifying is a long step towards death and – lacking the biological invincibility of youth – I feel I have to actively seek out ways to keep my brain limber.

My other recreational choices are conditioned by this as well. Strategy gaming is great for it – new games requiring new thought patterns coming out every month. New mountains to climb, always.

I have a hope no previous generation could – that if I can stave off senescence long enough I’ll live to take advantage of serious life-extension technology. When I first started tracking progress in this area thirty years ago my evaluation was that I was right smack on the dividing age for this – people a few years younger than me would almost certainly live to see that, and people a few years older almost certainly would not. Today, with lots of progress and the first clinical trials of antisenescence drugs soon to begin, that still seems to me to be exactly the case.

Lots of bad luck could intervene. There could be a time-bomb in my genes – cancer, heart disease, stroke. That’s no reason not to maximize my odds. Halfway up the mountain; if I keep climbing, the reward could be much more than a few years of healthspan, it could be time to do everything.

Author: "Eric Raymond" Tags: "Martial Arts"
Date: Thursday, 25 Sep 2014 20:48

GPSD has a serious bug somewhere in its error modeling. What it affects is the position-error estimates GPSD computes for GPSes that don’t compute them internally and report them on the wire themselves. The code produces plausible-looking error estimates, but they lack a symmetry property that they should have to be correct.

I need a couple of hours of help from an applied statistician who can read C and has experience using covariance-matrix methods for error estimation. Direct interest in GPS and geodesy would be a plus.

I don’t think this is a large problem, but it’s just a little beyond my competence. I probably know enough statistics and matrix algebra to understand the fix, but I don’t know enough to find it myself.

Hundreds of millions of Google Maps users might have reason to be grateful to anyone who helps out here.

UPDATE: Problem solved, see next post.

Author: "Eric Raymond" Tags: "Software"
Date: Thursday, 25 Sep 2014 20:18

If you’ve ever wanted a JSON parser that can unpack directly to fixed-extent C storage (look, ma, no malloc!) I’ve got the code for you.

The microjson parser is tiny (less than 700LOC), fast, and very sparing of memory. It is suitable for use in small-memory embedded environments and deployments where malloc() is forbidden in order to prevent leaked-memory issues.

This project is a spin-out of code used heavily in GPSD; thus, the code has been tested on dozens of different platforms in hundreds of millions of deployments.

It has two restrictions relative to standard JSON: the special JSON “null” value is not handled, and object array elements must be homogeneous in type.

A programmer’s guide to building parsers with microjson is included in the distribution.
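To give a feel for the fixed-extent style, here is a sketch from memory of the GPSD-derived API; treat the header name, the type enumerators, and the exact signatures as assumptions and check the bundled programmer’s guide for the real thing.

```c
#include <stdio.h>
#include "mjson.h"          /* assumed header name from the distribution */

/* Parse targets are ordinary fixed-extent storage: no malloc anywhere. */
static int count;
static char name[64];

/* Each table entry maps a JSON member to a C lvalue and a type. */
static const struct json_attr_t attrs[] = {
    {"count", t_integer, .addr.integer = &count},
    {"name",  t_string,  .addr.string = name, .len = sizeof(name)},
    {NULL},
};

int main(void)
{
    int status = json_read_object("{\"count\": 3, \"name\": \"zola\"}",
                                  attrs, NULL);
    if (status != 0) {
        fprintf(stderr, "%s\n", json_error_string(status));
        return 1;
    }
    printf("count=%d name=%s\n", count, name);
    return 0;
}
```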

Author: "Eric Raymond" Tags: "Software, GPSD"
Date: Tuesday, 23 Sep 2014 13:22

I’ve been blog-silent the last couple of days because I’ve been chasing down the bug I mentioned in Request for help – I need a statistician.

I have since found and fixed it. Thereby hangs a tale, and a cautionary lesson.

Going in, my guess was that the problem was in the covariance-matrix algebra used to compute the DOP (dilution-of-precision) figures from the geometry of the satellite skyview.

(I was originally going to write a longer description than that sentence – but I ruefully concluded that if that sentence was a meaningless noise to you the longer explanation would be too. All you mathematical illiterates out there can feel free to go off and have a life or something.)

My suspicion particularly fell on a function that did partial matrix inversion. Because I only need the diagonal elements of the inverted matrix, the most economical way to compute them seemed to be by minor subdeterminants rather than a whole-matrix method like Gauss-Jordan elimination. My guess was that I’d fucked that up in some fiendishly subtle way.

The one clue I had was a broken symmetry. The results of the computation should be invariant under permutations of the rows of the matrix – or, less abstractly, it shouldn’t matter which order you list the satellites in. But it did.
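For the mathematically literate, the textbook identities behind the last two paragraphs (not code lifted from GPSD): with H the matrix whose rows are the unit line-of-sight vectors to the satellites augmented by a clock column, the DOP figures are square roots of sums of diagonal elements of the inverted normal matrix, and a diagonal element of an inverse can be computed from a single cofactor.

\[
Q = (H^{\top}H)^{-1}, \qquad
\mathrm{PDOP} = \sqrt{Q_{11} + Q_{22} + Q_{33}}, \qquad
(A^{-1})_{ii} = \frac{C_{ii}}{\det A}
\]

Permuting the rows of H (that is, reordering the satellites) leaves H^T H unchanged, which is exactly why the results must be order-invariant.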

How did I notice this? Um. I was refactoring some code – actually, refactoring the data structure the skyview was kept in. For hysterical raisins (read: historical reasons) the azimuth/elevation and signal-strength figures for the sats had been kept in parallel integer arrays. There was a persistent bad smell about the code that managed these arrays that I thought might be cured if I morphed them into an array of structs, one struct per satellite.

Yeeup, sure enough. I flushed two minor bugs out of cover. Then I rebuilt the interface to the matrix-algebra routines. And the sats got fed to them in a different order than previously. And the regression tests broke loudly, oh shit.

There are already a couple of lessons here. First, have a freakin’ regression test. Had I not I might have sailed on in blissful ignorance that the code was broken.

Second, though “If it ain’t broke, don’t fix it” is generally good advice, it is overridden by this: If you don’t know that it’s broken, but it smells bad, trust your nose and refactor the living hell out of it. Odds are good that something will shake loose and fall on the floor.

This is the point at which I thought I needed a statistician. And I found one – but, I thought, to constrain the problem nicely before I dropped it on him, it would be a good idea to isolate out the suspicious matrix-inversion routine and write a unit test for it. Which I did. And it passed with flying colors.

While it was nice to know I had not actually screwed the pooch in that particular orifice, this left me without a clue where the actual bug was. So I started instrumenting, testing for the point in the computational pipeline where row-symmetry broke down.

Aaand I found it. It was a stupid little subscript error in the function that filled the covariance matrix from the satellite list – k in two places where i should have been. Easy mistake to make, impossible for any of the four static code checkers I use to see, and damnably difficult to spot with the Mark 1 eyeball even if you know that the bug has to be in those six lines somewhere. Particularly because the wrong code didn’t produce crazy numbers; they looked plausible, though the shape of the error volume was distorted.
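For concreteness, here is the shape of that class of mistake in a hypothetical reconstruction; the names are invented and this is not the actual GPSD code.

```c
#include <math.h>

struct satpos { double azimuth, elevation; };   /* invented for illustration */

/* Fill the geometry matrix from the satellite list. */
static void fill_geometry(double h[][4], const struct satpos *sats, int nsats)
{
    int k = 0;    /* index left over from an earlier loop in the real code */
    for (int i = 0; i < nsats; i++) {
        h[i][0] = -cos(sats[i].elevation) * sin(sats[i].azimuth);
        h[i][1] = -cos(sats[i].elevation) * cos(sats[i].azimuth);
        h[i][2] = -sin(sats[k].elevation);      /* the bug: should be sats[i] */
        h[i][3] = 1.0;
    }
}
```

The wrong subscript still produces numbers of a plausible magnitude, which is why nothing short of an invariance check catches it.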

Now let’s review my mistakes. There were two, a little one and a big one. The little one was making a wrong guess about the nature of the bug and thinking I needed a kind of help I didn’t. But I don’t feel bad about that one; ex ante it was still the most reasonable guess. The highest-complexity code in a computation is generally the most plausible place to suspect a bug, especially when you know you don’t grok the algorithm.

The big mistake was poor test coverage. I should have written a unit test for the specialized matrix inverter when I first coded it – and I should have tested for satellite order invariance.

The general rule here is: to constrain defects as much as possible, never let an invariant go untested.
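A minimal sketch of what such a test can look like, with hypothetical names for the interface under test (the real GPSD regression machinery is organized differently):

```c
#include <assert.h>
#include <math.h>

struct satpos { double azimuth, elevation; };
struct dop { double pdop, hdop, vdop; };

/* Assumed interface for illustration: compute DOPs from a skyview. */
struct dop compute_dop(const struct satpos *sats, int nsats);

/* Invariant: reordering the satellite list must not change the answer.
 * Assumes nsats <= 32 for the scratch array. */
static void test_dop_order_invariance(const struct satpos *sats, int nsats)
{
    struct satpos rotated[32];
    for (int i = 0; i < nsats; i++)
        rotated[i] = sats[(i + 1) % nsats];

    struct dop a = compute_dop(sats, nsats);
    struct dop b = compute_dop(rotated, nsats);
    assert(fabs(a.pdop - b.pdop) < 1e-9);
    assert(fabs(a.hdop - b.hdop) < 1e-9);
    assert(fabs(a.vdop - b.vdop) < 1e-9);
}
```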

Author: "Eric Raymond" Tags: "Software, GPSD"
Date: Sunday, 21 Sep 2014 11:45

In a blog post on Computational Knowledge and the Future of Pure Mathematics Stephen Wolfram lays out a vision that is in many ways exciting and challenging. What if all of mathematics could be expressed in a common formal notation, stored in computers so it is searchable and amenable to computer-assisted discovery and proof of new theorems?

As a former mathematician who is now a programmer, I have, perhaps inevitably, had similar dreams for a very long time; anyone with that common background would imagine broadly the same things. Like Dr. Wolfram, I have thought carefully not merely about the knowledge representation and UI issues in such a project, but also about the difficulties in staffing and funding it. So it was with a feeling more of recognition than anything else that I received much of the essay.

To his great credit, Dr. Wolfram has done much – more than anyone else – to bring this vision towards reality. Mathematica and Wolfram Alpha are concrete steps towards it, and far from trivial ones. They show, I think, that the vision is possible and could be achieved with relatively modest funding – less than (say) the budget of a typical summer-blockbuster movie.

But there is one question that looms unanswered in Dr. Wolfram’s call to action. Let us suppose that we think we have all of the world’s mathematics formalized in a huge database of linked theorems and proof sequences, diligently being crawled by search agents and inference engines. In tribute to Wolfram Alpha, let us call this system “Omega”. How, and why, would we trust Omega?

There are at least three levels of possible error in such a system. One would be human error in entering mathematics into it (a true theorem is entered incorrectly). Another would be errors in human mathematics (a false theorem is entered correctly). A third would be errors in the search and inference engines used to trawl the database and generate new proofs to be added to it.

Errors of the first two kinds would eventually be discovered by using inference engines to consistency-check the entire database (unless the assertions in it separate into disconnected cliques, which seems unlikely). It was already clear to me thirty years ago when I first started thinking seriously about this problem that sanity-checking would have to be run as a continuing background process responding to every new mathematical assertion entered: I am sure this requirement has not escaped Dr. Wolfram.

The possibility of errors of the third kind – bugs in the inference engine(s) – is more troubling. Such bugs could mask errors of the first two kinds, lead to the generation of incorrect mathematics, and corrupt the database. So we have a difficult verification problem here; we can trust the database (eventually) if we trust the inference engines, but how do we know we can trust the inference engines?

Mathematical thinking cannot solve this problem, because the most likely kind of bug is not a bad inference algorithm but an incorrect implementation of a good one. Notice what has happened here, though; the verification problem for Omega no longer lives in the rarefied realm of pure mathematics but the more concrete province of software engineering.

As such, there are things that experience can teach us. We don’t know how to do perfect software engineering, but we do know what the best practices are. And this is the point at which Dr. Wolfram’s proposal to build Omega on Mathematica and Wolfram Alpha begins to be troubling. These are amazing tools, but they’re closed source. They cannot be meaningfully audited for correctness by anyone outside Wolfram Research. Experience teaches us that this is a danger sign, a fragile single point of failure, and simply not tolerable in any project with the ambitions of Omega.

I think Dr. Wolfram is far too intelligent not to understand this, which makes his failure to address the issue the more troubling. For Omega to be trusted, the entire system will need to be transparent top to bottom. The design, the data representations, and the implementation code for its software must all be freely auditable by third-party mathematical topic experts and mathematically literate software engineers.

I would go so far as to say that any mathematician or software engineer asked to participate in this project is ethically required to insist on complete auditability and open source. Otherwise, what has the tradition of peer review and process transparency in science taught us?

I hope that Dr. Wolfram will address this issue in a future blog post. And I hope he understands that, for all his brilliance and impressive accomplishments, “Trust my secret code” will not – and cannot – be an answer that satisfies.

Author: "Eric Raymond" Tags: "General"
Date: Friday, 12 Sep 2014 16:45

A Call To Duty (David Weber, Timothy Zahn; Baen Books) is a passable extension of Baen Books’ tent-pole Honorverse franchise. Though billed as by David Weber, it resembles almost all of Baen’s double-billed “collaborations” in that most of the actual writing was clearly done by the guy on the second line, with the first line there as a marketing hook.

Zahn has a bit of fun subverting at least one major trope of the subgenre; Travis Long is definitely not the kind of personality one expects as a protagonist. Otherwise all the usual ingredients are present in much the expected combinations. Teenager longing for structure in his life joins the Navy, goes to boot camp, struggles in his first assignment, has something special to contribute when the shit hits the fan. Also, space pirates!

Baen knows its business; there may not be much very original about this, but Honorverse fans will enjoy this book well enough. And for all its cliched quality, it’s more engaging than Zahn’s rather sour last outing, Soulminder, which I previously reviewed.

The knack for careful worldbuilding within a franchise’s canonical constraints that Zahn exhibited in his Star Wars tie-ins is deployed here, where details of the architecture of Honorverse warships become significant plot elements. Also we get a look at Manticore in its very early years, with some characters making the decisions that will grow it into the powerful star kingdom of Honor Harrington’s lifetime.

For these reasons, if no others, Honorverse completists will want to read this one too.

Author: "Eric Raymond" Tags: "Review, Science Fiction"
Date: Thursday, 11 Sep 2014 21:39

Infinite Science Fiction One (edited by Dany G. Zuwen and Joanna Jackson; Infinite Acacia) starts out rather oddly, with Zuwen’s introduction in which, though he says he’s not religious, he connects his love of SF with having read the Bible as a child. The leap from faith narratives to a literature that celebrates rational knowability seems jarring and a bit implausible.

That said, the selection of stories here is not bad. Higher-profile editors have done worse, sometimes in anthologies I’ve reviewed.

Janka Hobbs’s Real is a dark, affecting little tale of a future in which people who don’t want the mess and bother of real children buy robotic child surrogates, and what happens when a grifter invents a novel scam.

Tim Majors’s By The Numbers is a less successful exploration of the idea of the quantified self – a failure, really, because it contains an impossible oracle-machine in what is clearly intended to be an SF story.

Elizabeth Bannon’s Tin Soul is a sort of counterpoint to Real in which a man’s anti-robot prejudices destroy his ability to relate to his prosthetically-equipped son.

P. Anthony Ramanauskas’s Six Minutes is a prison-break story told from the point of view of a monster, an immortal mind predator who steals the bodies of humans to maintain existence. It’s well written, but diminished by the author’s failure to actually end it and dangling references to a larger setting that we are never shown. Possibly a section from a larger work in progress?

John Walters’s Matchmaker works a familiar theme – the time traveler at a crisis, forbidden to interfere or form attachments – unfortunately, to no other effect than an emotional tone painting. Competent writing does not save it from becoming maudlin and trivial.

Nick Holburn’s The Wedding is a creepy tale of a wedding disrupted by an undead spouse. Not bad on its own terms, but I question what it’s doing in an SF anthology.

Jay Wilburn’s Slow is a gripping tale of an astronaut fighting off being consumed by a symbiote that has at least temporarily saved his life. Definitely SF; not for the squeamish.

Rebecca Ann Jordan’s Gospel Of is strange and gripping. An exile with a bomb strapped to her chest, a future spin on the sacrificed year-king, and a satisfying twist in the ending.

Dan Devine’s The Silent Dead is old-school in the best way – could have been an Astounding story in the 1950s. The mass suicide of a planetary colony has horrifying implications the reader may guess before the ending…

Matthew S. Dent’s Nothing Besides Remains carries forward another old-school tradition – a robot come to sentience yearning for its lost makers. No great surprises here, but a good exploration of the theme.

William Ledbetter’s The Night With Stars is very clever, a sort of anthropological reply to Larry Niven’s classic The Magic Goes Away. What if Stone-Age humans relied on electromagnetic features of their environment – and then, due to a shift in the geomagnetic field, lost them? Well done.

Doug Tidwell’s Butterflies is, alas, a textbook example of what not to do in an SF story. At best it’s a trivial finger exercise about an astronaut going mad. There’s no reveal anywhere, and it contradicts the actual facts of history without explanation; no astronaut did this during Kennedy’s term.

Michaele Jordan’s Message of War is a well-executed tale of weapons that can wipe a people from history, and how they might be used. Subtly horrifying even if we are supposed to think of the wielders as the good guys.

Liam Nicolas Pezzano’s Rolling By in the Moonlight starts well, but turns out to be all imagery with no point. The author has an English degree; that figures: this piece smells of literary status envy, a disease the anthology is otherwise largely and blessedly free of.

J.B. Rockwell’s Midnight also starts well and ends badly. An AI on a terminally damaged warship struggling to get its cryopreserved crew launched to somewhere they might live again, that’s a good premise. Too bad it’s wasted on empty sentimentality about cute robots.

This anthology is only about 50% good, but the good stuff is quite original and the less good is mostly just defective SF rather than being anti-SF infected with literary status envy. On balance, better value than some higher-profile anthologies with more pretensions.

Author: "Eric Raymond" Tags: "General"
Date: Thursday, 11 Sep 2014 09:11

Collision of Empires (Prit Buttar; Osprey Publishing) is a clear and accessible history that attempts to address a common lack in accounts of the Great War that began a century ago this year: they tend to be centered on the Western Front and the staggering meat-grinder that static trench warfare became as outmoded tactics collided with the reality of machine guns and indirect-fire artillery.

Concentration on the Western Front is understandable in the U.S. and England; the successor states of the Western Front’s victors have maintained good records, and nationals of the English-speaking countries were directly involved there. But in many ways the Eastern Front story is more interesting, especially in the first year that Buttar chooses to cover – less static, and with a sometimes bewilderingly varied cast. And, arguably, larger consequences. The war in the east eventually destroyed three empires and put Lenin’s Communists in power in Russia.

Prit Buttar does a really admirable job of illuminating the thinking of the German, Austrian, and Russian leadership in the run-up to the war – not just at the diplomatic level but in the ways that their militaries were struggling to come to grips with the implications of new technology. The extensive discussion of internecine disputes over military doctrine in the three officer corps involved is better than anything similar I’ve seen elsewhere.

Alas, the author’s gift for lucid exposition falters a bit when it comes to describing actual battles. Ted Raicer did a better job of this in 2010’s Crowns In The Gutter, supported by a lot of rather fine-grained movement maps. Without these, Buttar’s narrative tends to bog down in a confusing mess of similar unit designations and vaguely comic-operatic Russo-German names.

Still, the effort to follow it is worthwhile. Buttar is very clear on the ways that flawed leadership, confused objectives and wishful thinking on all sides engendered a war in which there could be no clear-cut victory short of the utter exhaustion and collapse of one of the alliances.

On the Eastern Front, as on the Western, soldiers fought with remarkable courage for generals and politicians who – even on the victorious side – seriously failed them.

Author: "Eric Raymond" Tags: "Review"
Date: Monday, 08 Sep 2014 04:06

The Abyss Beyond Dreams (Peter F. Hamilton, Random House/Del Rey) is a sequel set in the author’s Commonwealth universe, which earlier included one duology (Pandora’s Star, Judas Unchained) and a trilogy (The Dreaming Void, The Temporal Void, The Evolutionary Void). It brings back one of the major characters (the scientist/leader Nigel Sheldon) on a mission to discover the true nature of the Void at the heart of the Galaxy.

The Void is a pocket universe which threatens to enter an expansion phase that would destroy everything. It is a gigantic artifact of some kind, but neither its builders nor its purpose is known. Castaway cultures of humans live inside it, gifted with psionic powers in life and harvested by the enigmatic Skylords in death. And Nigel Sheldon wants to know why.

This is space opera and planetary romance pulled off with almost Hamilton’s usual flair. I say “almost” because the opening sequence, though action-packed, comes off as curiously listless. Nigel Sheldon’s appearance rescues the show, and we are shortly afterwards pitched into an entertaining tale of courage and revolution on a Void world. But things are not as they seem, and the revolutionaries are being manipulated for purposes they cannot guess…

The strongest parts of this book show off Hamilton’s worldbuilding imagination and knack for the telling detail. Yes, we get some insight into what the Void actually is, and an astute reader can guess more. But the final reveal will await the second book of this duology.

Author: "Eric Raymond" Tags: "Review, Science Fiction"
Date: Sunday, 07 Sep 2014 20:35

Sometimes reading code is really difficult, even when it’s good code. I have a challenge for all you hackers out there…

cvs-fast-export translates CVS repositories into a git-fast-export stream. It does a remarkably good job, considering that (a) the problem is hard and grotty, with weird edge cases, and (b) the codebase is small and written in C, which is not the optimal language for this sort of thing.

It does a remarkably good job because Keith Packard wrote most of it, and Keith is a brilliant systems hacker (he codesigned X and wrote large parts of it). I wrote most of the parts Keith didn’t, and while I like to think my contribution is solid it doesn’t approach his in algorithmic density.

Algorithmic density has a downside. There are significant parts of Keith’s code I don’t understand. Sadly, Keith no longer understands them either. This is a problem, because there are a bunch of individually small issues which (I think) add up to: the core code needs work. Right now, neither I nor anyone else has the knowledge required to do that work.

I’ve just spent most of a week trying to acquire and document that knowledge. The result is a file called “hacking.asc” in the cvs-fast-export repository. It documents what I’ve been able to figure out about the code. It also lists unanswered questions. But it is incomplete.

It won’t be complete until someone can read it and know how to intelligently modify the heart of the program – a function called rev_list_merge() that does the hard part of merging cliques of CVS per-file commits into a changeset DAG.

The good news is that I’ve managed to figure out and document almost everything else. A week ago, the code for analyzing CVS masters into in-core data objects was trackless jungle. Now, pretty much any reasonably competent C systems programmer could read hacking.asc and the comments and grasp what’s going on.

More remains to be done, though, and I’ve hit a wall. The problem needs a fresh perspective, ideally more than one. Accordingly, I’m requesting help. If you want a real challenge in comprehending C code written by a master programmer – a work of genius, seriously – dive in.

https://gitorious.org/cvs-fast-export/

There’s the repository link. Get the code; it’s not huge, only 10KLOC, but it’s fiendishly clever. Read it. See what you can figure out that isn’t already documented. Discuss it with me. I guarantee you’ll find it an impressive learning experience – I have, and I’ve been writing C for 30 years.

This challenge is recommended for intermediate to advanced C systems programmers, especially those with an interest in the technicalia of version-control systems.

Author: "Eric Raymond" Tags: "Software"
Date: Thursday, 04 Sep 2014 00:50

Yesterday I shipped cvs-fast-export 1.15, with a significant performance improvement produced by replacing a naive O(n**3) sort with a properly tuned O(n log n) version.

In ensuing discussion on G+, one of my followers there asked if I thought this was likely to produce a real performance improvement, as in small inputs the constant setup time of a cleverly tuned algorithm often dominates the nominal savings.

This is one of those cases where an intelligent question elicits knowledge you didn’t know you had. I discovered that I do believe strongly that cvs-fast-export’s workload is dominated by large repositories. The reason is a kind of adverse selection phenomenon that I think is very general to old technologies with high exit costs.

The rest of this blog post will use CVS as an example of the phenomenon, and may thus be of interest even to people who don’t specifically care about version-control systems.

Cast your mind back to the point at which CVS was definitely superseded by better VCS designs. It doesn’t matter for this discussion exactly when that point was, but you can place it somewhere between 2000 and 2004 based on when you think Subversion went from a beta program to a production tool.

At that point there were lots of CVS repositories around, greatly varying in size and complexity. Some were small and simple, some large and ugly. By “ugly” I mean full of Things That Should Not Be – tags not corresponding to coherent changesets, partially merged import branches, deleted files for which the masters recording older versions had been “cleaned up”, and various other artifacts that would later cause severe headaches for anyone trying to convert the repositories to a modern VCS.

In general, size and ugliness correlated well with project age. There are exceptions, however. When I converted the groff repository from CVS to git I was braced for an ordeal; groff is quite an old project. But the maintainer and his devs had been, it turned out, very careful and disciplined, and committed none of the sloppinesses that commonly lead to nasty artifacts.

So, at the point that people started to look seriously at moving off CVS, there was a large range of CVS repo sizes out there, with difficulty and fidelity of up-conversion roughly correlated to size and age.

The result was that small projects (and well-disciplined larger projects resembling groff) converted out early. The surviving population of CVS repositories became, on average, larger and gnarlier. After ten years of adverse selection, the CVS repositories we now have left in the wild tend to be the very largest and grottiest kind, usually associated with projects of venerable age.

GNUPLOT and various BSD Unixes stand out as examples. We have now, I think, reached the point where the remaining CVS conversions are in general huge, nasty projects that will require heroic effort with even carefully tuned and optimized tools. This is not a regime in which the constant startup cost of an optimized sort is going to dominate.

At the limit, there may be some repositories that never get converted because the concentrated pain associated with doing that overwhelms any time-discounted estimate of the costs of using obsolescent tools – or even the best tools may not be good enough to handle their sheer bulk. Emacs was almost there. There are hints that some of the BSD Unix repositories may be there already – I know of failed attempts, and tried to assist one such failure.

I think you can see this kind of adverse selection effect in survivals of a lot of obsolete technology. Naval architecture is one non-computing field where it’s particularly obvious. Surviving obsolescent ships tend to be large and ugly rather than small and ugly, because the capital requirement to replace the big ones is harder to swallow.

Has anyone coined a name for this phenomenon? Maybe we ought to.

Author: "Eric Raymond" Tags: "Software, version-control"
Date: Wednesday, 03 Sep 2014 05:35

Better Identification of Viking Corpses Reveals: Half of the Warriors Were Female insists an article at tor.com. It’s complete bullshit.

What you find when you read the linked article is an obvious, though as it turns out superficial, problem. The linked research doesn’t say what the article claims. What it establishes is that a hair less than half of Viking migrants were female, which is no surprise to anyone who’s been paying attention. The leap from that to “half the warriors were female” is unjustified and quite large.

There’s a deeper problem the article is trying to ignore or gaslight out of existence: reality is, at least where pre-gunpowder weapons are involved, viciously sexist.

It happens that I know a whole lot from direct experience about fighting and training with contact weapons – knives, swords, and polearms in particular. I do this for fun, and I do it in training environments that include women among the fighters.

I also know a good deal about Viking archeology – and my wife, an expert on Viking and late Iron Age costume who corresponds on equal terms with specialist historians, may know more than I do. (Persons new to the blog might wish to read my review of William Short’s Viking Weapons and Combat.) We’ve both read saga literature. We both have more than a passing acquaintance with the archeological and other evidence from other cultures historically reported to field women in combat, such as the Scythians, and have discussed it in depth.

And I’m calling bullshit. Males have, on average, about a 150% advantage in upper-body strength over females. It takes an exceptionally strong woman to match the ability of even the average man to move a contact weapon with power and speed and precise control. At equivalent levels of training, with the weight of real weapons rather than boffers, that strength advantage will almost always tell.

Supporting this, there is only very scant archeological evidence for female warriors (burials with weapons). There is almost no such evidence from Viking cultures, and what little we have is disputed; the Scythians and earlier Germanics from the Migration period have substantially more burials that might have been warrior women. Tellingly, they are almost always archers.

I’m excluding personal daggers for self-defense here and speaking of the battlefield contact weapons that go with the shieldmaidens of myth and legend. I also acknowledge that a very few exceptionally able women can fight on equal terms with men. My circle of friends contains several such exceptional women; alas, this tells us nothing about woman as a class but much about how I select my friends.

But it is a very few. And if a pre-industrial culture had chosen to train more than a tiny fraction of its women as shieldmaidens, it would have lost out to a culture that protected its women and used their reproductive capacity to birth more male warriors. Brynhilde may be a sexy idea, but she’s a bioenergetic gamble that is near certain to be a net waste.

Firearms change all this, of course – some of the physiological differences that make women inferior with contact weapons are actual advantages at shooting (again I speak from experience, as I teach women to shoot). So much so that anyone who wants to suppress personal firearms is objectively anti-female and automatically oppressive of women.

Author: "Eric Raymond" Tags: "Martial Arts, Science"
Date: Thursday, 28 Aug 2014 12:31

Our new cat Zola, it appears, has a mysterious past. The computer that knows about the ID chip embedded under his skin thinks he’s a dog.

There’s more to the story. And it makes us think we may have misread Zola’s initial behavior. I’m torn between wishing he could tell us what he’d been through, and maybe being thankful that he can’t. Because if he could, I suspect I might experience an urge to go punch someone’s lights out that would be bad for my karma.

On Zola’s first vet visit, one of the techs did a routine check and discovered that Zola had had an ID chip implanted under his skin. This confirmed our suspicion that he’d been raised by humans rather than being feral or semi-feral. Carol, our contact at PALS (the rescue network we got Zola from) put some more effort into trying to trace his background.

We already knew that PALS rescued Zola from an ASPCA shelter in Cumberland County, New Jersey, just before he would have been euthanized. Further inquiry disclosed that (a) he’d been dumped at the shelter by a human, and (b) he was, in Carol’s words, “alarmingly skinny” – they had to feed him up to a normal weight.

The PALS people didn’t know he was chipped. When we queried Home Again, the chip-tracking outfit, the record for the chip turned out to record the carrier as a dog. The staffer my wife Cathy spoke with at Home Again thought that was distinctly odd. This is not, apparently, a common sort of confusion.

My wife subsequently asked Home Again to contact the person or family who had Zola chipped and request that the record be altered to point to us. (This is a routine procedure for them when an animal changes owners.)

We got a reply informing us that permission for the transfer was refused.

These facts indicate to us that somewhere out there, there is someone who (a) got Zola as a kitten, (b) apparently failed to feed him properly, (c) dumped him at a shelter, and now (d) won’t allow the chip record to be changed to point to his new home.

This does not add up to a happy picture of Zola’s kittenhood. It is causing us to reconsider how we evaluated his behavior when we first met him. We thought he was placid and dignified – friendly but a little reserved.

Now we wonder – because he isn’t “placid” any more. He scampers around in high spirits. He’s very affectionate, even a bit needy sometimes. (He’s started to lick our hands occasionally during play.) Did we misunderstand? Was his reserve a learned fear of mistreatment? We don’t know for sure, but it has come to seem uncomfortably plausible.

There’s never any good reason for mistreating a cat, but it seems like an especially nasty possibility when the cat is as sweet-natured and human-friendly as Zola is. He’s not quite the extraordinarily loving creature Sugar was, but his Coon genes are telling. He thrives on affection and returns it more generously every week.

I don’t know if we’ll ever find out anything more. Nobody at PALS or Home Again or our vet has a plausible theory about why Zola is carrying an ID chip registered to a dog, nor why his former owners won’t OK a transfer.

We’re just glad he’s here.

Author: "Eric Raymond" Tags: "General, Zola"
Date: Wednesday, 27 Aug 2014 08:58

I just had a rather hair-raising experience with a phase-of-moon-dependent bug.

I released GPSD 3.11 this last Saturday (three days ago) to meet a deadline for a Debian freeze. Code tested ninety-six different ways, run through four different static analyzers, the whole works. Because it was a hurried release I deliberately deferred a bunch of cleanups and feature additions in my queue. Got it out on time and it’s pretty much all good – we’ve since turned up two minor build failures in two unusual feature-switch cases, and one problem with the NTP interface code that won’t affect reasonable hardware.

I’ve been having an extremely productive time since chewing through all the stuff I had deferred. New features for gpsmon, improvements for GPSes watching GLONASS birds, a nice space optimization for embedded systems, some code to prevent certain false-match cases in structured AIS Type 6 and Type 8 messages, merging some Android port tweaks, a righteous featurectomy or two. Good clean fun – and of course I was running my regression tests frequently and noting when I’d done so in my change comments.

Everything was going swimmingly until about two hours ago. Then, as I was verifying a perfectly innocent-appearing tweak to the SiRF-binary driver, the regression tests went horribly, horribly wrong. Not just the SiRF binary testloads, all of them.

My friends, do you know what it looks like when the glibc detects a buffer overflow at runtime? Pages and pages of hex garble, utterly incomprehensible and a big flare-lit clue that something bad done happened.

“Yoicks!” I muttered, and backed out the latest change. Ran “scons check” again. Kaboom! Same garble. Wait – I’d run regressions successfully on that revision just a few minutes previously, or so I thought.

Don’t panic. Back up to the last revision where the change comment includes the reassuring line “All regression tests passed.” Rebuild. “scons check”. Aaaand…kaboom!

Oh shit oh dear. Now I have real trouble. That buffer overflow has apparently been lurking in ambush for some time, with regression tests passing despite it because the phase of the moon was wrong or something.

The first thing you do in this situation is try to bound the damage and hope it didn’t ship in the last release. I dropped back to the release 3.11 revision, rebuilt and tested. No kaboom. Phew!

These are the times when git bisect is your friend. Five test runs later I found the killer commit – a place where I had tried recovering from bad file descriptor errors in the daemon’s main select call (which can happen if an attached GPS dies under pessimal circumstances) and garbage-collecting the storage for the lost devices.

Once I had the right commit it was not hard to zero in on the code that triggered the problem. By inspection, the problem had to be in a particular 6-line loop that was the meat of the commit. I checked out the head version and experimentally conditioned out parts of it until I had the kaboom isolated to one line.

It was a subtle – and entirely typical – sort of systems-programming bug. The garbage-collection code iterated over the array of attached devices conditionally freeing them. What I forgot when I coded this was that that sort of operation is only safe on device-array slots that are currently allocated and thus contain live data. The test operation on a dead slot – an FD_ISSET() – was the kaboomer.
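The shape of the fix, as a hedged sketch with invented names (not the actual GPSD source): guard every per-slot test with the allocation check so stale descriptors are never handed to FD_ISSET().

```c
#include <sys/select.h>

#define MAX_DEVICES 8
struct device { int allocated; int fd; };     /* invented for illustration */

static void collect_dead_devices(struct device devices[], fd_set *rfds)
{
    for (int i = 0; i < MAX_DEVICES; i++) {
        /* Without this guard, devices[i].fd in a dead slot is stale garbage;
         * FD_ISSET() with a wild descriptor indexes outside the fd_set,
         * which is what glibc's fortification reported as a buffer overflow. */
        if (!devices[i].allocated)
            continue;
        if (FD_ISSET(devices[i].fd, rfds)) {
            /* garbage-collect the slot (details elided) */
            devices[i].allocated = 0;
        }
    }
}
```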

The bug was random because the pattern of stale data in the dead slots was not predictable. It had to be just right for the kaboom to happen. The kaboom didn’t happen for nearly three days, during which I am certain I ran the regression tests well over 20 times a day. (Wise programmers pay attention to making their test suites fast, so they can be run often without interrupting concentration.)

It cannot be said too often: version control is your friend. Fast version control is damn near your best friend, with the possible exception of a fast and complete test suite. Without these things, fixing this one could have ballooned from 45 minutes of oh-shit-oh-dear to a week – possibly more – of ulcer-generating agony.

Version control is maybe old news, but lots of developers still don’t invest as much effort on their test suites as they should. I’m here to confirm that it makes programming a hell of a lot less hassle when you build your tests in parallel with your code, do the work to make them cover well and run fast, then run them often. GPSD has about 100 tests; they run in just under 2 minutes, and I run them at least three or four times an hour.

This puts out little fires before they become big ones. It means I get to spend less time debugging and more time doing fun stuff like architecture and features. The time I spent on them has been multiply repaid. Go and do thou likewise.

Author: "Eric Raymond" Tags: "Software, GPSD"
Date: Wednesday, 27 Aug 2014 00:22

The newest addition to Rootless Root:

On one occasion, as Master Foo was traveling to a conference with a few of his senior disciples, he was accosted by a hardware designer.

The hardware designer said: “It is rumored that you are a great programmer. How many lines of code do you write per year?”

Master Foo replied with a question: “How many square inches of silicon  do you lay out per year?”

“Why…we hardware designers never measure our work in that way,” the man said.

“And why not?” Master Foo inquired.

“If we did so,” the hardware designer replied, “we would be tempted to design chips so large that they cannot be fabricated – and, if they were fabricated, their overwhelming complexity would make it impossible to generate proper test vectors for them.”

Master Foo smiled, and bowed to the hardware designer.

In that moment, the hardware designer achieved enlightenment.

Author: "Eric Raymond" Tags: "Hacker Culture, Software"