The Ring Of Fire books are a mixed bag. Sharecropped by many authors, ringmastered by Eric Flint, they range from plodding historical soap opera to sharp, clever entertainments full of crunchy geeky goodness for aficionados of military and technological history.
When Flint’s name is on the book you can generally expect the good stuff. So it proves in the latest outing, 1636: Commander Cantrell in the West Indies, a fun ride that’s (among other things) an affectionate tribute to C.S. Forester’s Hornblower novels and age-of-sail adventure fiction in general. (Scrupulously I note that I’m personally friendly with Flint, but this is exactly because he’s good at writing books I like.)
It is 1636 in the shook-up timeline birthed by the town of Grantville’s translocation to the Thuringia of 1632. Eddie Cantrell is a former teenage D&D player from uptime who became a peg-legged hero of the Baltic War and then husband of the not-quite-princess Anne Cathrine of Denmark. Now the United States of Europe is sending him to the Caribbean with an expeditionary force, Flotilla X-Ray, to seize the island of Trinidad from the Spanish and harvest oil desperately needed by Grantville’s industry.
But it’s not a simple military mission. There are tensions among the factions in the allied fleet – the United States of Europe, the Danes, the Dutch, and a breakaway Spanish faction in the Netherlands. And the Wild Geese – exiled Irish mercenaries under the charismatic Earl Tyrconnell – have their own agenda. Cardinal Richelieu’s agents are maneuvering against the whole enterprise. And as the game opens, nobody in the fleet knows about the desperate, hidden Dutch refugee colony on Eustatia…
If the book has a fault, it’s that authors Flint and Gannon love their intricate wheels-within-wheels plotting and elaborate political intrigue a little bit too much. It’s fun to watch those gears turning for a while, but even readers who (like me) relish that sort of thing may find themselves getting impatient for stuff to start blowing up already by thirty chapters in.
No fear, we do get our rousing sea battles. With novel twists, because the mix of Grantville’s uptime technology with the native techniques of the 1600s takes tactics in some strange directions. I particularly chuckled at the descriptions of captive hot-air balloons being used as ship-launched observation platforms, a workable expedient never tried in our history. As usual, Flint (a former master machinist) writes with a keen sense of how applied technology works – and, too often, fails.
If some of the character developments and romantic pairings are maybe a little too easy to see coming, well, nobody reads fiction like this for psychological depth or surprise. It’s a solid installment in the ongoing series. Oh, and with pirates too. Arrr. I’ll read the next one.
I’m not, in general, a fan of David Drake’s writing; most of his output is grimmer and far more carnographic than I care to deal with. I’ve made an exception for his RCN series because they tickle my fondness for classic Age-of-Sail adventure fiction and its pastiches, exhibiting Drake’s strengths (in particular, his deep knowledge of history) while dialing back on the cruelty and gore.
Drake’s sources are no mystery to anyone who has read Patrick O’Brian’s Aubrey-Maturin series; Daniel Leary and his companion-in-arms Adele Mundy are obvious takes on the bumptious Jack Aubrey and physician/naturalist/spy Stephen Maturin. Drake expends great ingenuity in justifying a near-clone of the Napoleonic-era British Navy in a far future with FTL drives. And to his credit, the technological and social setting is better realized than in most exercises of this kind. It compares well in that respect to, for example, David Weber’s Honor Harrington sequence.
The early books in the RCN series, accordingly, seemed fresh and inventive. Alas, in this tenth installment the series is losing its wind. We’ve already seen a couple of variations of the plot; Daniel and Adele traipse off in the Princess Cecile on a sort-out-the-wogs mission backed by Cinnabar’s spooks. In a wry nod to another genre trope, they’re looking for buried treasure.
The worldbuilding remains pretty good, and provided most of the few really good moments in this novel. Alas, as the action ground on I found the characters’ all-too-familiar tics wearing on me – Adele’s nihilistic self-loathing, Daniel’s cheerful shallow bloody-mindedness, Hogg’s bumpkin shtick, Miranda the ever-perfect girlfriend. The cardboard NPCs seem flatter than ever. The series always had strong elements of formula, but now Drake mostly seems to be just repeating himself. Even the battle scenes are rather perfunctory.
This is not a book that will draw in people who aren’t fans of its prequels. I’ll read the next one, but if it isn’t dramatically improved I’m done. Perhaps Drake is tiring of the premises; it may be time for him to bring things to a suitably dramatic close.
Nina D’Aleo’s The White List (Momentum Books) is a strange combination of success and failure. The premise is preposterous, the plotting is perfunctory – but the prose is zippy and entertaining and the characters acutely observed.
Genetic superhumans walk among us, most unaware that they have the ‘Shaman’ trait. A few awaken to manifest their powers, usually in violently destructive ways. Silvia Denaglia (code name: Silver!) is an operative for a super-secret agency that exists to capture and suppress them. But she has increasing doubts about the agency – its methods seem callous and its operatives careless of human life.
Of course there’s a conspiracy within a conspiracy, and the agency is tainted by evil, and there’s a rebel mutant good-guy underground, and her contact with it is the enigmatic man of her dreams. To call the worldbuilding cardboard would be an insult to honest cardboard, and anyone even marginally genre-savvy can see each breathless reveal in the plot coming from miles away. On these levels the book is dumb, dumber, dumbest – really embarrassingly bad.
And yet, it’s oddly charming. The prose is energetic and well-constructed. The characters work even though they’re trapped by the tropes they’re assigned to. There’s a good deal of wry comedy and quite a number of laugh-out-loud lines, especially in the earlier parts of the book. Ms. D’Aleo is not beyond hope; in fact I’d say she’s one half of a terrific writer. She would benefit from collaborating with somebody who knows how to do setting and plot but lacks her gift for the microlevel of writing.
Finally, a warning: This is one of those dishonestly-packaged books that is volume one of a series without being so labeled, and ends unresolved.
This is an update for friends of Sugar only; you know who you are.
Sugar may not have much time left. She’s been losing weight rapidly the last couple weeks, her appetite is intermittent, and she’s been having nausea episodes. She seems remarkably cheerful under the circumstances and still likes human company as much as ever, but … she really does seem old and frail now, which wasn’t so as recently as her 21st birthday in early February.
We’re bracing ourselves. If the rate she’s fading doesn’t change I think we’re going to have to euthanize her within six weeks or so. Possibly sooner. Possibly a lot sooner.
Sugar’s had a good long run. We’ll miss her a lot, but Cathy and I are both clear that it is our duty not to see her suffering prolonged unnecessarily.
If you’re a friend of Sugar and have any option to visit here to say your farewells, best do it immediately.
If you’ve somehow read this far without having met Sugar: I don’t normally blog about strictly personal things, but Sugar is a bit of an institution. She’s been a friend to the many hackers who have guested in our storied basement; I’ve seen her innocent joyfulness light up a lot of faces. We’re not the only people who will be affected by losing her.
The Heartbleed bug made the Washington Post. And that means it’s time for the reminder about things seen versus things unseen that I have to re-issue every couple of years.
Actually, this time around I answered it in advance, in an Ask Me Anything on Slashdot just about exactly a month ago. The following is a lightly edited and somewhat expanded version of that answer.
I actually chuckled when I read the rumor that the few anti-open-source advocates still standing were crowing about the Heartbleed bug, because I’ve seen this movie before after every serious security flap in an open-source tool. The script, which includes a bunch of people indignantly exclaiming that many-eyeballs is useless because bug X lurked in a dusty corner for Y months, is so predictable that I can anticipate a lot of the lines.
The mistake being made here is a classic example of Frederic Bastiat’s “things seen versus things unseen”. Critics of Linus’s Law overweight the bug they can see and underweight the high probability that equivalently positioned closed-source security flaws they can’t see are actually far worse, just so far undiscovered.
That’s how it seems to go whenever we get a hint of the defect rate inside closed-source blobs, anyway. As a very pertinent example, in the last couple months I’ve learned some things about the security-defect density in proprietary firmware on residential and small business Internet routers that would absolutely curl your hair. It’s far, far worse than most people understand out there.
Friends don’t let friends run factory firmware. You really do not want to be relying on anything less audited than OpenWRT or one of its kindred (DDWRT, or CeroWRT for the bleeding edge). And yet the next time any security flaw turns up in one of those open-source projects, we’ll see a replay of the movie with yet another round of squawking about open source not working.
Ironically enough this will happen precisely because the open-source process is working … while, elsewhere, bugs that are far worse lurk in closed-source router firmware. Things seen vs. things unseen…
Returning to Heartbleed, one thing conspicuously missing from the downshouting against OpenSSL is any pointer to a closed-source implementation that is known to have a lower defect rate over time. This is for the very good reason that no such empirically-better implementation is known to exist. What is the defect history on proprietary SSL/TLS blobs out there? We don’t know; the vendors aren’t saying. And we can’t even estimate the quality of their code, because we can’t audit it.
The response to the Heartbleed bug illustrates another huge advantage of open source: how rapidly we can push fixes. The repair for my Linux systems was a push-one-button fix less than two days after the bug hit the news. Proprietary-software customers will be lucky to see a fix within two months, and all too many of them will never see a fix at all.
The reason for this is that the business models for closed-source software pretty much require software updates to be an expensive, high-friction process hedged about with fees, approval requirements, and legal restrictions. Not like open-source-land, where we can ship a fix minutes after it’s compiled and tested because nobody is trying to collect rent on it.
Sunlight remains the best disinfectant. Open source is no guarantee of perfect results, but every controlled comparison that has been tried has shown that closed source is generally worse.
Finally and in 2014 perhaps most tellingly…if the source of the code you rely on is closed, how do you know your vendor hasn’t colluded with some spy shop to install a back door?
Some weeks ago I was tremendously amused by a report of an exchange in which a self-righteous vegetarian/vegan was attempting to berate somebody else for enjoying Kentucky Fried Chicken. I shall transcribe the exchange here:
> There is nothing sweet or savory about the rotting
> carcass of a chicken twisted and crushed with cruelty.
> There is nothing delicious about bloodmouth carnist food.
> How does it feel knowing your stomach is a graveyard

I'm sorry, but you just inadvertently wrote the most METAL description of eating a chicken sandwich in the history of mankind.

MY STOMACH IS A GRAVEYARD
NO LIVING BEING CAN QUENCH MY BLOODTHIRST
I SWALLOW MY ENEMIES WHOLE
ESPECIALLY IF THEY'RE KENTUCKY FRIED
I am no fan of KFC; I find it nasty and overprocessed. However, I found the vegan rant richly deserving of further mockery, especially after I did a little research and discovered that the words “bloodmouth” and “carnist” are verbal tokens for an entire ideology.
First thing I did was notify my friend Ken Burnside, who runs a T-shirt business, that I want a “bloodmouth carnist” T-shirt – a Spinal-Tap-esque parody of every stupid trash-metal tour shirt ever printed. With flaming skulls! And demonic bat-wings! And umlauts! Definitely umlauts.
Once Ken managed to stop laughing we started designing. Several iterations, a phone call, and a flurry of G+ messages later, we had the Bloodmouth Carnist T-shirt. Order yours today!
By the way, the skull on that shirt is me, sort of. Ken asked me to supply a photo reference, so my wife and I went to a steakhouse and she snapped a picture of me grinning maniacally over a slab of prime rib. For SCIENCE!
This had consequences. An A&D regular challenged me in private mail to explain why my consequentialist ethics don’t require me to be a vegetarian.
I broadly agree with Sam Harris’s position in The Moral Landscape that the ground of ethics has to be the minimization of pain. But I add to this that for pain to be of consequence to me it needs to have an experiencer who is at least potentially part of a community of reciprocal trust with me. Otherwise I would be necessarily paralyzed by guilt at killing bacteria every time I breathe.
The community of (potential) reciprocal trust includes all humans, possibly excepting a tiny minority of the criminally insane. It presumptively includes extraterrestrial sophonts, if we ever discover those. I think it is prudent and conservative (in the best sense of that term) to include borderline and near-borderline sophonts like higher primates, elephants, whales, dolphins, and squid. In principle it includes any animal that can solve the other-minds problem – which probably includes some of the brighter birds. I think this category can be roughly delimited using the mirror test.
For different reasons, the community of trust includes non-sophont human commensals. My cat, Sugar, for example, who shows only dim and occasional flashes of behavior that might indicate she models other minds, but has a strong mutual-trust relationship with my wife and myself. We know what to expect of each other; we like each other. This is a kind of reciprocity with ethical significance even though the cat is not sophont.
Another way to put this is to remember the Golden Rule, “Do as you would be done by” and ask: what animals have the ability to follow it, the right kind of informational complexity required to support it?
Cows, pigs, chickens, and fish are not part of my potential community of trust. They don’t have minds capable of it – the informational complexity required doesn’t seem to be there at all (though suspicions have occasionally been raised about pigs; I’ll revisit this point). Thus, their deaths are not intrinsically ethically significant to me, any more than harvesting a head of lettuce is.
Cruelty is a different matter. I think we ought not engage in cruelty because it is damaging and coarsening; people who make a habit of being cruel to non-sophonts are more likely to become cruel and dangerous to sophonts as well. Thus, merely killing a food animal is ethically neutral, but careless cruelty towards one is wrong and deliberate cruelty is evil.
(Nevertheless, I report that the above vegan rant inculcated in me a desire to stomp into a roomful of vegans and demand my food “twisted and crushed with cruelty”. I really don’t like it when people try to jerk me around by my sensibilities as though I’m some kind of idiot who is unreachable by reasoned argument. I find it insulting and want to punch back.)
These criteria could interact in interesting ways, and there are edge cases that need more investigation. I think I would have to stop eating pork if pigs could count the way that (for example) crows can – some pigs reportedly come close enough to passing the mirror test to worry me a little. I can readily imagine that pigs bred for intelligence might come near enough to sophont to be taboo to me. On the other hand, a friend who grew up on a hog farm assures me that pigs bred for meat are stone-stupid; according to her, it’s only wild pigs I should be even marginally concerned about.
Otters are another interesting case; they seem very playful and intelligent in the wild, occasionally use tools, and can form affectionate bonds with humans. I’d very much like to see them mirror-tested; in the meantime I’m quite willing to give them the benefit of the doubt and consider them taboo for killing and eating.
There you have it. The bloodmouth carnist theory of animal rights. Now if you’ll excuse me I’m going to go have a roast beef sandwich for lunch.
When I heard that Brendan Eich had been forced to resign his new job as CEO at Mozilla, my first thought was “Congratulations, gay activists. You have become the bullies you hate.”
On reflection, I think the appalling display of political thuggery we’ve just witnessed demands a more muscular response. Eich was forced out for donating $1000 to an anti-gay-marriage initiative? Then I think it is now the duty of every friend of free speech and every enemy of political bullying to pledge not only to donate $1000 to the next anti-gay-marriage initiative to come along, but to say publicly that they have done so as a protest against bullying.
This is my statement that I am doing so. I hope others will join me.
It is irrelevant whether we approve of gay marriage or not. The point here is that bullying must have consequences that deter the bullies, or we will get more of it. We must let these thugs know that they have sown dragon’s teeth, defeating themselves. Only in this way can we head off future abuses of similar kind.
And while I’m at it – shame on you, Mozilla, for knuckling under. I’ll switch to Chrome over this, if it’s not totally unusable.
A note from the publisher says Jeremy Rifkin himself asked them to ship me a copy of his latest book, The Zero Marginal Cost Society. It’s obvious why: in writing about the economics of open-source software, he thinks I provided one of the paradigmatic cases of what he wants to write about – the displacement of markets in scarce goods by zero-marginal-cost production. Rifkin’s book is an extended argument that this is a rising trend which will soon obsolesce not just capitalism as we have known it, but many forms of private property as well.
Alas for Mr. Rifkin, my analysis of how zero-marginal-cost reproduction transforms the economics of software also informs me why that logic doesn’t obtain for almost any other kind of good – why, in fact, his general thesis is utterly ridiculous. But plain common sense refutes it just as well.
Here is basic production economics: the cost of a good can be divided into two parts. The first is the setup cost – the cost of assembling the people and tools to make the first copy. The second is the incremental – or, in a slight abuse of terminology, “marginal” – cost of producing unit N+1 after you have produced the first copy.
In a free market, normal competitive pressure pushes the price of a good towards its marginal cost. It doesn’t get there immediately, because manufacturers need to recoup their setup costs. And it can’t stay below marginal cost for long, because if it did the manufacturer would lose money on every sale and the business would crash.
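The arithmetic behind this can be made concrete with a toy model (all numbers here are hypothetical, chosen only to illustrate the shape of the curve):

```python
# Toy model of setup cost vs. marginal cost (all figures hypothetical).
SETUP_COST = 100_000.0   # cost of assembling people and tools for the first copy
MARGINAL_COST = 4.0      # incremental cost of each unit after the first

def average_unit_cost(units):
    """Total cost per unit: setup cost amortized over the production run,
    plus the marginal cost of each copy."""
    return SETUP_COST / units + MARGINAL_COST

# As volume grows, the amortized setup cost shrinks and average cost
# falls toward the marginal cost -- the floor competition pushes price to.
for units in (1_000, 10_000, 1_000_000):
    print(f"{units:>9} units -> average cost {average_unit_cost(units):.2f}")
```

For an information good, set `MARGINAL_COST` to zero and the same curve drives price toward zero at scale; for physical goods the floor stays well above it.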
In this book, Rifkin is fascinated by the phenomenon of goods for which the marginal cost of production is zero, or so close to zero that it can be ignored. All of the present-day examples of these he points at are information goods – software, music, visual art, novels. He joins this to the overarching obsession of all his books, which are variations on a theme of “Let us write an epitaph for capitalism”.
In doing so, Rifkin effectively ignores what capitalists do and what capitalism actually is. “Capital” is wealth paying for setup costs. Even for pure information goods those costs can be quite high. Music is a good example; it has zero marginal cost to reproduce, but the first copy is expensive. Musicians must own costly instruments, be paid to perform, and require other capital goods such as recording studios. If those setup costs are not reliably priced into the final good, production of music will not remain economically viable.
Fifteen years ago I pointed out in my paper The Magic Cauldron that the pricing models for most proprietary software are economically insane. If you price software as though it were (say) consumer electronics, you either have to stiff your customers or go broke, because the fixed lump of money from each unit sale will always be overrun by the perpetually-rising costs of technical support, fixes, and upgrades.
I said “most” because there are some kinds of software products that are short-lived and have next to no service requirements; computer games are the obvious case. But if you follow out the logic, the sane thing to do for almost any other kind of software usually turns out to be to give away the product and sell support contracts. I was arguing this because it knocks most of the economic props out from under software secrecy. If you can sell support contracts at all, your ability to do so is very little affected by whether the product is open-source or closed – and there are substantial advantages to being open.
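The Magic Cauldron argument reduces to simple arithmetic: a one-time sale price is a fixed lump, while support obligations accumulate without bound. A sketch with made-up numbers (both figures hypothetical):

```python
# Toy illustration of why factory-style software pricing fails
# (all numbers hypothetical).
SALE_PRICE = 200.0             # collected once per unit sold
SUPPORT_COST_PER_YEAR = 35.0   # ongoing tech support, fixes, upgrades

def cumulative_margin(years):
    """Vendor's margin on one unit after `years` of support obligations."""
    return SALE_PRICE - SUPPORT_COST_PER_YEAR * years

# The fixed lump from the sale is inevitably overrun by support costs.
breakeven_years = SALE_PRICE / SUPPORT_COST_PER_YEAR
```

Past `breakeven_years` the vendor is underwater on every unit; the only escapes are stiffing customers on support or charging for it separately, which is the support-contract model.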
Rifkin cites me in his book, but it is evident that he almost completely misunderstood my arguments in two different ways, both of which bear on the premises of his book.
First, software has a marginal cost of production that is effectively zero, but that’s true of all software rather than just open source. What makes open source economically viable is the strength of secondary markets in support and related services. Most other kinds of information goods don’t have these. Thus, the economics favoring open source in software are not universal even in pure information goods.
Second, even in software – with those strong secondary markets – open-source development relies on the capital goods of software production being cheap. When computers were expensive, the economics of mass industrialization and its centralized management structures ruled them. Rifkin acknowledges that this is true of a wide variety of goods, but never actually grapples with the question of how to pull capital costs of those other goods down to the point where they no longer dominate marginal costs.
There are two other, much larger, holes below the waterline of Rifkin’s thesis. One is that atoms are heavy. The other is that human attention doesn’t get cheaper as you buy more of it. In fact, the opposite tends to be true – which is exactly why capitalists can make a lot of money by substituting capital goods for labor.
These are very stubborn cost drivers. They’re the reason Rifkin’s breathless hopes for 3-D printing will not be fulfilled. Because 3-D printers require feedstock, the marginal cost of producing goods with them has a floor well above zero. That ABS plastic, or whatever, has to be produced. Then it has to be moved to where the printer is. Then somebody has to operate the printer. Then the finished good has to be moved to the point of use. None of these operations has a cost that is driven to zero, or near zero at scale. 3-D printing can increase efficiency by outcompeting some kinds of mass production, but it can’t make production costs go away.
An even more basic refutation of Rifkin is: food. Most of the factors of production that bring (say) an ear of corn to your table have a cost floor well above zero. Even just the transportation infrastructure required to get your ear of corn from farm to table requires trillions of dollars of capital goods. Atoms are heavy. Not even “near-zero” marginal cost will ever happen here, let alone zero. (Late in the book, Rifkin argues for a packetized “transportation Internet” – a good idea in its own terms, but not a solution because atoms will still be heavy.)
It is essential to Rifkin’s argument that he constantly fudges the distinction between “zero” and “near zero” in marginal costs. Not only does he wish away capital expenditure, he tries to seduce his readers into believing that “near” can always be made negligible. Most generally, Rifkin’s take on production economics calls to mind the famous Orwell quote: “One has to belong to the intelligentsia to believe things like that: no ordinary man could be such a fool.”
But even putting all those mistakes aside, there is another refutation of Rifkin. In his brave impossible new world of zero marginal costs for goods, who is going to fix your plumbing? If Rifkin tries to negotiate price with a plumber on the assumption that the plumber’s hours after zero have zero marginal cost, he’ll be in for a rude awakening.
The book is full of other errors large and small. The particular offence for which I knew Rifkin before this book – wrong-headed attempts to apply the laws of thermodynamics to support his desired conclusions – reappears here. As usual, he ignores the difference between thermodynamically closed systems (which must experience an overall increase in entropy) and thermodynamically open systems in which a part we are interested in (such as the Earth’s biosphere, or an economy) can be counter-entropic by internalizing energy from elsewhere into increased order. This is why and how life exists.
Another very basic error is Rifkin’s failure to really grasp the most important function of private property. He presents it only as a store of value and a convenience for organizing trade, one that accordingly becomes less necessary as marginal costs go towards zero. But even if atoms were weightless and human attention free, property would still function as a definition of the sphere within which the owner’s choices are not interfered with. The most important thing about owning land (or any rivalrous good, clear down to your toothbrush) isn’t that you can sell it, but that you can refuse intrusions by other people who want to rivalrously use it. When Rifkin notices this at all, he thinks it’s a bad thing.
The book is a blitz of trend-speak. Thomas Kuhn! The Internet of Things! 3D printing! Open source! Big data! Prosumers! But underneath the glossy surface are gaping holes in the logic. And the errors follow a tiresomely familiar pattern. What Rifkin is actually retailing, whether he consciously understands it that way or not (and he may not), is warmed-over Marxism – hostility to private property, capital, and markets perpetually seeking a rationalization. The only innovation here is that for the labor theory of value he has substituted a post-labor theory of zero value that is even more obviously wrong than Marx’s.
All the indicia of cod-Marxism are present. False identification of capitalism with vertical integration and industrial centralization: check. Attempts to gin up some sort of an opposition between voluntary but non-monetized collaboration and voluntary monetized trade: check. Valorizing nifty little local cooperatives as though they actually scaled up: check. Writing about human supercooperative behavior as though it falsifies classical and neoclassical economics: check. At times in this book it’s almost as though Rifkin is walking by a checklist of dimwitted cliches, ringing them like bells in a carillon.
Perhaps the most serious error, ultimately, is the way Rifkin abuses the notion of “the commons”. This has a lot of personal weight for me, because I have lived in and helped construct a hacker culture that maintains a huge software commons and continually pushes for open, non-proprietary infrastructure. I have experienced, recorded, and in some ways helped create the elaborate network of manifestos, practices, expectations, how-to documents, institutions, and folk stories that sustains this commons. I think I can fairly claim to have made the case for open infrastructure as forcefully and effectively as anyone who has ever tried to.
Bluntly put, I have spent more than thirty years actually doing what Rifkin is glibly intellectualizing about. From that experience, I say this: the concept of “the commons” is not a magic wand that banishes questions about self-determination, power relationships, and the perils of majoritarianism. Nor is it a universal solvent against actual scarcity problems. Maintaining a commons, in practice, requires more scrupulousness about boundaries and respect for individual autonomy rather than less. Because if you can’t work out how to maximize long-run individual and joint utility at the same time, your commons will not work – it will fly apart.
Though I participate in a huge commons and constantly seek to extend it, I seldom speak of it in those terms. I refrain because I find utopian happy-talk about “the commons” repellent. It strikes me as at best naive and at worst quite sinister – a gauzy veil wrapped around clapped-out collectivist ideologizing, and/or an attempt to sweep the question of who actually calls the shots under the rug.
In the open-source community, all our “commons” behavior ultimately reduces to decisions by individuals, the most basic one being “participate this week/day/hour, or not?” We know that it cannot be otherwise. Each participant is fiercely protective of the right of all others to participate only voluntarily and on terms of their own choosing. Nobody ever says that “the commons” requires behavior that individuals themselves would not freely choose, and if anyone ever tried to do so they would be driven out with scorn. The opposition Rifkin wants to find between Lockean individualism and collaboration does not actually exist, and cannot.
Most of us also understand, nowadays, that attempts to drive an ideological wedge between our commons and “the market” are wrong on every level. Our commons is in fact a reputation market – one that doesn’t happen to be monetized, but which has all the classical behaviors, equilibria, and discovery problems of the markets economists usually study. It exists not in opposition to monetized trade, free markets, and private property, but in productive harmony with all three.
Rifkin will not have this, because for the narrative he wants these constructions must conflict with each other. To step away from software for an instructive example of how this blinds him, the way Rifkin analyzes the trend towards automobile sharing is perfectly symptomatic.
He tells a framing story in which individual automobile ownership has been a central tool and symbol of individual autonomy (true enough), then proposes that the trend towards car-sharing is therefore necessarily a willing surrender of autonomy. The actual fact – that car-sharing is popular mainly in urban areas because it allows city-dwellers to buy more mobility and autonomy at a lower capital cost – escapes him.
Car sharers are not abandoning private property, they’re buying a service that prices personal cars out of some kinds of markets. Because Rifkin is all caught up in his own commons rhetoric, he doesn’t get this and will underestimate what it takes for car sharing to spread out of cities to less densely populated areas where it has a higher discovery and coordination cost (and the incremental value of individual car ownership is thus higher).
The places where open source (or any other kind of collaborative culture) clashes with what Rifkin labels “capitalism” are precisely those where free markets have been suppressed or sabotaged by monopolists and would-be monopolists. In the case of car-sharing, that’s taxi companies. For open source, it’s Microsoft, Apple, the MPAA/RIAA and the rest of the big-media cartel, and the telecoms oligopoly. Generally there is explicit or implicit government market-rigging in play behind these – which is why talking up “the commons” can be dangerous, tending to actually legitimize such political power grabs.
It is probably beyond hope that Jeremy Rifkin himself will ever understand this. I write to make it clear to others that he cannot recruit the successes of open-source software for the anti-market case he is trying to make. His grasp of who we are, his understanding of how to make a “commons” function at scale, and his comprehension of economics in general are all fatally deficient.
When I have to explain how real hackers differ from various ignorant media stereotypes about us, I’ve found that one of the easiest differences to explain is transparency vs. anonymity. Non-techies readily grasp the difference between showing pride in your work by attaching your real name to it versus hiding behind a concealing handle. They get what this implies about the surrounding subcultures – honesty vs. furtiveness, accountability vs. shadiness.
One of my regular commenters is in the small minority of hackers who regularly use a concealing handle. Because he pushed back against my assertion that this is unusual, counter-normative behavior, I set a bit that I should keep an eye out for evidence that would support a frequency estimate. And I’ve found some.
Recently I’ve been doing reconstructive archeology on the history of Emacs, the goal being to produce a clean git repository for browsing the entire history (yes, this will become the official repo after 24.4 ships). This is a near-unique resource in a lot of ways.
One of the ways is the sheer length of time the project has been active. I do not know of any other open-source project with a continuous revision history back to 1985! The size of the contributor base is also exceptionally large, though not uniquely so – no fewer than 574 distinct committers. And, while it is not clear how to measure centrality, there is little doubt that Emacs remains one of the hacker community’s flagship projects.
This morning I was doing some minor polishing of the Emacs metadata – fixing up crud like encoding errors in committer names – and I made a list of names that didn’t appear to map to an identifiable human being. I found eight, of which two are role-based aliases – one for a dev group account, one for a build engine. That left six unidentified individual contributors (I actually shipped eight to the emacs-devel list, but two more turned out to be readily identifiable within a few minutes after that).
I looked at this list of names and thought “Aha! Handle frequency estimation!”
That’s a frequency of almost exactly 1% for IDs that could plausibly be described as concealing handles in commit logs. That’s pretty low, and a robust difference from the cracker underground, in which 99% use concealing handles. And it’s especially impressive considering the size and time depth of the sample.
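The arithmetic behind that estimate is simple division, but with a sample like this it’s worth attaching a rough confidence interval. Here is a sketch (the Wilson interval is my addition, not part of the original tally):

```python
from math import sqrt

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a binomial proportion k/n."""
    phat = k / n
    denom = 1 + z**2 / n
    center = (phat + z**2 / (2 * n)) / denom
    halfwidth = z * sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2)) / denom
    return center - halfwidth, center + halfwidth

committers = 574   # distinct committers in the Emacs history
concealed = 6      # IDs not mappable to an identifiable person
phat = concealed / committers
lo, hi = wilson_interval(concealed, committers)
print(f"point estimate: {phat:.2%}")        # about 1.05%
print(f"95% interval:   {lo:.2%} .. {hi:.2%}")
```

Even at the top of the interval (a bit over 2%), concealing handles remain rare behavior – nowhere near the cracker underground’s near-universal use of them.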
And at that, this may be an overestimate. As many as three of those IDs look like they might actually be display handles – habitual nicknames that aren’t intended as disguise. That is a relatively common behavior with a very different meaning.
Blogging has been light lately because I’ve been up to my ears in reposurgeon’s most serious challenge ever. Read on for a description of the ugliest heap of version-control rubble you are ever likely to encounter, what I’m doing to fix it, and why you do in fact care – because I’m rescuing the history of one of the defining artifacts of the hacker culture.
Imagine a version-control history going back to 1985 – yes, twenty-nine years of continuous development by no fewer than 579 people. Imagine geologic strata left by no fewer than five version-control systems – RCS, CVS, Arch, bzr, and git. The older portions of the history are a mess, with incomplete changeset coalescence in the formerly-CVS parts and crap like paths prefixed with “=” to mark RCS masters of deleted files. There are hundreds of dead tags and dozens of dead branches. Comments and changelogs are rife with commit-reference cookies that no longer make sense in the view through more modern version-control systems.
Your present view of the history is a sort of two-headed monster. The official master is in bzr, but because of some strange deficiencies in bzr’s export tools (which won’t be fixed because bzr is moribund) you have to work from a poor-quality read-only git mirror that gets automatically rebuilt from the bzr history every 15 minutes. But you can’t entirely ignore the bzr master; you have to write custom code to data-mine it for bzr-related metadata that you need for fixing references in your conversion.
Because bzr is moribund, your mission is to produce a full standalone git conversion that doesn’t suck. Criteria for “not sucking” include (a) complete changeset coalescence in the RCS and CVS parts, (b) fixing up CVS and bzr commit references so a human being browsing through git can actually follow them, (c) making sense out of the mess that is RCS deletions in the oldest part of the history.
Also, because the main repo is such a disaster area, there is at least one satellite repo for a Mac OS X port that really wants to be a branch of the main repo, but isn’t. (Instead it’s a two-tailed mutant clone of a nine-year-old version of the main repo.) You’ve been asked to pull off a cross-repository history graft so that after conversion day it will look as though the whole nine years of OS X port history has been a branch in this repo from the beginning.
Just to put the cherry on top, your customers – the project dev group – are a notoriously crusty lot who, on the whole, do not go out of their way to be helpful. If not for a perhaps surprising degree of support from the project lead the full git conversion wouldn’t be happening at all. Fortunately, the lead groks it is important in order to lower the barrier to entry for new talent.
I have been working hard on this conversion for eight solid weeks. Supporting it has required that I write several major new features in reposurgeon, including a macro facility, large extensions to the selection-set sublanguage, and facilities for generic search-and-replace on both metadata and blobs.
Experiments and debugging are a pain in the ass because the repository is so big and gnarly that a single full conversion run takes around ten hours. The lift script is over 800 lines of complex reposurgeon commands – and that’s not counting the six auxiliary scripts used to audit and generate parts of it, nor an included file of mechanically-generated commands that is over two thousand lines long.
You might very well wonder what could make a repository conversion worth that kind of investment of time and effort. That’s a good question, and one of those for which you either have enough cultural context that a one-word answer will suffice or else hundreds of words of explanation wouldn’t be enough.
The one word is: Emacs.
I just shipped version 1.10 of cvs-fast-export with a new feature: it now emits fast-import files that contain CVS’s default ignore patterns. This is a request for help from people who know CVS better than I do.
I’ve written before about the difference between literal and literary repository translations. When I write translation tools, one of my goals is for the experience of using the converted repository to be as though the target system had been in use all along. Notably, if the target system has changesets, a dumb file-oriented conversion from CVS just isn’t good enough.
Another goal is for the transition to be seamless; that is, without actually looking for it, a developer browsing the history should not need to be aware of when the transition happened. This implies that the ignore patterns of the old repository should be emulated in the new one – no object files (for example) suddenly appearing under git status when they were invisible under CVS.
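As a concrete illustration (a sketch of the idea, not cvs-fast-export’s actual code; the pattern list is my transcription of CVS’s documented default ignore list, which you should verify against your CVS version), translating those defaults into .gitignore form looks roughly like this:

```python
# CVS's built-in default ignore patterns, per the CVS manual
# (transcribed from the documented list; verify before relying on it).
CVS_DEFAULT_IGNORES = [
    "RCS", "SCCS", "CVS", "CVS.adm", "RCSLOG", "cvslog.*",
    "tags", "TAGS", ".make.state", ".nse_depinfo",
    "*~", "#*", ".#*", ",*", "_$*", "*$",
    "*.old", "*.bak", "*.BAK", "*.orig", "*.rej", ".del-*",
    "*.a", "*.olb", "*.o", "*.obj", "*.so", "*.exe",
    "*.Z", "*.elc", "*.ln", "core",
]

def to_gitignore_line(pattern):
    # In .gitignore syntax a leading "#" starts a comment and a leading
    # "!" negates a pattern, so those must be backslash-escaped to
    # preserve the CVS pattern's literal meaning.
    return "\\" + pattern if pattern[0] in "#!" else pattern

def gitignore_body(cvsignore_patterns=()):
    """CVS defaults first, then .cvsignore patterns appended after
    them -- the 'adds to the defaults' assumption described above."""
    patterns = list(CVS_DEFAULT_IGNORES) + list(cvsignore_patterns)
    return "\n".join(to_gitignore_line(p) for p in patterns) + "\n"

body = gitignore_body(["*.tmp"])   # e.g. patterns from a project .cvsignore
print(body)
```

Note the escaping wrinkle: the CVS default pattern `#*` would read as a comment in a .gitignore if emitted verbatim, so a faithful translation can’t just copy patterns through.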
There is one subtle point I’m not sure of, though, and I would appreciate correction from anyone who knows CVS well enough to say: if you specify a .cvsignore, does it add to the default ignore patterns or replace them?
My current assumption in 1.10 is that it adds to them. If someone corrects me on this, I’ll remove a small amount of code and ship 1.11.
Sometimes art imitates life. Sometimes life imitates art. So, for your dubious biographical pleasure, here is my life in tropes. Warning: the TV Tropes site is addictive; beware of chasing links lest it eat the rest of your day. Or several days.
First, a trope disclaimer: I am not the Eric Raymond from Jem. As any fule kno, I am not a power-hungry Corrupt Corporate Executive; if I were going to be evil, it would definitely be as a power-hungry Mad Scientist. Learn the difference; know the difference!
I am, or I like to think of myself as, a Smart Guy who Minored in Badass. I show some tendency towards Boisterous Bruiser, having slightly too physical a presence to fit Badass Bookworm perfectly. And yes, I’m a Playful Hacker who is Proud To Be A Geek.
There’s a documentary, The Hedgehog and the Hare, being made about the prosecution of Andrew Auernheimer (aka “the weev”). The filmmaker wants to interview me for background and context on the hacker culture. The following is a lightly edited version of the backgrounder I sent him so he could better prepare for the interview.
I’ve watched the trailer. I’ve googled “weev” and read up on his behavior and the legal case. The following note is intended to be a background on culture, philosophy, and terminology that will help you frame questions for the face-to-face interview.
Wikipedia describes Andrew Auernheimer as a “grey-hat hacker”. There are a lot of complications and implications around that term that bear directly on what “weev” was doing and what he thought he was doing. One good way to approach these is to survey the complicated history of the word “hacker”.
My authority to explain this rests on having edited The New Hacker’s Dictionary, which is generally considered the definitive lexicon of the culture it describes; also How To Become A Hacker, which you should probably read first.
In its original and still most correct sense, the word “hacker” describes a member of a tribe of expert and playful programmers with roots in 1960s and 1970s computer-science academia, the early microcomputer experimenters, and several other contributory cultures including science-fiction fandom.
Through a historical process I could explain in as much detail as you like, this hacker culture became the architects of today’s Internet and evolved into the open-source software movement. (I had a significant role in this process as historian and activist, which is why my friends recommended that you talk to me.)
People outside this culture sometimes refer to it as “old-school hackers” or “white-hat hackers” (the latter term also has some more specific shades of meaning). People inside it (including me) insist that we are just “hackers” and using that term for anyone else is misleading and disrespectful.
Within this culture, “hacker” applied to an individual is understood to be a title of honor which it is arrogant to claim for yourself. It has to be conferred by people who are already insiders. You earn it by building things, by a combination of work and cleverness and the right attitude. Nowadays “building things” centers on open-source software and hardware, and on the support services for open-source projects.
There are – seriously – people in the hacker culture who refuse to describe themselves individually as hackers because they think they haven’t earned the title yet – they haven’t built enough stuff. One of the social functions of tribal elders like myself is to be seen to be conferring the title, a certification that is taken quite seriously; it’s like being knighted.
The first key thing for you to understand is that Andrew Auernheimer is not a member of the (genuine, old school, white-hat) hacker culture. One indicator of this is that he uses a concealing handle. Real hackers do not do this. We are proud of our work and do it in the open; when we use handles, they are display behaviors rather than cloaks. (There are limited exceptions for dealing with extremely repressive and totalitarian governments, when concealment might be a survival necessity.)
Another bright-line test for “hacker culture” is whether you’ve ever contributed code to an open-source project. It does not appear that Auernheimer has done this. He’s not known among us for it, anyway.
A third behavior that distances Auernheimer from the hacker culture is his penchant for destructive trolling. While there is a definite merry-prankster streak in hacker culture, trolling and nastiness are frowned upon. Our pranking style tends more towards the celebration of cleverness through elaborate but harmless practical jokes, intricate technical satires, and playful surrealism. Think Ken Kesey rather than Marquis de Sade.
Now we come to the reason why Auernheimer calls himself a hacker.
There is a cluster of geek subcultures within which the term “hacker” has very high prestige. If you think about my earlier description it should be clear why. Building stuff is cool, it’s an achievement.
There is a tendency for members of those other subcultures to try to appropriate hacker status for themselves, and to emulate various hacker behaviors – sometimes superficially, sometimes deeply and genuinely.
Imitative behavior creates a sort of gray zone around the hacker culture proper. Some people in that zone are mere posers. Some are genuinely trying to act out hacker values as they (incompletely) understand them. Some are ‘hacktivists’ with Internet-related political agendas but who don’t write code. Some are outright criminals exploiting journalistic confusion about what “hacker” means. Some are ambiguous mixtures of several of these types.
Andrew Auernheimer lives in that gray zone. He’s one of its ambiguous characters – part chaotic prankster, part sincere hacktivist, possibly part criminal. The proportions are not clear to me – and may not even be clear to him.
Like many people in that zone, he aspires to the condition of hacker and may sincerely believe he’s achieved it (his first lines in your trailer suggest that). What he probably doesn’t get is that attitude isn’t enough; you have to have competence. A real hacker would reply, skeptically: “Show me your code.” Show your work. What have you built, exactly? Nasty pranking and security-breaking don’t count…
Now, having explained what separates “weev” from the hacker culture, I’m going to explain why his claim is not entirely bogus. I can’t consider him a hacker on the evidence I have available, but I’m certain he’s had hacker role models. Plausibly one of them might be me…
His stubborn libertarian streak, his insistence that you can only confirm your rights by testing their boundaries, is like us. So is his belief in the propaganda of the deed – of acting transgressively out of principle as an example to others.
Combine this with a specific interest in changing the world through adroit application of technology and you have someone who is in significant ways very much like us. I think his claim to be a hacker is mistaken and shows ignorance of the full weight and responsibilities of the term, but it’s not crazy. If he wrote code and dropped the silly handle and gave up trolling he might become one of us.
But even though Andrew Auernheimer doesn’t truly seem to be one of us, we don’t have much option but to join in his defense. He’s a shady and dubious character by our standards, but we are all too aware that the kind of vague law and prosecutorial overreach that threw him in jail could be turned against us for doing things that are normal parts of our work.
Sometimes maintaining civil liberties requires rallying around people whose behavior and ethics are questionable. That, I think, sums up how most hackers who are aware of his troubles feel about Andrew Auernheimer.
A few months back I had to do a two-hour road trip with A&D regular Susan Sons, aka HedgeMage, who is an interesting and estimable person in almost all ways except that she actually … likes … country music.
I tried to be stoic when stupid syrupy goo began pouring out of the car radio, but I didn’t do a good enough job of hiding my discomfort to prevent her from noticing within three minutes flat. “If I leave this on,” she observed accurately to the 11-year-old in the back seat, “Eric is going to go insane.”
Since said 11-year-old more or less required music to prevent him from becoming hideously bored and restive, all three of us were caught between two fires. Susan, ever the pragmatist, went looking through her repertoire for pieces I would find relatively inoffensive.
After a while this turned into a sort of calibration exercise – she’d put something on, assay my reaction to see where in the range it fell between mere spasmodic twitching and piteous pleas to make it stop, and try to figure what the actual drive-Eric-insane factors in the piece were.
After a while a curious and interesting pattern emerged…
I already knew I had some preferences in this domain. I dislike anything with steel guitars in it; conversely, I am less repelled by and can sometimes even enjoy subgenres like bluegrass, fiddle music and Texas swing that are centered on other instruments. I find old-style country, closer to its Irish traditional roots, far easier to take than the modern Nashville sound. Blues influence also helps.
But it turns out that most of these preferences are strongly correlated with one very simple binary-valued property, something Susan had the domain knowledge to identify consciously after a sufficient sample but I did not.
It turns out that what I hate above all else about country music is singers with faked accents.
I had no idea, but there’s a lot of this going around, apparently. The rules of the modern country idiom require performers who don’t naturally speak with a thick Southern-rural accent to affect one when they sing. The breakthrough moment, when we figured out that this was what was making me want to chew my own leg off to escape, came when she cued up a song by some guy named Clint Black, who really does natively have that accent. We discovered that even though he plays the modern Nashville sound, the result only makes me feel mildly uncomfortable, as opposed to tortured.
The first interesting thing about this is that I was completely unaware that I had been reacting to the fake/nonfake distinction. But once we recognized it, the entire pattern of my subgenre preferences made sense. Duh, of course I’d have had less unpleasant experiences with styles that are less vocal-centered. And, in general, the longer ago a piece of country music was recorded, the more likely that the singers’ accents were genuine.
I think it is even quite likely that I acquired a conditioned dislike of steel guitars precisely because they are strongly co-morbid with fake accents.
It is not news that there is something distinctly unusual about the way I acquire and process language phonology: recently, for example, I wrote about having absorbed the phonology of German even though I don’t speak it, and I have previously noted the fact that I pick up speech accents very quickly on immersion (sometimes without intending to).
But this only raises more questions that belong under the “brains are weird” category. One group: what in the heck is my recognition algorithm for “fake accent”? How did I learn one? Why did I learn one? What in the hell does my unconscious mind find useful about this?
A second is: how reliable is it? We think, from Susan’s sample of a couple dozen tracks, that it’s pretty robust, at least relative to her knowledge about singer idiolects. But in a controlled experiment in which I was trying to spot fakes, how much better would I do than chance? What would my rates of false negatives and false positives be? The question is trickier than it might appear; conscious attempts to run the fake-accent recognizer might interfere with it.
The third, and in some ways the most interesting: How did my fake-accent recognizer get tangled up with my response to music? They do communicate (nobody doubts that people with good pitch discrimination have an advantage in acquiring tonal languages) but they’re different brain subsystems; the organ of Broca doesn’t do music.
Does anyone in my audience know of research that might bear on these questions?
UPDATE: My commenters were insightful about this one and we’ve arrived at a theory that fits the observed facts. I now think what I am reacting to is severe exaggeration of dialect recognition features; this fits with the fact that I find spoken accent mockery in comedy unpleasant. The visceral quality of my reaction may be explained by superstimulation of my “You’re a liar!” social-deception circuitry.
So, here you are in your starship, happily settling into orbit around an Earthlike world you intend to survey for colonization. You start mapping, and are immediately presented with a small but vexing question: which rotational pole should you designate as ‘North’?
There are a surprisingly large number of ways one could answer this question. I shall wander through them in this essay, which is really about the linguistic and emotive significance of compass-direction words as humans use them. Then I shall suggest a pragmatic resolution.
First and most obviously, there’s magnetic north. Our assumption ‘the planet is Earthlike’ entails a nice strong magnetic field to keep local carbon-based lifeforms from getting constantly mutated into B-movie monsters by incoming charged particles. Magnetic north is probably going to be much closer to one pole than the other; we could call that ‘North’.
Then there’s spin-axis north. This is the assignment that makes north relate to the planet’s rotation the same way it does on Earth – that is, it implies the sun setting in the west rather than the east. Not necessarily the same as magnetic north; I don’t know of any reason to think planetary magnetic fields have a preferred relationship to the spin axis.
Next, galactic north. Earth’s orbital plane is inclined about 26° from the rotational plane of the Milky Way, which defines the Galaxy’s spin-axis directions; these have been labeled “Galactic North” and “Galactic South” in accordance with the Earth rotational poles they most closely match. On our new planet we could flip this around and define planetary North so it matches Galactic North.
Finally there’s habitability north. This one is fuzzier. More than three-quarters of Earth’s population lives in places where north is colder and south is warmer. We might want to choose ‘North’ to preserve that relationship, which is embedded pretty deeply in the language and folklore of most of Earth’s cultures. Thus, ‘North’ should be the hemisphere with the most habitable land. (Or, if you’re taking a shorter-term view, the hemisphere in which you drop your first settlement. But let’s ignore that complication for now.)
If all four criteria coincide, happiness. But how likely is that? They’re probably distributed randomly with respect to each other; taking any one criterion as the reference, each of the other three matches it with probability 1/2, which means we’ll probably get perfect agreement on only one in every eight exoplanets.
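The agreement odds are easy to check by simulation (a hypothetical sketch: with four independent fair binary choices, all four coincide with probability 2/2⁴ = 1/8):

```python
import random

def all_agree(n_criteria=4, rng=random):
    """One exoplanet: each criterion independently labels one of the
    two rotational poles 'North'; check whether they all coincide."""
    picks = [rng.random() < 0.5 for _ in range(n_criteria)]
    return all(picks) or not any(picks)

rng = random.Random(1636)   # fixed seed for reproducibility
trials = 100_000
hits = sum(all_agree(rng=rng) for _ in range(trials))
print(hits / trials)        # close to 1/8 = 0.125
```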
But not all these criteria are equally important. Magnetic North really only matters to geophysicists and compass-makers. Galactic North is probably interesting only to stargazers.
I think we have a clear winner if spin-axis north coincides with habitability north. This choice will preserve continuity of language pretty well. If they’re opposite, and galactic north coincides with magnetic north, that’s a tiebreaker. If the tiebreakers don’t settle it, I’d go with spin-axis north.
But reasonable people could differ on this. Discuss; maybe we could submit a proposal to the IAU.
That is the title of a paper attempting to explain (away) the 17-year nothing that happened while CAGW models were predicting warming driven by increasing CO2. CO2 increased. Measured GAT did not.
Here’s the money quote: “The most recent climate model simulations used in the AR5 indicate that the warming stagnation since 1998 is no longer consistent with model projections even at the 2% confidence level.”
That is an establishment climatologist’s cautious scientist-speak for “The IPCC’s anthropogenic-global-warming models are fatally broken. Kaput. Busted.”
I told you so. I told you so. I told you so!
I even predicted it would happen this year, yesterday on my Ask Me Anything on Slashdot. This wasn’t actually brave of me: the Economist noticed that the GAT trend was about to fall to worse than 5% fit to the IPCC models six months ago.
Here is my next prediction – and remember, I have been consistently right about these. The next phase of the comedy will feature increasingly frantic attempts to bolt epicycles onto the models. These epicycles will have names like “ENSO”, “standing wave” and “Atlantic Oscillation”.
All these attempts will fail, both predictively and retrodictively. It’s junk science all the way down.
The responses to my previous post, on the myth of the fall, brought out a lot of half-forgotten lore about pre-open-source cultures of software sharing.
Some of these remain historically interesting, but hackers talking about them display the same tendency to back-project present-day conditions I was talking about in that post. As an example, one of my regular commenters inferred (correctly, I think) the existence of a software-sharing community around ESPOL on the B5000 in the mid-1960s, but then described it as “proto-open-source”.
I think that’s an easy but very misleading description to land on. In the rest of this post I will explain why, and propose terminology that I think makes a more useful set of distinctions. This isn’t just a historical inquiry, but relevant to some large issues of the present and future.
For those of you who came in late, the B5000 was an early-to-mid-1960s Burroughs mainframe that had a radically unusual trait for the period; its OS was written not in assembler but in a high-level language, a dialect of ALGOL called ESPOL that was extended so it could peek and poke the machine hardware.
B5000 sites could share source-code patches for their operating system, the MCP or Master Control Program (yes, Tron fans, it was really called that!) that were written in a high-level language and thus relatively easy to modify. To the best of my knowledge, this is the only time such a thing was done pre-Unix.
But. Like the communities around SHARE (IBM mainframe users) and DECUS (DEC minicomputers) in the 1960s and 1970s, whatever community existed around ESPOL was radically limited by its utter dependence on the permissions and APIs that a single vendor was willing to provide. The ESPOL compiler was not retargetable. Whatever community developed around it could neither develop any autonomy nor survive the death of its hardware platform; the contributors had no place to retreat to in the event of predictable single-point failures.
I’ll call this sort of community “sharecroppers”. That term is a reference to SHARE, the oldest such user group. It also roughly expresses the relationship between these user groups and contributors, on the one hand, and the vendor on the other. The implied power relationship was pretty totally asymmetrical.
Contrast this with early Unix development. The key difference is that Unix-hosted code could survive the death of not just original hardware platforms but entire product lines and vendors, and contributors could develop a portable skillset and toolkits. The enabling technology – retargetable C compilers – made them not sharecroppers but nomads, able to evade vendor control by leaving for platforms that were less locked down and taking their tools with them.
I understand that it’s sentimentally appealing to retrospectively sweep all the early sharecropper communities into “open source”. But I think it’s a mistake, because it blurs the importance of retargetability, the ability to resist or evade vendor lock-in, and portable tools that you can take away with you.
Without those things you cannot have anything like the individual mental habits or collective scale of contributions that I think is required before saying “an open-source culture” is really meaningful.
This is not just a dusty historical point. We need to remember it in a world where mobile-device vendors (yes, I’m looking at you, Apple!) would love nothing more than to lock us into walled gardens of elaborate proprietary APIs, tools, and languages.
Yes, you may be able to share source code with others in environments like that, but you can’t move what you build to anywhere else. Without that ability to exit, developers and users have only an illusion of control; all power naturally flows to the vendor.
No open-source culture can flourish or even survive under those conditions. Keeping that in mind is the best reason to be careful about our terminology.
Everybody knows, or should know, the basic rules of firearms safety. (a) Always treat the weapon as if loaded, (b) Never point a firearm at anything you are not willing to destroy, (c) Keep your finger off the trigger until you are ready to shoot, (d) Be sure of your target and what is beyond it. (These are sometimes called “Cooper’s Rules” after legendary instructor Col. Jeff Cooper. There are several minor variants of the wording.)
If you follow these rules, you will never unintentionally injure anyone with a firearm. They are easy to learn and very safe. They are appropriate for civilians.
Some elite military units have different rules, with a different tradeoff between safety and combat effectiveness. I learned them from an instructor who was ex-SOCOM. The way I learned them is sufficiently amusing that the story deserves retelling.
The instruction began in the following way. Imagine several students sitting in a circle in camp chairs, the instructor almost directly across from me. Note that this was after we had learned and practiced the basic Cooper rules I described above.
The instructor began by clearing a pistol (opening the chamber port so we could see there was no bullet there or ready in the magazine) and letting the slide drop until the port was closed.
He handed me the pistol, looked at me with a slight smile, and said “Eric. Please shoot yourself through the head.”
I thought for a second, grinned, pointed the pistol at my temple, and pulled the trigger. There was a click and shocked gasps from some other students. (The gasps meant they had learned civilian rules correctly. I believe testing this was part of the instructor’s intention.)
The instructor then asked for the pistol back. I handed it to him. He fiddled with it for a moment, passed it behind his back, brought it into view, offered it to me with the chamber port closed, and said again “Eric. Please shoot yourself through the head.”
I said “No, sir, I will not.”
His smile got a little wider. “Oh? And why not?”
I said “Because the weapon was out of my sight for a moment and I do not know that it is not ready to fire.” (My exact words may have been slightly different. That was the sense.)
“That was the correct answer,” he said, and proceeded to explain to all of us that elite military units must frequently carry weapons in a combat-ready state, and therefore train safety under different rules that require fighters to reason about when a firearm is in a dangerous condition.
In that exchange I violated Cooper’s Rules (a) and (b). I was thinking like a warrior who must frequently carry weapons in a ready-to-fire condition (because he can’t count on having the time to ready the weapon in a clutch situation) and knows that the warriors around him are trained to do likewise.
I’ll never forget those few minutes, because they taught all of us a valuable lesson. Also because we did not prearrange this! The instructor paid me a notable compliment by assuming that I would respond correctly both in obeying his first order and disobeying his second – and, if you think about it, there was a normative lesson there about intelligent initiative, cooperation and responsibility that goes far beyond the specific context of firearms safety.
UPDATE: Post title changed from “Military rules” because this is a story about how special-ops fighters (“operators” in military jargon) think and react.
I was a historian before I was an activist, and I’ve been reminded recently that a lot of younger hackers have a simplified and somewhat mythologized view of how our culture evolved, one which tends to back-project today’s conditions onto the past.
In particular, many of us never knew – or are in the process of forgetting – how dependent we used to be on proprietary software. I think that by failing to remember that past we risk misunderstanding the present and mispredicting the future, so I’m going to do what I can to set the record straight.
Some blurriness about how things were back then is understandable; it can sometimes take a bit of effort even for those of us who were there in elder days to remember what it was like before PCs, before the Internet, before pixel-addressable color displays, before ubiquitous version-control systems. And there were so few of us back then – when I first found the Jargon File around 1978 you could fit every hacker in the U.S. in a medium-sized auditorium, and if you were willing to pack the aisles probably every hacker in the world.
A larger and subtler change, the one easiest to forget, is how dependent we were on proprietary technology and closed-source software in those days. Today’s hacker culture is very strongly identified with open-source development by both insiders and outsiders (and, of course, I bear some of the responsibility for that). But it wasn’t always like that. Before the rise of Linux and the *BSD systems around 1990 we were tied to a lot of software we usually didn’t have the source code for.
Part of the reason many of us tend to forget this is mythmaking by the Free Software Foundation. They would have it that there was a lost Eden of free software sharing that was crushed by commercialization in the late 1970s and early 1980s. This narrative projects Richard Stallman’s history at the MIT AI Lab onto the rest of the world. But, almost everywhere else, it wasn’t like that.
One of the few other places it was almost like that was early Unix development from 1976-1984. They really did have something recognizably like today’s open-source culture, though much smaller in scale and with communications links that were very slow and expensive by today’s standards. I was there during the end of that beginning, the last few years before AT&T’s failed attempt to lock down and commercialize Unix in 1984.
But the truth is, before the early to mid-1980s, the technological and cultural base to support anything like what we now call “open source” largely didn’t exist at all outside of those two settings. The reason is brutally simple: software wasn’t portable!
You couldn’t do what you can do today, which is write a program in C or Perl or Ruby or Python with the confident expectation that it will run on multiple architectures. My first full-time job writing code, in 1980, was representative of the time: writing communications software on a TRS-80 in Z-80 assembler. Assembler, people! We wrote a lot of it. Until the early 1980s, programming in high-level languages was the exception rather than the rule. In general, you couldn’t port that stuff!
Not only was portability across architectures a near-impossible dream, you often couldn’t port between instances of the same machine without serious effort. Especially on larger machines, code tended to be intertwined with details of individual site configuration to an extent that would shock people today (IBM JCL was notoriously the worst offender, but by no means the only one).
In that kind of environment, arguing about whether code should be redistributable in general was next to pointless, because unless the new machine was specifically designed to be binary-compatible with the old, ports amounted to being re-implementations anyway.
This is why the earliest social experiments in what we would now call “open source” – at SHARE and DECUS – were restricted to individual vendors’ product lines and (often) to individual machine types. And it’s why the cancellation of the PDP-10 follow-on in 1983 was such a disaster for the MIT AI Lab and SAIL and other early hacker groups. There they were, stuck, having folded huge amounts of time and genius into a huge pile of PDP-10 assembler code with no real possibility that it would ever be useful again. And this was normal.
The Unix guys showed us the way out, by (a) inventing the first non-assembler language really suitable for systems programming, and (b) proving it by writing an operating system in it. But they did something even more fundamental — they created the modern idea of software systems that are cleanly layered and built from replaceable parts, and of re-targetable development tools.
Tellingly, Richard Stallman had to co-opt Unix technology in order to realize his vision for the Free Software Foundation. The MIT AI Lab itself never found its way to that new world. There’s a reason the Emacs text editor is the only software artifact of that culture that survives to us, and it had to be rewritten from the ground up on the way. (Correction: A symbolic-math package called MACSYMA also survives, though in relative obscurity.)
Without the Unix-spawned framework of concepts and technologies, having source code simply didn’t help very much. This is hard for younger hackers to realize, because they have no experience of the software world before retargetable compilers and code portability became relatively common. It’s hard for a lot of older hackers to remember because we mostly cut our teeth on Unix environments that were a few crucial years ahead of the curve.
But we shouldn’t forget. One very good reason is that believing a myth of the fall obscures the remarkable rise that we actually accomplished, bootstrapping ourselves up through a series of technological and social inventions to where open source on everyone’s desk and in everyone’s phone and ubiquitous in the Internet infrastructure is now taken for granted.
We didn’t get here because we failed in our duty to protect a prelapsarian software commons, but because we succeeded in creating one. That is worth remembering.
The Dark Enlightenment is, as I have previously noted, a large and messy phenomenon. It appears to me in part to be a granfalloon invented by Nick Land and certain others to make their own piece of it (the neoreactionaries) look larger and more influential than it actually is. The most detailed critiques of the DE so far (notably Scott Alexander’s Reactionary Philosophy in an Enormous, Planet-Sized Nutshell and Anti-Reactionary FAQ) nod in the direction of other cliques on the map I reproduced but focus pretty strongly on the neoreactionaries.
Nevertheless, after we peel away clear outliers like the Techno-Commercial Futurists and the Christian Traditionalists, there remains a “core” Dark Enlightenment which shares a discernibly common set of complaints and concerns. In this post I’m going to enumerate these rather than dive deep into any of them. Development of and commentary on individual premises will be deferred to later blog posts.
(I will note the possibility that I may in summarizing the DE premises be inadvertently doing what Scott Alexander marvelously labels “steelmanning” – that is, reverse-strawmanning by representing them as more logical and coherent than they actually are. Readers should be cautious and check primary sources if in doubt.)
Complaint the first: We are all being lied to – massively, constantly, systematically – by an establishment that many DE writers call “the Cathedral”. Its power is maintained by inculcation in the masses of what a Marxist (but nobody in the DE, ever, except ironically) would call “false consciousness”. The Cathedral’s lies go far deeper than what most people think of as normal tactical political falsehoods or even conspiracy theories, down to the level of some of the core premises of post-Enlightenment civilization and widely cherished beliefs about the sustainability of racial equality, sexual equality, and democracy.
An interesting feature of the DE is how remarkably little conspiracy theorizing there is in it. Instead, DE thinkers tend to describe the Cathedral as what I have elsewhere called a “prospiracy”. The Cathedral is bound together not by a hierarchy of internal control and explicit membership; rather, it runs on a shared set of ideological premises not all of which are held or even completely understood by the people who act as part of it.
To a first approximation, the ideology of the Cathedral can be described as “leftist” (many DE writers use the term “Progressive”, not meaning it as a compliment). However, the DE analysis of Cathedral ideology is actually much more complex and less reductive than these terms might imply (a point on which I expect to expand in later posts).
I will note, by the way, the known backgrounds of several key DE thinkers creates grounds to suspect that my own critical use of “Cathedral” in connection with software engineering had some influence on the DE terminology. I do not particularly claim this as an accomplishment, but there it is.
Complaint the second: “All men are created equal” is a pernicious lie. Human beings are created unequal, both as individuals and as breeding populations. Innate individual and group differences matter a lot. Denying this is one of the Cathedral’s largest and most damaging lies. The bad policies that proceed from it are corrosive of civilization and the cause of vast and needless misery.
Another way the DE puts this complaint is that nobody on the conventional political spectrum takes Darwinism seriously enough. Left-liberals self-identify as the friends of evolution out of a desire to be “on the side of science”, but if they really understood the implications of evolutionary biology and psychology they would be more horrified by them than Christian fundamentalists are.
The emphasis on this complaint is probably the single feature which most distinguishes the DE from other kinds of conservatism and anti-left-wing reaction. I’ll be writing about it at more length because I think it is the most interesting and challenging part of the DE critique.
While I don’t intend to do that here and now, I cannot exit this summary without acknowledging that many people will read this complaint as a brief for racism. In fact the DE itself contains two relatively distinguishable cliques that have processed this complaint in different ways: the Ethno-Nationalists and the Human Bio-Diversity people – in DE jargon, eth-nats and HBD for short.
If you come to the DE looking for straight-up old-fashioned racism, the Ethno-Nationalists will supply your requirement as hot and hateful as you like. The HBD people, on the other hand, are interested in value-neutral Damned Facts. They trade not in invective but in the nuts and bolts of psychometry and behavioral genetics. A signature consequence of the difference is that European-descended white people don’t necessarily come off “best” in the comparisons they make.
Complaint the Third: Democracy is a failure. It has produced a race to the bottom in which politicians grow ever more venal, narrow interest groups ever more grasping, the function of government increasingly degenerates into subsidizing parasites at the expense of producers, and in general politics exhibits all the symptoms of what I have elsewhere called an accelerating Olsonian collapse (after Mancur Olson’s analysis in The Logic Of Collective Action).
If this sounds like a libertarian critique, it in many ways is. One of my commenters noted, astutely, that the DE bears the imprint of Hans-Hermann Hoppe’s libertarian polemic Democracy: The God That Failed. Some of the leading DE thinkers describe themselves as ex-libertarians, but their thinking has often taken some very dark and strange anti-libertarian turns since. (I’ll have more to say about this in discussing Mencius Moldbug, who is worth a post all to himself).
Note to commenters: Please do not dive into attacking or defending these premises; that will be appropriate when I discuss them individually. Appropriate discussion for this post is whether I have missed major premises or gotten these wrong in any significant way.
I expect future posts in this series to include both a closer focus on individual premises and closer looks at individual cliques within the Dark Enlightenment.