Blogging will be limited for the next week.
I’ve received several requests for posts on a bunch of meaty topics, including (a) Adobe’s Creative Cloud move, (b) the Defense Distributed takedown notice, (c) the utility of power-projection navies, (d) the current state of the terror war, and others. I won’t get to all of these anytime soon, because I’m swamped with work and will be traveling today to an undisclosed city for a meeting I can’t talk about yet.
Sorry to go all international-man-of-mystery on everybody, but all will be revealed later this year. It will have been worth the wait.
Congratulations, Adobe, on your impending move from selling Photoshop and other boring old standalone applications that people only had to pay for once to a ‘Creative Cloud’ subscription service that will charge users by the month and hold their critical data hostage against those bills. This bold move to extract more revenue from customers in exchange for new ‘services’ that they neither want nor need puts you at the forefront of strategic thinking by proprietary software companies in the 21st century!
It’s genius, I say, genius. Well, except for the part where your customers are in open revolt, 5000 of them signing a petition and many others threatening to bail out to open-source competitors such as GIMP.
Fifteen years ago I pointed out in The Cathedral and the Bazaar and its sequels that buying proprietary software puts you at the wrong end of a power relationship with its vendor, and that this relationship will almost always evolve in the direction of more control by the vendor, more rent extraction from your wallet, and harder lock-in. Adobe’s move illustrates this dynamic perfectly.
But the response from its customer base highlights something else that has happened in those 15 years; open-source applications like the GIMP, and the open-source operating systems they run on, actually offer users a practical way out of these increasingly abusive relationships. Adobe’s customers aren’t being shy about pointing this out, and the company is going to feel heat that it wouldn’t have before 1998.
It’s not clear which side will back down in this particular confrontation. But the underlying trend curves are obvious; even if Adobe wins this time, sooner or later the continuing increases in the rent Adobe needs to claw out of its customers are going to exceed the customers’ transition costs to get out of Adobe’s jail.
The problem is fundamental; one-time purchase payments can’t cover unbounded downstream support and development costs. They can only even appear sufficient when your market is expanding rapidly and you can always use today’s new revenue to cover support costs from last year’s sales. This stops working when your markets near saturation; you have to somehow move customers to a subscription model to survive.
But doing that doesn’t solve an even more fundamental problem, which is that the stock market doesn’t actually reward constant returns any more; it wants an expectation of rising ones in order to beat the net-present-value discount curve. Thus, in a near-saturated market, the amount of rent you extract per customer has to perpetually increase.
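To make that discount-curve point concrete, here’s a toy calculation (the 8% discount rate and $100/year rent are my arbitrary illustrative numbers, not anybody’s real financials):

```python
# Toy illustration of the net-present-value point: a flat rent stream
# has a bounded present value, so the market prices in growth. The 8%
# discount rate and $100/year figures are arbitrary assumptions.
def present_value(payments, r=0.08):
    """Discounted present value of a stream of yearly payments."""
    return sum(p / (1 + r) ** t for t, p in enumerate(payments, start=1))

flat = [100] * 30                               # constant rent per customer
growing = [100 * 1.05 ** t for t in range(30)]  # rent rising 5% a year

# The flat stream's value converges toward 100/r no matter how long it
# runs; only the growing stream keeps adding materially to the total.
```

Thirty years of flat $100 payments are worth about $1126 today at that rate; the same stream growing 5% a year is worth roughly $1900. That gap is what the stock market demands and what the customer eventually gets squeezed for.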
But what can’t go on forever won’t. Eventually you’ll have to squeeze your customers so hard that they bolt. This may be happening to Adobe now, or it could take a few more turns of the screw. But it will happen. And as with Adobe, so with all other proprietary software.
A few weeks ago I blogged an alternate-history story in which the First Amendment of the U.S. Constitution was abused and distorted in the same ways the Second Amendment has been in our history. The actual point of the essay, though, was not about either amendment; it was about how strategic deception by one side of a foundational political dispute can radicalize the other and effectively destroy the credibility of moderates as well.
Now comes the news that the head of the Department of Homeland Security officially thanked the Governor of Missouri for violating state law by illegally passing to the DHS Missouri’s list of concealed-carry permit holders. The Governor then lied about his actions.
The Feds, meanwhile, continue to illegally retain transfer records from federally licensed firearms dealers past the statutory time limit, among several other continuing violations of a 1986 law forbidding the establishment of a national gun registry.
The BATF also criminally violated its authorizing laws by transferring over 2000 firearms to Mexican drug gangs through illegal straw purchases (google “ATF gunwalking scandal”). Over 150 Mexican citizens and United States Border Patrol Agent Brian Terry were killed with these guns.
Meanwhile, following scandals about “drop guns” at the sites of police shootings, some big-city police forces (notably in LA and NYC) are strongly suspected of routinely using planted guns to frame suspects they can’t otherwise nail on firearms-possession charges.
Any trust that “gun control” will be administered with even minimal respect for civil rights is long gone, destroyed by the behavior of the enforcers themselves.
This is yet another way to destroy the middle. Anti-firearms activists speak of “common-sense regulation”, knowing that the agencies enforcing these have engaged in a series of criminal conspiracies to evade and ignore safeguards against abuse of such regulations. By doing so, they annihilate any trust firearms owners might have once felt that “common-sense regulation” is anything other than a prequel to those abuses.
In the absence of trust there can be no compromise. This is how you radicalize gun owners into the Second-Amendment absolutists most of us are today. After four decades of bad faith the only position left to us is “No more ‘gun-control’ laws. Ever.”
In my experience, moral panics are almost never about what they claim to be about. I am just (barely) old enough to remember the tail end of the period (around 1965) when conservative panic about drugs and rock music was actually rooted in a not-very-thinly-veiled fear of the corrupting influence of non-whites on pure American children. In retrospect it’s easy to understand as a reaction against the gradual breakdown of both legally enforced and de-facto racial segregation in the U.S.
But moral panics are by no means a monopoly of cultural conservatives. These days the most virulent and bogus examples are as likely to arrive from the self-described “left” as the “right”. When they do, they’re just as likely to be about something other than the ostensible subject.
In Lies, Damn Lies, and Rape Statistics a college newspaper does a little digging through U.S. crime statistics and finds that the trendy “anti-rape” movement is exaggerating the rape risk of college women by two full orders of magnitude – as it concludes, “the ‘one in four’ chant should be abandoned and replaced with the more appropriate, albeit less catchy, 1 in 400.”
What can explain such gross distortion? I’ve looked into this issue myself and discovered a lot of flim-flam. Still, even the best-case figures I arrived at apparently overestimated the actual risk on campuses by a factor of 50. (Barbarian zones – like, say, inner-city Detroit – might be a different story.)
If the rape panic runs parallel to the now-nearly-forgotten drugs-and-rock panics of the 1950s and 1960s (and many others like them, before and after), we should expect it to actually be rooted in an attempt to assert control of, or cultural dominance over, some threatening Other. And there is indeed evidence that points in that direction.
Recently, Meg Lanker-Simmons, a left-wing activist at the University of Wyoming, faked a rape threat. The agenda seemed obvious: smear Republicans, confirm feminist narratives about male hostility to ‘uppity’ women, confirm women as morally superior creatures who rightfully dictate the content and style of male behavior.
This, together with the crazy inflation of rape statistics, suggests that the campus “anti-rape” movement has little or nothing to do with preventing rape. It has become an instrument of the sort of political warfare in which truth is most likely to be the first casualty.
We’ve seen this sort of thing before, of course. Playing the “racism” card has become such a cliche of left politics that even the reliably lefty Jon Stewart now spoofs it as overdone and busted. In that case the threatening Other is working-class white men, especially rural and most especially Southern, and the aim is clearly to prevent them from pushing back against the culture and politics of elite bicoastal left-liberals.
But there’s actually something a bit more puzzling about the campus-rape panic. College campuses are far from a threatening environment for feminists. Nowadays women outnumber men in every department outside STEM fields. At many colleges mandatory ‘sensitivity training’ heavily privileges female and feminist perspectives. By federal encouragement, female students can now accuse men of rape and expect the claim to be evaluated under circumstances that deny the man any right to due process and the presumption of innocence.
On campus, the Other seems so thoroughly controlled that some academics now attribute declining male enrollments to an unwillingness to enter a hostile work environment. What are women like Meg Lanker-Simmons really pushing against? What in their environment do they not already own?
I think the answer is…themselves. The increasing intensity of the campus-rape panic seems well correlated with the erosion of college women’s position in sexual bargaining.
The key concept here is hypergamy: women’s wired-in desire to mate with men who are taller, smarter, richer, a little older, and higher-status than they are. Hypergamy is at the core of the human female mating strategy in exactly the way that seeking physical attractiveness (signs of fitness to bear) is at the center of male strategy.
An increasing number of hypergamically-aspiring college women are competing for a decreasing pool of higher-status male peers. The consequences are well documented; in the “hookup” culture that now pervades many campuses, sex has become a woman’s opening bid rather than a prize men must compete strenuously to attain. This was a more or less inevitable result once premarital sex stopped being strongly tabooed and the campus sex-ratio flipped over to majority female.
It is not surprising that women like Lanker-Simmons should resent this situation, because it’s almost exactly the reverse of the instinctively K-type mating strategy common to females in humans and most other mammalian species. It’s sex on male r-type terms, and women have DNA going clear back to the Cretaceous that pushes against it.
(This logic also implies that today’s campuses should be among the last places to expect rapes rather than the first. I’ll leave that demonstration as a very simple exercise for the reader.)
This Other, alas, will not be so easily banished. To reverse the dynamic, one of the following things would have to happen:
(1) Premarital sex again becoming strongly enough tabooed that effectively all women cannot offer it as an opening bid. (It has to be effectively all; otherwise the defectors get a large enough advantage in competing for men to make the withholding strategy unstable for the rest. We’ve seen this movie before.)
(2) Sex ratios on campus flip back to a large enough majority of males so that each woman has multiple hypergamic targets who must compete for her. Under these circumstances “not till we’re married” becomes viable again.
(3) Women as a group revert to having much less economic autonomy and social power than men – enough less, anyway, that almost any nominal SES peer or near-peer is a hypergamic target. There’s a tradeoff between this and move 2; the fewer males there are in the nearly-peer population, the more status and autonomy women must implicitly sacrifice to have a constant number of eligible hypergamic targets.
I leave the reader to imagine the screams of rage that would issue from feminists if any of these were even seriously proposed, let alone attempted. And I am not actually advocating any of them, just pointing out that women like Meg Lanker-Simmons are caught in a trap that has nothing to do with (mythically) rape-minded men and everything to do with the world easy contraception and feminist ideology have given us.
I think that underneath the obvious political maneuvering, screaming about a nonexistent rape pandemic is a displacement activity. Campus feminists do it because confronting their actual powerlessness and the jaws of the dilemma that created it would be too painful for them to face.
At bottom, the problem is that female hypergamic instinct and the ideology of sexual equality are inevitably in collision. (Men don’t have the symmetrical problem because their instinctive mating strategy is to just bang women who turn them on physically without regard for differential status.) Short of genetically re-engineering humans to change their mating instincts, there is probably no fix for this.
Of course the implications of this logic go way beyond college campuses. It’s a fundamentally tragic situation and I don’t know what we as a culture or a species are going to do about it.
One thing I am sure of is that displaced moral panic and silly, counterfactual yabbering about “rape culture” will not solve the problem.
My blogging will be light or nonexistent over the next week. I’m on the road in Michigan, at Penguicon; the Friends of Armed and Dangerous party will be here at 9:00 tonight.
It really is the 21st century. Yesterday I merged a bunch of patches, ran acceptance tests, and then polished and shipped a reposurgeon release – while in the passenger seat of a car tooling down I-80. The remarkable thing is that this no longer seems remarkable.
I discovered in the process that while i3 is the best thing since sliced bread on a 2560×1440 display, a tiling window manager is pretty uncomfortable on a laptop-sized 1366×768 display. The problem is that even dividing the laptop screen only in half produces shell and Emacs windows that are narrower than their natural 80-column size rather than wider as on the larger display; one gets the text in email and source code wrapping unpleasantly. I’ve fallen back to XFCE for laptop use.
In two hours, Geeks With Guns. Going to be a full day.
Tamerlan Tsarnaev, the terrorist who died in a firefight with the Boston police with a kettle bomb strapped to him, had a YouTube page. Examining an image of it, I found an approving link to a movie titled “The Black Flags of Khorasan”.
Because, unlike the politically-correct idiots who infest our nation’s newsrooms, I’ve actually studied the history of Islam in some detail, that title had immediate resonance for me. I thought I knew what it meant, and I googled.
What I found confirmed my hunch. Not just that The Black Flags of Khorasan is a jihadist propaganda movie, but that it’s a jihadi movie of a particularly interesting kind – Mahdist, and almost certainly radical Shi’a. Mahdism is present in Sunni Islam but much less central there, and in any case the region of Khorasan has been the heartland of Shi’a for nearly a thousand years.
Domestic terrorism, my ass. As usual, the mainstream media was slavering to pin this on some Richard-Jewell-like native-born conservative (bonus points if they get to say “Tea Party”). As usual, it’s a jihadi atrocity in which fundamentalist Islam was causal.
But that film is a more specific clue. If the investigators have even a microgram of brains, they’re looking for an Iranian connection now.
Here’s a thought experiment for you. Imagine yourself in an alternate United States where the First Amendment is not as a matter of settled law considered to bar Federal and State governments from almost all interference in free speech. This is less unlikely than it might sound; the modern, rather absolutist interpretation of free-speech liberties did not take form until the early 20th century.
In this alternate America, there are many and bitter arguments about the extent of free-speech rights. The ground of dispute is to what extent the instruments of political and cultural speech (printing presses, radios, telephones, copying machines, computers) should be regulated by government so that use of these instruments does not promote violence, assist criminal enterprises, and disrupt public order.
The weight of history and culture is largely on the pro-free-speech side – the Constitution does say “Congress shall make no law … prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press”. And until the late 1960s there is little actual attempt to control speech instruments.
Then, in 1968, after a series of horrific crimes and assassinations inspired by inflammatory anti-establishment political propaganda, some politicians, prominent celebrities, and public intellectuals launch a “speech control” movement. They wave away all comparisons to Nazi Germany and Soviet Russia, insisting that their goal is not totalitarian control but only the prevention of the most egregious abuses in the public square.
So strong is public revulsion against the violence of 1968 that the first prohibition on speech instruments passes rapidly. The dissidents used slow, inexpensive hand-cranked mimeograph machines and hand presses to spread their poison; these “Saturday Night Specials” are banned. Slightly more capable printers still inexpensive enough to be owned by individual citizens are made subject to mandatory registration.
A few civil libertarians call out warnings but are dismissed as extremists and generally ignored. Legitimate media and publishing corporations, assured by speech-control activists that their presses will not be affected by any measures the speech-control movement has in mind, raise little protest themselves.
Strangely, the ban on Saturday Night Specials fails to reduce the ills it was intended to address. Violent dissidents and criminals, it seems, find little difficulty in stealing typewriters, copiers, and more expensive printing equipment – none of it subject to registration.
The speech-control movement insists that stricter laws regulating speech instruments are the answer. By about 1970 convicted felons are prohibited from owning typewriters. A few years later all dealers in printing supplies, telephones, radios, and other communication equipment are required to have federal licenses as a condition of business, and are subject to government audits at any time. The announced intention of these laws is to prevent dangerous speech instruments from falling into the hands of criminals and madmen.
In 1976 the National Writers’ Association, previously a rather somnolent social club best known for sponsoring speed-typing contests, is taken over in a palace coup by an insurgent gang of pro-free-speech radicals. They display an unexpected flair for grass-roots organization, and within five years have developed a significant lobbying arm in Washington D.C. They begin pushing back against speech-instrument restrictions.
But the speech-control movement seems to be winning most of the battles. In 1986 ownership of automatic so-called “class 3” press equipment is banned except for federally-licensed individuals and corporations. The media is flooded with academic studies purporting to show that illicit speech instruments cause crime and violence, though for some reason the researchers making these claims often refuse to publish their primary data sets.
In unguarded moments and friendly company the speech-control movement’s leadership describes its goal expansively as confiscation and bans on all speech instruments not under direct government control or licensing. For public consumption, however, they speak only of “common-sense regulation” – conveniently never quite achieved, and always requiring more restrictions designed to increase the costs and legal risks for individuals owning speech instruments.
Free-speech advocates begin referring to the speech-control movement’s tactics as “salami-slicing” – carving away rights one “reasonable” slice at a time until there is nothing left. Document leaks from major speech-control lobbying organizations confirm that this is their strategy (they call it “incrementalism”), and that they intend to continue lying about their objectives in public until the goal is so nearly achieved that admitting the truth will no longer prevent final victory.
But much of the general public, the American moderate middle, takes the speech-control movement’s public rhetoric at face value. Who can be against “reasonable restrictions” and “common-sense regulation”? Especially when pundits assure them that free speech was never intended by the framers of the Constitution to be interpreted as an individual right, but as a collective right of the people to be exercised only as members of government-controlled or sponsored corporate bodies.
But by 1990 many individual private owners of telephones and computers, though themselves still almost untouched by the new laws, are nevertheless becoming suspicious of the speech-control movement and increasingly frustrated with the NWA’s sluggish and inadequate counters to it. Awareness of the pattern of salami-slicing and strategic deception by the other side is spreading well beyond hard-core free-speech activists.
In 2001, an eminent historian named Prettyisland publishes a book entitled “Printing America”. In it, he argues that pre-Civil War Americans never placed the high value on free speech and freedom of expression asserted in popular history, and that ownership of speech instruments was actually rare in the Revolutionary period. He is awarded a Bancroft Prize; his book receives glowing reviews in academia and all media outlets and is taken up as a major propaganda cudgel by the speech-control movement.
Within 18 months dedicated free-speech activists led by an amateur scholar show that “Printing America” was a wholesale fraud. The probate records Prettyisland claims to have examined never existed. He has systematically misquoted and distorted his sources. Shamefaced academics recant their support; his Bancroft Prize is revoked.
The speech-control movement takes a major loss in its credibility, and free speech activists a corresponding gain. Free-speech advocacy organizations more willing to confront their enemies than the NWA arise, and find increasing grassroots support – Printer Owners of America, Advocates for the First Amendment, Jews for the Preservation of Computer Ownership.
The members of these organizations know that many people advocating “reasonable restrictions” and advocating “common-sense regulation” are not actually seeking total bans and confiscation. They’re honest dupes, believing ridiculous collective-rights theories because that’s what all the eminent people who gave Prettyisland’s book glowing reviews told them was true. They honestly believe that anyone who doesn’t support “common-sense regulation” is a dangerous, out-of-touch radical.
Free-speech advocates also know that some people speaking the same moderate-sounding language – including most of the leadership of the speech-control movement – are lying, and are using the people in the first group as cat’s paws for an agenda that can only honestly be described as the totalitarian suppression of free speech.
Increasingly, the difference between these groups becomes irrelevant. What has happened is that four decades of strategic deception by the leadership of the speech-control movement has destroyed the credibility of the honest middle. Free-speech activists, unable to read minds, have to assume defensively that everyone using the moderate-middle language of “common-sense regulation” is lying to hide a creeping totalitarian agenda.
The moderate middle, unaware of how it has been used, doesn’t get any of this. All they hear is the yelling. They don’t understand why the free-speech activists react to their reasonable language with hatred and loathing.
The preceding was a work of fiction. But I’d only have to change a dozen or so nouns and names and phrases to make it all true (some of the dates might be off a little). I bet you can break the code, and if you are “moderate” you may find it explains a few things. Have fun!
I’ve been thinking about how to build a better IRC client recently.
The proximate cause is that I switched to irssi from chatzilla recently. In most ways it’s better, but it has some annoying UI quirks. Thinking they’d be easy to fix, I dug into the codebase and discovered that it’s a nasty hairball. We’ve seen projects where a great deal of enthusiasm and overengineering resulted in code that is full of locally-clever bits but excessively difficult to modify or maintain as a whole; irssi is one of those. Even its maintainers have mostly abandoned it; there hasn’t been an actual release since 2010.
This is a shame, because despite its quirks it’s probably the best client going for serious IRC users. I say this because I’ve tried the other major contenders (chatzilla, BitchX, XChat, ircii) in the past. None of them really match irssi’s feature set, which makes it particularly unfortunate that the codebase resembles a rubble pile.
I’m not capable of stumbling over a situation like this without thinking about how to fix it. And yesterday…I had an insight.
Probably the single most annoying thing about today’s IRC clients is that if you don’t leave them on all the time you miss some of the channel traffic. There’s no way to join a favorite channel and look at the traffic for the last half hour to get context that happened while you were gone, other than leaving your client actively watching it. And sometimes you don’t want a client distracting you with chat and urgent notifications.
So, I thought, OK, what if I built a client that logs all your IRC traffic for you? You’d still have the dropout problem, but at least it could always use the log to show you the last part of the conversation you were actually present for. Hm…but what about when you weren’t there?
That’s when I got it. I realized that because people think of IRC clients as ways to watch network traffic, they build them all wrong. Here’s how to do it right…
First, build a little client daemon whose job it is to watch channels for you and log their traffic, aggregating it into a message timeline that’s stored as a logfile on disk. Whenever you fire up your client, the daemon gets started if it’s not already running. But exiting the client doesn’t kill the daemon. If you really don’t want to miss anything, you launch the daemon from your login profile well before you start your client.
Your client is just a browser for the message timeline. It doesn’t actually talk to IRC servers because it no longer has to. When it wants to send traffic, or join a channel, or leave a channel, it ships a request to the daemon, which is managing all the actual server connections. The response gets appended to the message timeline just like any other traffic and is then visible to the client.
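A minimal sketch of that split might look like the following. Everything here – the socket path, the logfile location, the JSON wire format – is my own illustration, not any real client’s protocol; the point is only the shape of the architecture:

```python
# Sketch of the daemon/client split described above. The socket path,
# logfile location, and JSON wire format are illustrative assumptions.
import json
import os
import socket

SOCK_PATH = "/tmp/chatd.sock"     # hypothetical daemon control socket
LOG_PATH = "/tmp/chatd.timeline"  # the on-disk message timeline

def run_daemon():
    """Owns the (stubbed-out) server connections and the timeline file."""
    if os.path.exists(SOCK_PATH):
        os.unlink(SOCK_PATH)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(SOCK_PATH)
    srv.listen(1)
    while True:
        conn, _ = srv.accept()
        request = json.loads(conn.makefile().readline())
        # A real daemon would forward the request to an IRC server and
        # log the resulting traffic; here we just append the request
        # itself so the data flow is visible.
        with open(LOG_PATH, "a") as log:
            log.write(json.dumps(request) + "\n")
        conn.close()

def client_send(request):
    """The client never talks to IRC; it hands requests to the daemon."""
    c = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    c.connect(SOCK_PATH)
    c.sendall((json.dumps(request) + "\n").encode())
    c.close()

def client_view():
    """Browsing the timeline is nothing but reading the logfile."""
    with open(LOG_PATH) as log:
        return [json.loads(line) for line in log]
```

Exiting a client built on client_send()/client_view() leaves the daemon, and the timeline, running – which is the whole point.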
Then I realized…I’ve already written this daemon! Almost all of it, anyway. It’s irker, my replacement for the defunct CIA service. Add an option to log traffic. Add options to set your nick and its nickserv password. Done!
Those features are in the irker repo now. Not released yet because the code for nickserv authentication is untested, but that’s a detail. The point is that adding about 20 lines of trivial code has amped up irker so that it’s now a generic chat-logging back end that could be used by a whole family of IRC clients – every one of which could be functionally superior to what’s now out there.
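For context, this is roughly what handing a message to a running irkerd looks like: one JSON object per line, shipped to the daemon’s listening port. Treat the port number and field names here as my recollection of the protocol rather than gospel; the irkerd documentation is authoritative.

```python
# Submitting a message to a running irkerd: a single JSON object per
# line, sent to the daemon's listening port (6659 by default, if I
# recall the protocol correctly -- check the irkerd docs).
import json
import socket

def irk(channel_url, message, host="localhost", port=6659):
    """Hand one notification to irkerd over TCP."""
    request = json.dumps({"to": channel_url, "privmsg": message})
    conn = socket.create_connection((host, port))
    conn.sendall((request + "\n").encode())
    conn.close()

# Example (requires a running irkerd):
# irk("irc://chat.freenode.net/#commits", "hello from the back end")
```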
To paraphrase XKCD: Code reuse. It works, bitches!
This is a postscript to my saga of the graphics-card disaster.
Thank you, everybody who occasionally drops money in my PayPal account. In the past it has bought test hardware for GPSD. This week I had enough in it to pay for the Radeon card, the one that actually works.
Your donations help me maintain software that serves a billion people every day. Thank you again.
I am composing this blog entry on the right-hand screen of a brand shiny new dual-monitor rig. That took me the best part of a week to get working. I am going to describe what I went through to get here because I think it contains some useful tips and cautions for the unwary.
I started thinking seriously about upgrading to dual monitors when A&D regular HedgeMage turned me on to the i3 tiling window manager. The thing I like about tiling window managers is that your screen is nearly all working surface; this makes me a bit unlike many of their advocates, who seem more focused on interfaces that are entirely keyboard-driven and allow one to unplug one’s mouse. The thing I like about i3 is that it seems to be the best in class for UI polish and documentation. And one of the things HedgeMage told me was that i3 does multi-monitor very well.
So, Monday morning I went out and bought a twin of the Auria EQ276W flatscreen I already have. I like this display a lot; it’s bright, crisp, and high-contrast. HedgeMage had recommended a particular Radeon-7750-based card available from Newegg under the ungainly designation “VGA HIS|H775FS2G”, but I didn’t want to wait the two days for shipping so I asked the tech at my friendly local computer shop to recommend something. After googling for Linux compatibility I bought an nVidia GeForce GT640.
That was my first mistake. And my most severe. I’m going to explain how I screwed up so you won’t make the same error.
For years I’ve been listening to lots of people sing hosannahs about how much better the nVidia proprietary blobs are than their open-source competition – enough better that you shouldn’t really mind that they’re closed-source and taint your kernel. And so much easier to configure because of the nvidia-settings tool, and generally shiny.
So when the tech pushed an nVidia card at me and I had googled to find reports of Linux people using it, I thought “OK, how bad can it be?”. He didn’t have any ATI dual-head cards. I wanted instant gratification. I didn’t listen to the well-honed instincts that said “closed source – do not trust”, in part because I like to think of myself as a reasonable guy rather than an ideologue and closed-source graphics drivers are low on my harm scale. I took it.
Then I went home and descended into hell.
I’m still not certain I understand all the causal relationships among the symptoms I saw during the next three days. There’s a post and comments on G+ about these events; I won’t rehash them all here, but do look at the picture.
That bar-chart-like crud on the left-hand flatscreen? For a day and a half I thought it was the result of some sort of configuration error, a mode mismatch or something. It had appeared right after I installed the GT640. I mean immediately on first powerup.
Then, after giving up on the GT640, because nothing I could do would make it do anything with the second head but echo the first, I dropped my single-head card back in. And saw the same garbage.
From the timing, the least hypothesis is that the first time the GT640 powered up, it somehow trashed my left-hand flatscreen. How, I don’t know – overvoltage on some critical pin, maybe? Everything else, including my complete inability to get the setup to enter any dual-head mode over the next 36 hours no matter how ingeniously I poked at it with xrandr, follows logically. I should have smelled a bigger rat when I noticed that xrandr wasn’t reporting a 2560×1440 mode for one of the displays – I think after the left one got trashed it was reporting invalid EDID data.
But I kept assuming I was seeing a software-level problem that, given sufficient ingenuity, I could configure my way out of. Until I dropped back to my single-head card and still saw the garbage.
Should I also mention that the much-vaunted nvidia-settings utility was completely useless? It thought I wasn’t running the nVidia drivers and refused to do a damn thing. It has since been suggested that I wasn’t in fact running the nVidia drivers, but if that’s so it’s because nVidia’s own installation package didn’t push nouveau (the open-source driver) properly out of the way. Either way, nVidia FAIL.
So, I ordered the Radeon card off Newegg (paying $20 for next-day shipping), got my monitor exchanged, got a refund on the never-to-be-sufficiently-damned GT640, and waited.
The combination of an unfried monitor and a graphics card that isn’t an insidiously destructive hell-bitch worked much better. But it still took a little hackery to get things really working. The major problem was that the combined pixel size of the two 2560×1440 displays won’t fit in X’s default 2560×2560 virtual screen size; this configuration needs a (2*2560)×1440 = 5120×1440 virtual screen.
OK, so three questions immediately occur. First, if X’s default virtual screen is going to be larger than 2560×1440, why is it not 2x that size already? It’s not like 2560×1440 displays are rare creatures any more.
Second, why doesn’t xrandr just set the virtual-screen size larger itself when it needs to? It’s not like computing a bounding box for the layout is actually difficult.
Third, if there’s some bizarre but valid reason for xrandr not to do this, why doesn’t it have an option to let you force the virtual-screen size?
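For what it’s worth, the bounding-box computation really is trivial. Here’s a sketch in Python (my own illustration, nothing from the X codebase):

```python
# A layout is a list of (x, y, width, height) rectangles, one per monitor.
# The virtual screen merely has to cover their bounding box.
def virtual_size(layout):
    return (max(x + w for x, y, w, h in layout),
            max(y + h for x, y, w, h in layout))

# Two 2560x1440 panels side by side:
print(virtual_size([(0, 0, 2560, 1440), (2560, 0, 2560, 1440)]))  # (5120, 1440)
```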
But no. You have to edit your xorg.conf, or create a custom one, to up that size to the required value. Here’s what I ended up with:
# Config file for snark using a VGA HIS|H775FS2G and two Auria EQ276W
# displays.
#
# Unless the virtual screen size is increased, X cannot map both
# monitors onto screen 0.
#
# The card is dual-head.
# DFP1 goes out the card's DVI jack, DFP2 out the HDMI jack.
#
Section "Screen"
    Identifier "Screen0"
    Device "Card0"
    SubSection "Display"
        Virtual 5120 1440
    EndSubSection
EndSection

Section "Monitor"
    Identifier "Monitor0"
EndSection

Section "Monitor"
    Identifier "Monitor1"
    Option "RightOf" "Monitor0"
EndSection

Section "Device"
    Identifier "Card0"
    Option "Monitor-DFP2" "Monitor0"
    Option "Monitor-DFP1" "Monitor1"
EndSection
That finally got things working the way I want them.
What are our lessons for today, class?
Here’s the big one: I will never again install an nVidia card unless forced at gunpoint, and if that happens I will find a way to make my assailant eat the fucking gun afterwards. I had lots better uses for 3.5 days than tearing my hair out over this.
When your instincts tell you not to trust closed source, pay attention. Even if it means you don’t get instant gratification.
While X is 10,000% more autoconfiguring than it used to be, it still has embarrassing gaps. The requirement that I manually adjust the virtual-screen size was stupid.
UPDATE: My friend Paula Matuszek rightly comments: “You missed a lesson: When you have a problem in a complex system, the first thing to do is check each component individually, in isolation from as much else as possible. Yes, even if they were working before.”
Now I must get back to doing real work.
Last night, in an IRC conversation with one of my regulars, we were discussing a project we’re both users of and I’m thinking about contributing to, and I found myself saying of the project lead “And he’s German. You know what that means?” In fact, my regular understood instantly, and this deflected us into a discussion of how national culture visibly affects hackers’ collaborative styles. We found that our observations matched quite closely.
Presented for your amusement: Three stereotypical hackers from three different countries, described relative to the American baseline.
The German: Methodical, good at details, prone to over-engineering things, careful about tests. Territorial: as a project lead, can get mightily offended if you propose to mess with his orderly orderliness. Good at planned architecture too, but doesn’t deal with novelty well and is easily disoriented by rapidly changing requirements. Rude when cornered. Often wants to run things; just as often it’s unwise to let him.
The Indian: Eager, cooperative, polite, verbally fluent, quick on the uptake, very willing to adopt new methods, excessively deferential to anyone perceived as an authority figure. Hard-working, but unwilling to push boundaries in code or elsewhere; often lacks the courage to think architecturally. Even very senior and capable Indian hackers can often seem like juniors because they’re constantly approval-seeking.
The Russian: A morose, wizardly loner. Capable of pulling amazing feats of algorithmic complexity and how-did-he-spot-that debugging out of nowhere. Mathematically literate. Uncommunicative and relatively poor at cooperating with others, but more from obliviousness than obnoxiousness. Has recent war stories about using equipment that has been obsolete in the West for decades.
Like most stereotypes, these should neither be taken too literally nor dismissed out of hand. It’s not difficult to spot connections to other aspects of the relevant national cultures.
A curious and interesting thing is that we were unable to identify any other national styles. Hackers from other Anglophone countries seem indistinguishable from Americans except by their typing accents. There doesn’t seem to be a characteristic French or Spanish or Italian style – or possibly we just don’t have a large enough sample to notice the patterns. For hackers from almost anywhere else outside Western Europe, we certainly don’t.
Can anyone add another portrait to this gallery? It would be particularly interesting to me to find out what stereotypes hackers from other countries have about Americans.
If you read any amount of history, you will discover that people of various times and places have matter-of-factly believed things that today we find incredible (in the original sense of “not credible”). I have found, however, that one of the most interesting questions one can ask is “What if it really was like that?”
That is, what if our ancestors weren’t entirely lying or fantasizing when they believed in…say…the existence of vampires? If you’re willing to ask this question with an open mind, you might discover that there is a rare genetic defect called “erythropoietic porphyria” that can mimic some of the classical stigmata of vampirism. Victims’ gums may be drawn back on the teeth, making said teeth appear fanglike; they are likely to be photophobic, shunning bright light; and, being anemic, they may develop a craving for blood…
I think the book that taught me to ask “What if it really was like that?” systematically might have been Julian Jaynes’s The Origin of Consciousness in the Breakdown of the Bicameral Mind. Jaynes observed that Bronze Age literary sources take for granted the routine presence of god-voices in people’s heads. Instead of dismissing this as fantasy, he developed a theory that until around 1000 BC it really was like that – humans had a bicameral consciousness in which one chamber or operating subsystem, programmed by culture, manifested to the other as the voice of God or some dominant authority figure (“my ka is the ka of the king”). Jaynes’s ideas were long dismissed as brilliant but speculative and untestable; however, some of his predictions are now being borne out by neuroimaging techniques not available when he was writing.
A recent comment on this blog pointed out that many cultures – including our own until around the time of the Industrial Revolution – constructed many of their customs around the belief that women are nigh-uncontrollably lustful creatures whose sexuality has to be restrained by strict social controls and even the amputation of the clitoris (still routine in large parts of the Islamic world). Of course today our reflex is to dismiss this as pure fantasy with no other function than keeping half the human species in perpetual subjection. But some years ago I found myself asking “What if it really was like that?”
Let’s be explicit about the underlying assumptions here and their consequences. It used to be believed (and still is over much of the planet) that a woman in her fertile period left alone with any remotely presentable man not a close relative would probably (as my commenter put it) be banging him like a barn door in five minutes. Thus, as one consequence, the extremely high value traditionally placed on physical evidence of virginity at time of marriage.
Could it really have been like that? Could it still be like that in the Islamic world and elsewhere today? One reason I think this question demands some attention is that the costs of the customs required to restrain female sexuality under this model are quite high on many levels. At minimum you have to prevent sex mixing, which is not merely unpleasant for both men and women but requires everybody to invest lots of effort in the system of control (wives and daughters cannot travel or in extreme cases even go outside without male escort, homes have to be built with zenanas). At the extreme you find yourself mutilating the genitalia of your own daughters as they scream under the knife.
I don’t think customs that expensive can stay in force without solid reason. And it’s not sufficient to fall back on feminist cant and say the men are doing it to oppress the women, as if desire to oppress were a primary motive that doesn’t require explanation. For one thing, in such cultures women (especially older women out of their fertile period) are always key figures in the control system. It couldn’t function without them being ready to take a hard line against sexual “impurity” – often, a harder line than men do.
And, in fact, a large body of historical evidence suggests that it is possible to train most women to be uncontrollably lustful with strange men. All you have to do is limit their sexual opportunities enough, as in a system of purdah or strict gender segregation that almost totally prevents close contact with males other than close relatives.
What I’m suggesting is that the they’ll-fling-themselves-at-any-male model of female behavior believed by strict patriarchal societies is actually a self-fulfilling prophecy – that is, if your society begins to evolve towards purdah, women (who have only a limited fertile period) adapt by becoming more sexually aggressive. This in turn motivates stricter customs.
The effect is a vicious circle. At the extreme, the societies in which everyone expects women to bang strangers on five minutes’ notice find they elicit exactly that behavior with the methods they employ to suppress it. Well, except for clitoridectomy; that probably works, being your last resort when you’ve noticed that social repression is making your fertile women ever more uncontrollable when they can get at men.
We can find some support for this theory even in present time. I’ve noted before that in our modern, liberated era women seem not to be demanding as high a clearing price for sex as they should. In traditional terms, they’re being lustful. And this is in a culture that probably encourages sex mixing as much or more than any in history, driving the opportunity cost associated with not randomly humping strangers to an unprecedented low.
I’m not writing to suggest any particular thing we should do about this. What I’m encouraging is a variant of the exercise I’ve previously called “killing the Buddha”. Sometimes the consequences of supposing that our ancestors reported their experience of the world faithfully, and that their customs were rational adaptations to that experience, lead us to conclusions we find preposterous or uncomfortable. I think that the more uncomfortable we get, the more important it becomes to ask ourselves “What if it really was like that?”
I’ve been experimenting with tiling window managers recently. I tried out awesome and xmonad, and read documentation on several others including dwm and wmii. The proximate cause is that I’ve been doing a lot of surgery on large repositories recently, and when you get up to 50K commits that’s enough to create serious memory pressure on my 4G of core (don’t laugh, I tend to drive my old hardware until the bolts fall out). A smaller, lighter window manager can actually make a difference in performance.
More generally, I think the people advocating these have some good UI arguments – OK, maybe only when addressing hard-core hackers, but hey we’re users too. Ditching the overhead of frobbing window sizes and decorations in favor of getting actual work done is a kind of austerity I can get behind. My normal work layout consisted of just three big windows that nearly filled the screen anyway – terminal, Emacs and browser. Why not cut out the surrounding cruft?
I wasn’t able to settle on a tiling wm that really satisfied, though, until my friend HedgeMage pointed me at i3. After a day or so of using it I suspect I’ll be sticking with it. The differences from other tiling wms are not major but it seems just enough better designed and documented to cross a threshold for me, from interesting novelty to useful tool. Along with this change I’m ditching Chatzilla for irssi; my biggest configuration challenge in the new setup, actually, was teaching irssi how to use libnotify so I get visible IRC activity cues even when irssi itself is hidden.
One side effect of i3 is that I think it increases the expected utility of a multi-monitor configuration enough to actually make me shell out for a dual-head card and another flatscreen – the documentation suggests (and HedgeMage confirms) that i3 workspace-to-display mapping works naturally and well. The auxiliary screen will be all browser, all the time, leaving the main display for editing and shell windows.
It’s not quite a perfect fit. The i3 model of new-window layout is based on either horizontally or vertically splitting parent windows into equal parts. While this produces visually elegant layouts, for some applications I’d like it to try harder to split space so that the new application gets its preferred size rather than half the parent. In particular I want my terminal emulators and Emacs windows to be exactly 80 columns unless I explicitly resize them. I’ve proposed some rules for this on the i3 development list and may try to implement them in the i3 codebase.
I’m not quite used to the look yet. On the one hand, seeing almost all graphics banished from my screen in favor of fixed-width text still seems weirdly retro, almost as though it were a reversion to the green screens of my youth. On the other hand, we sure didn’t have graphical browsers in another window then. And the effect of the whole is … clean, is the best way I can put it. Elegant. Uncluttered. I like that.
Even old Unix hands like me take the Windows-Icons-Mouse-Pointer style of interface for granted nowadays, but i3 does fine without the I in WIMP. This makes me wonder how much of the rest of the WIMPiness of our interfaces is a mistake, an overelaboration, a local peak in design space rather than a global one.
I was willing enough to defend the CLI for expert users in The Art of Unix Programming, and I’ve put my practice where my theory is in designing tools like reposurgeon. Now I wonder if I should have been still more of an – um – iconoclast.
Today, while doing research to answer some bug mail, I learned that all versions of Android since 4.0 (Ice Cream Sandwich) have used gpsd to read the take from the onboard GPS. Sadly, gpsd is getting blamed in some quarters for excessive battery drain. But it’s not gpsd’s fault! Here is what’s actually going on.
Activating the onboard GPS in your phone eats power. Normally, Android economizes by computing a rough location from signal-tower strengths, information it gathers anyway in normal operation. To get a more precise fix, an app (such as Google Maps) can request “Detailed Location”. This is what is happening when the GPS icon appears on your status bar.
Requesting “Detailed Location” wakes up gpsd, causing it to power up the on-board GPS and begin interpreting the NMEA data stream it ships. Somewhere in Android’s Java code (I don’t know the details), the reports from gpsd are captured and made available to the Java API that apps can see. Normally this mode is a little expensive, mainly because of the power cost of running the GPS hardware; this is why Android doesn’t keep the GPS powered up all the time. Normally the gpsd daemon itself is very economical; we’ve measured its processor utilization on low-power ARM chips and it’s below the noise floor of the process monitor. As it should be; the data rate from a GPS isn’t very high, so there’s simply no reason for gpsd to spend a lot of cycles.
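To give a flavor of what “interpreting the NMEA data stream” involves, here’s a toy parser for one sentence type. This is a sketch of mine in Python; gpsd itself is C, validates checksums, and handles vastly more than this:

```python
# Minimal parse of an NMEA GGA (fix data) sentence. Latitude arrives as
# ddmm.mmmm and longitude as dddmm.mmmm, so we split off the degrees field
# and convert the remaining minutes to a decimal fraction.
def parse_gga(sentence):
    body, _, _checksum = sentence.strip().lstrip("$").partition("*")
    fields = body.split(",")
    assert fields[0].endswith("GGA")
    def to_deg(val, hemi, width):
        deg = float(val[:width]) + float(val[width:]) / 60.0
        return -deg if hemi in "SW" else deg
    return (to_deg(fields[2], fields[3], 2),   # latitude
            to_deg(fields[4], fields[5], 3))   # longitude

lat, lon = parse_gga("$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47")
print(round(lat, 4), round(lon, 4))  # 48.1173 11.5167
```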
Nevertheless, instances of excessive battery drain have been reported with the system monitor fingering gpsd as the culprit, especially on the Samsung Galaxy SIII. In some cases this happens when the onboard GPS is powered off. In every case I’ve found through Googling for “Android gpsd”, the actual bad guy is an app that is both requesting Detailed Location and running in background; if you deinstall the app, the battery drain goes away. (On the Galaxy SIII, the ‘app’ may actually be the “Remote Location Service” in the vendor firmware; you can’t remove it, but you can disable it through Settings.)
I suspect that there’s something else going on here. The fact that gpsd is reported to be processor-hogging when the GPS is powered off suggests that it’s spinning on its main select(2) call. We’ve occasionally seen behavior like this before, and it has always been down to some bug or misconfiguration in the Linux kernel’s serial I/O layer (gpsd exercises that layer in some unusual ways). This is consistent with the relative rareness of the bug; likely it’s only happening on a couple of specific phone models. If every background app using the GPS caused this problem, I’d have had a mob of pitchfork-wielding peasants at my castle door long since…
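For the curious, here’s the classic shape of such a spin, sketched in Python rather than gpsd’s C (illustrative only, not gpsd’s actual code): a descriptor at EOF counts as “readable” forever, so a loop that never notices the zero-length read will re-poll it flat out.

```python
import os, select

# A fd at EOF is always "readable"; failing to handle the empty read
# turns an otherwise-idle select() loop into a busy-wait.
def poll_once(fd):
    readable, _, _ = select.select([fd], [], [], 0)
    if fd in readable:
        if os.read(fd, 4096) == b"":
            return "eof"   # correct handling: close fd, drop it from the set
        return "data"
    return "idle"

r, w = os.pipe()
os.close(w)          # writer gone: r is now permanently at EOF
print(poll_once(r))  # eof -- and select() would say the same on every pass
os.close(r)
```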
TL;DR: It’s not gpsd’s fault – find the buggy app and remove it.
All this having been said, why, yes I do think it’s seriously cool that gpsd is running in all newer Android phones. My code is ubiquitous and inescapable, bwahahahaha! But you knew that already.
One of my commenters recently speculated in an accusing tone that I might be a natural-rights libertarian. He was wrong, but explaining why is a good excuse for writing an essay I’ve been tooling up to do for a long time. For those of you who aren’t libertarians, this is not a parochial internal dispute – in fact, it cuts straight to the heart of some long-standing controversies about consequentialism versus deontic ethics. And if you don’t know what those terms mean, you’ll have a pretty good idea by the time you’re done reading.
There are two philosophical camps in modern libertarianism. What distinguishes them is how they ground the central axiom of libertarianism, the so-called “Non-Aggression Principle” or NAP. One of several equivalent formulations of NAP is: “Initiation of force is always wrong.” I’m not going to attempt to explain that axiom here or discuss various disputes over the NAP’s application; for this discussion it’s enough to note that libertarians take the NAP as a given unanimously enough to make it definitional. What separates the two camps I’m going to talk about is how they justify the NAP.
“Natural Rights” libertarians ground the NAP in some a priori belief about religion or natural law from which they believe they can derive it. Often they consider the “inalienable rights” language in the U.S.’s Declaration of Independence, abstractly connected to the clockmaker-God of the Deists, a model for their thinking.
“Utilitarians” justify the NAP by its consequences, usually the prevention of avoidable harm and pain and (at the extreme) megadeaths. Their starting position is at bottom the same as Sam Harris’s in The Moral Landscape; ethics exists to guide us to places in the moral landscape where total suffering is minimized, and ethical principles are justified post facto by their success at doing so. Their claim is that NAP is the greatest minimizer.
The philosophically literate will recognize this as a modern and specialized version of the dispute between deontic ethics and consequentialism. If you know the history of that one, you’ll be expecting all the accusations that fly back and forth. The utilitarians slap at the natural-rights people for handwaving and making circular arguments that ultimately reduce to “I believe it because $AUTHORITY told me so” or “I believe it because ya gotta believe in something“. The natural-rights people slap back by acidulously pointing out that their opponents are easy prey for utility monsters, or should (according to their own principles) be willing to sacrifice a single innocent child to bring about their perfected world.
My position is that both sides of this debate are badly screwed up, in different ways. Basically, all the accusations they’re flinging at each other are correct and (within the terms of their traditional debates and assumptions) unanswerable. We can get somewhere better, though, by using their objections to repair each other. Here’s what I think each side has to give up…
The natural-rightsers have to give up their hunger for a-priori moral certainty. There’s just no bottom to that; it’s contingency all the way down. The utilitarians are right that every act is an ethical experiment – you don’t know “right” or “wrong” until the results come in, and sometimes the experiment takes a very long time to run. The parallel with epistemology, in which all non-consequentialist theories of truth collapse into vacuity or circularity, is exact.
The utilitarians, on the other hand, have to give up on their situationalism and their rejection of immutable rules as voodoo or hokum. What they’re missing is how the effects of payoff asymmetry, forecasting uncertainty, and decision costs change the logic of utility calculations. When the bad outcomes of an ethical decision can be on the scale of genocide, or even the torturing to death of a single innocent child, it is proper and necessary to have absolute rules to prevent these consequences – rules that we treat as if they were natural laws or immutable axioms or even (bletch!) God-given commandments.
Let’s take as an example the No Torturing Innocent Children To Death rule. (I choose this, of course, in reference to a famous critique of Benthamite utilitarianism.) Suppose someone were to say to me “Let A be the event of torturing an innocent child to death today. Let B be the condition that the world will be a paradise of bliss tomorrow. I propose to violate the NTICTD rule by performing A in order to bring about B”.
My response would be “You cannot possibly have enough knowledge about the conditional probability P(B|A) to justify this choice.” In the presence of epistemic uncertainty, absolute rules to bound losses are rational strategy. A different way to express this is within a Kripke-style possible-futures model: the rationally-expected consequences of allowing violations of the NTICTD rule are so bad over so many possible worlds that the probability of landing in a possible future where the violation led to an actual gain in utility is negligible.
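The point can be put in crude expected-utility terms (my notation, nothing canonical): let C be the certain disutility of the torture and G the gain if paradise actually follows. Then

```latex
E[\Delta U \mid A] \;=\; P(B \mid A)\,G \;-\; C,
\qquad\text{so $A$ pays off only if}\quad P(B \mid A) \;>\; \frac{C}{G}.
```

The loss-bounding argument is that when C is on the scale of a tortured child or a genocide, no human forecaster’s estimate of P(B|A) is reliable enough to clear that bar.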
My position is that the NAP is a necessary loss-bounding rule, like the NTICTD rule. Perhaps this will become clearer if we perform a Kantian universalization on it, recasting it as “You shall not construct a society in which the initiation of force is normal.” I hold that, after the Holocaust and the Gulag, you cannot possibly have enough certainty about good results from violating this rule to justify any policy other than treating the NAP as absolute. The experiment has been run already, it is all of human history, and the bodies burned at Bergen-Belsen and buried in the Katyn Wood are our answer.
So I don’t fit neatly in either camp, nor want to. On a purely ontological level I’m a utilitarian, because being anything else is incoherent and doomed. But I respect and use natural-rights language, because when that camp objects that the goals of ethics are best met with absolute rules against certain kinds of harmful behavior they’re right. There are too many monsters in the world, of utility and every other kind, for it to be otherwise.
This is how the AGW panic ends: not with a bang, but with a whimper.
The Economist, which (despite a recent decline) remains probably the best news magazine in the English language, now admits that (a) global average temperature has been flat for 15 years even as CO2 levels have been rising rapidly, (b) surface temperatures are at the lowest edge of the range predicted by IPCC climate models, (c) on current trends, they will soon fall clean outside and below the model predictions, (d) estimates of climate sensitivity need revising downwards, and (e) something, probably multiple things, is badly wrong with AGW climate models.
Do I get to say “I told you so!” yet?
The wheels are falling off the bandwagon. The Economist has so much prestige in the journalistic establishment that it’s going to become difficult now for the mainstream media to continue averting their eyes from the evidence. Honest AGW advocates have been the victims of a massive error cascade enlisted in aid of a vast and vicious series of political and financial scams; it’s time for them to wake up and realize they’ve been had, taken, swindled, conned, and used.
I can’t but think the record cold weather in England has got something to do with this. Only a few years ago AGW panicmongers were screaming that British children would never see another snowfall – now they’re struggling with nastier winter weather than has been seen in a century. Perhaps the big chill woke somebody at The Economist up?
And if you think I’m gloating now, wait until GAT actually falls far enough below the low end of IPCC projections that the Economist has to admit that. I plan to be unseemly and insufferable about it, oh yes I do.
I shipped reposurgeon 2.29 a few minutes ago. The main improvement in this version is speed – it now reads in and analyzes Subversion repositories at a clip of more than 11,000 commits per minute. This is, in case you are in any doubt, ridiculously fast – faster than the native Subversion tools do it, and for certain far faster than any of the rival conversion utilities can manage. It’s well over an order of magnitude faster than when I began seriously tuning for speed three weeks ago. I’ve learned some interesting lessons along the way.
The impetus for this tune-up was the Battle for Wesnoth repository. The project’s senior devs finally decided to move from Subversion to git recently. I wasn’t actively involved in the decision myself, since I’ve been semi-retired from Wesnoth for a while, but I supported it and was naturally the person they turned to to do the conversion. Doing surgical runs on that repository rubbed my nose in the fact that code with good enough performance on a repository 500 or 5000 commits long won’t necessarily cut it on a repository with over 56000 commits. Two-hour waits for the topological-analysis phase of each load to finish were kicking my ass – I decided that some serious optimization effort seemed like a far better idea than twiddling my thumbs.
First I’ll talk about some things that didn’t work.
pypy, which is alleged to use fancy JIT compilation techniques to speed up a lot of Python programs, failed miserably on this one. My pypy runs were 20%-30% slower than plain Python. The pypy site warns that pypy’s optimization methods can be defeated by tricky, complex code, and perhaps that accounts for it; reposurgeon is nothing if not algorithmically dense.
cython didn’t emulate pypy’s comic pratfall, but didn’t deliver any speed gains distinguishable from noise either. I wasn’t very surprised by this; what it can compile is mainly control structure, which I didn’t expect to be a substantial component of the runtime compared to (for example) string-bashing during stream-file parsing.
My grandest (and perhaps nuttiest) plan was to translate the program into a Lisp dialect with a decent compiler. Why Lisp? Well…I needed (a) a language with unlimited-extent types that (b) could be compiled to machine-code for speed, and (c) minimized the semantic distance from Python to ease translation (that last point is why you Haskell and ML fans should refrain from even drawing breath to ask your obvious question; instead, go read this). After some research I found Steel Bank Common Lisp (SBCL) and began reading up on what I’d need to do to translate Python to it.
The learning process was interesting. Lisp was my second language; I loved it and was already expert in it by 1980 well before I learned C. But since 1982 the only Lisp programs I’ve written have been Emacs modes. I’ve done a whole hell of a lot of those, including some of the most widely used ones like GDB and VC, but semantically Emacs Lisp is a sort of living fossil coelacanth from the 1970s, dynamic scoping and all. Common Lisp, and more generally the evolution of Lisp implementations with decent alien type bindings, passed me by. And by the time Lisp got good enough for standalone production use in modern environments I already had Python in hand.
So, for me, reading the SBCL and Common Lisp documentation was a strange mixture of learning a new language and returning to very old roots. Yay for lexical scoping! I recoded about 6% of reposurgeon in SBCL, then hit a couple of walls. One of the lesser walls was a missing feature in Common Lisp corresponding to the __str__ special method in Python. Lisp types don’t know how to print themselves, and as it turns out reposurgeon relies on this capability in various and subtle ways. Another problem was that I couldn’t easily see how to duplicate Python’s subprocess-control interface – at all, let alone portably across common Lisp implementations.
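For non-Pythonists, the feature in question looks like this (a made-up miniature, not reposurgeon’s actual classes):

```python
class Commit:
    """An object that knows how to print itself via __str__."""
    def __init__(self, mark):
        self.mark = mark
    def __str__(self):
        return "commit :%d" % self.mark

# Any string interpolation picks up __str__ implicitly -- the capability
# that code like reposurgeon's leans on in various and subtle ways.
print("deleting %s" % Commit(42))  # deleting commit :42
```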
But the big problem was CLOS, the Common Lisp Object System. I like most of the rest of Common Lisp now that I’ve studied it. OK, it’s a bit baroque and heavyweight and I can see where it’s had a couple of kitchen sinks pitched in – if I were choosing a language on purely esthetic grounds I’d prefer Scheme. But I could get comfortable with it, except for CLOS.
But me no buts about multimethods and the power of generics – I get that, OK? I see why it was done the way it was done, but the brute fact remains that CLOS is an ugly pile of ugly. More to the point in this particular context, CLOS objects are quite unlike Python objects (which are in many ways more like CL defstructs). It was the impedance mismatch between Python and CLOS objects that really sank my translation attempt, which I had originally hoped could be done without seriously messing with the architecture of the Python code. Alas, that was not to be. Which refocused me on algorithmic methods of improving the Python code.
Now I’ll talk about what did work.
What worked, ultimately, was finding operations that have instruction costs O(n**2) in the number of commits and squashing them. At this point a shout-out goes to Julien “FrnchFrgg” Rivaud, a very capable hacker trying to use reposurgeon for some work on the Blender repository. He got interested in the speed problem (the Blender repo is also quite large) and was substantially helpful with both patches and advice. Working together, we memoized some expensive operations and eliminated others, often by incrementally computing reverse-lookup pointers when linking objects together in order to avoid having to traverse the entire repository later on.
Even just finding all the O(n**2) operations isn’t necessarily easy in a language as terse and high-level as Python; they can hide in very innocuous-looking code and method calls. The biggest bad boy in this case turned out to be child-node computation. Fast import streams express “is a child of” directly; for obvious reasons, a repository analysis often has to look at all the children of a given parent. This operation blows up quite badly on very large repositories even if you memoize it; the only way to make it fast is to precompute all the reverse lookups and update them when you update the forward ones.
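The pattern, schematically (my illustration, not actual reposurgeon code): maintain the reverse pointers at link time instead of rediscovering them by scanning.

```python
# Replacing an O(n) "scan everything for my children" query with
# reverse-lookup lists maintained incrementally as parent links are set.
class Commit:
    def __init__(self, mark):
        self.mark = mark
        self.parents = []
        self.children = []   # reverse lookups, kept current below

    def add_parent(self, parent):
        self.parents.append(parent)
        parent.children.append(self)  # incremental update: no later traversal

# The naive alternative: O(n) per query, O(n**2) over a whole repository.
def children_by_scan(repo, parent):
    return [c for c in repo if parent in c.parents]

a, b, c = Commit("1"), Commit("2"), Commit("3")
b.add_parent(a)
c.add_parent(a)
assert a.children == children_by_scan([a, b, c], a)  # same answer, O(1) vs O(n)
```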
Another time sink (the last one to get solved) was identifying all tags and resets attached to a particular commit. The brute-force method (look through all tags for any with a from member matching the commit’s mark) is expensive mainly because to look through all tags you have to look through all the events in the stream – and that’s expensive when there are 56K of them. Again, the solution was to give each commit a list of back-pointers to the tags that reference it and make sure all the mutation operations update it properly.
It all came good in the end. In the last benchmarking run before I shipped 2.29, reposurgeon processed 56424 commits in 303 seconds. That’s 186 commits per second, or 11160 per minute. That’s good enough that I plan to lay off serious speed-tuning efforts; the gain probably wouldn’t be worth the increased code complexity.
UPDATE: A week later, after more speed-tuning (mainly by Julien, because reposurgeon was still slow on the very large repo he’s working with), analysis speed is up to 282 commits/sec (16920 per minute) and a curious thing has occurred: pypy now produces an actual speedup, to around 338 commits/sec (20280 per minute). We don’t know why, but apparently the algorithmic optimizations somehow gave pypy’s JIT better traction. This is particularly odd because the density of the code actually increased.
In “How crowdfunding and the JOBS Act will shape open source companies”, Fred Trotter proposes that crowdfunding à la Kickstarter and IndieGoGo is going to displace venture capitalists as the normal engine of funding for open-source tech startups, and that this development will be a tremendous enabler. Trotter paints a rosy picture of idealistic geeks enabled to do fully open-source projects because they’ll no longer feel as pressed to offer a lucrative early exit to VCs on the promise of rent capture from proprietary technology.
Some of the early evidence from crowdfunding successes does seem to point at this kind of outcome, especially in areas like 3D printing and consumer electronics that have a lot of geek buy-in. And I’d love to believe all of Trotter’s optimism. But there’s a nagging problem of scale here that makes me think the actual consequences will be more mixed and messy than he suggests.
In general, VCs don’t want to talk to you at all unless they can see a good case for ploughing in at least $2 million, and they don’t get really interested below a scale of about $15M. This is because the amount of time required for them to babysit an investment (sit on the company’s board, assist job searches, etc.) doesn’t scale down for smaller investments – small plays are just as much work for much less money. This is why there’s a second class of investors, often called “angels”, who trade early financing on the $100K order of magnitude for equity. The normal trajectory of a startup goes from friends & family money through angels up to VCs. Each successive stage in this pipeline is generally placing a larger bet and accordingly has less risk tolerance and a higher time discount than the previous; VCs, in particular, will be looking for a fast cash-out via initial public offering.
The problem is this: it’s quite rare for crowdfunding to raise money even equivalent to the low-end threshold of a VC, let alone the volume they lay down when they’re willing to bet heavily. Unless crowdfunding becomes an order of magnitude more effective than it is now (which seems to me possible but unlikely) the financing source it will displace isn’t VCs but angels.
On the face of things, this would seem to sink Trotter’s optimism – if VCs don’t see any competition for investments in their preferred range there’s no obvious reason that VC pressure for proprietary rent-collection should decrease at all. But I think there will be significant second-order effects of the kind Trotter envisions via another route. That’s because crowdfunders are unlike angels in one very important respect: they’re not buying equity. Typically they’re contributing to buy an option on a product that can’t be built without startup capital. There’s no pressure on the company to produce a return to “investors” beyond that option, and in particular nobody pushing for a fast cash-out.
What this does is improve the attractiveness of a growth path that doesn’t pass through an IPO or the VCs at all. I think what we’ll see is a lot more startups crowdfunding to angel levels of capital investment, then avoiding the next round of financing in favor of more crowdfunders and endogenous growth. But think about this: how will the VCs adapt to this change in incentives?
They’ll still want to turn their ability to nurse early-stage companies into cash, but their power to set the terms of that trade will be weakened precisely to the extent that crowdfunding makes the low-and-slow, no-IPO route more attractive. In another way, though, crowdfunders make a VC’s job easier. VCs can monitor the results of crowdfunding to measure the size and estimate the stickiness of the startup’s market, then see how effectively the startup executes on its promises. (You can bet that the smarter VCs are already doing this.)
Now look at the sum of these trends. If a startup has a successful crowdfunder, its bargaining power with the VCs increases in two ways. First, it’s going to be less desperate for capital than a company that can’t run out and do another crowdfunder for the next product. Second, the VC’s uncertainty about its ability to build and sell will be reduced. These changes will both increase the startup’s ability to bargain for doing things its way and reduce the VC’s pressure for an early IPO.
At the extreme, we might end up with a new normal in which VCs compete with each other to court startups that have done successful crowdfunders (“Hey! Think about what you could do with fifteen megabucks and call us back!”), neatly inverting the present situation in which startups have to compete for the attention of VCs. That, of course, would be a situation in which open source wins huge.
It was inevitable, I suppose; reposurgeon now has its own Emacs mode.
The most laborious task in the reposurgeon conversion of a large CVS or Subversion repository is editing the comment history. You want to do this for two reasons: (1) to massage multiline comments into the summary-line-plus-continuation form that plays well with git log and gitk, and (2) to lift Subversion and CVS commit references from, e.g., ’2345′ to [[SVN:2345]] so reposurgeon can recognize them unambiguously and turn them into action stamps.
In the new release 2.22, there’s a small Emacs mode with several functions that help semi-automate this process.
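The mode itself is Emacs Lisp, but the two transformations it semi-automates are easy to illustrate in Python. This is a sketch of the idea, not the mode’s actual code; note that the regex deliberately matches only the unambiguous ‘r2345’ form, since lifting bare numbers is exactly the part that needs a human in the loop:

```python
import re

def lift_svn_refs(comment):
    # Wrap Subversion revision references like "r2345" in the
    # [[SVN:2345]] form that reposurgeon turns into action stamps.
    return re.sub(r'\br(\d+)\b', r'[[SVN:\1]]', comment)

def add_summary_gap(comment):
    # Massage a multiline comment toward summary line + blank line +
    # continuation, the shape that git log and gitk display well.
    lines = comment.strip().split('\n')
    if len(lines) > 1 and lines[1].strip():
        lines.insert(1, '')
    return '\n'.join(lines)
```

Either way the point of the Emacs mode is the same: keep a human making the judgment calls while the machine does the mechanical part.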
I terminated one of my open-source projects today. MIXAL is dead; it has been replaced by the GNU MIX Development Kit, alias MDK. Open-source projects die so seldom that the circumstances deserve a minor note.
I didn’t actually write MIXAL; somebody named ‘Darius Bacon’ (probably this guy) did it, under DOS. I stumbled across it in 1998, ported it to Unix, and fixed some minor bugs. Later, when I was in semi-regular contact with Don Knuth, he contributed two of his test programs and a text description of MIX from The Art of Computer Programming. Don gets open source; he was careful to arrange with his publisher terms that allow this material to be redistributed not just by me but by any project shipping under an open-source license.
I’m not sure when the MDK project started. When I first ran across it, it seemed to me to be not as capable as MIXAL; I made a note of it in my README file but did not consider simply handing off to it. That might have been as much as a decade ago; when I re-encountered it recently, it looked a great deal more polished and mature. I, on the other hand, had barely touched MIXAL since I first ported it.
The world needs one competently-written MIX interpreter, but it doesn’t need two. So I looked up MDK’s maintainer and negotiated a handoff; he got the material Don Knuth donated to MIXAL, and I got to put MIXAL to a tidy end.
This is what the open-source version of what musicologists call the “folk process” looks like. Re-use, improve, contribute – and when someone else is clearly doing a better job, let go.