My recent collision with Daniel Dennett on the topic of free will has caused me to reflect on how best to publicly resolve differences of opinion. In fact, this has been a recurring theme of late. In August, I launched the Moral Landscape Challenge, an essay contest in which I invited readers to attack my conception of moral truth. I received more than 400 entries, and I look forward to publishing the winning essay later this year. Not everyone gets the opportunity to put his views on the line like this, and it is an experience that I greatly value. I spend a lot of time trying to change people’s beliefs, but I’m also in the business of changing my own. And I don’t want to be wrong for a moment longer than I have to be.
In response to the Moral Landscape Challenge, the psychologist Jonathan Haidt issued a challenge of his own: He bet $10,000 that the winning essay will fail to persuade me. This wager seems in good fun, and I welcome it. But Haidt then justified his confidence by offering a pseudo-scientific appraisal of the limits of my intellectual honesty. He did this by conducting a keyword search of my books: The frequency of “certainty” terms, Haidt says, reveals that I (along with the other “New Atheists”) am even more blinkered by dogmatism and bias than Sean Hannity, Glenn Beck, and Ann Coulter. This charge might have been insulting if it weren’t so silly. It is almost impossible to believe that Haidt expects his “research” on this topic to be taken seriously. But apparently he does. In fact, he claims to be continuing it.
Consider the following two passages (keywords expressing “certainty” are in boldface):
It is obvious and undeniable—and must always be remembered—that the Bible was the product of merely human minds. As such, it cannot provide true answers to every factual question that will arise for us in the 21st century.
It is obvious and undeniable—and must always be remembered—that the Bible was the product of Divine Omniscience. As such, it provides true answers to every factual question that will arise for us in the 21st century.
According to Haidt’s methodology, these passages exhibit the same degree of dogmatism. I hope it won’t appear too expressive of certainty on my part to observe how terrifically stupid that conclusion is. If, as Haidt alleges, verbal reasoning is just a way for people to “guard their reputations,” one wonders why he doesn’t use it for that purpose.
Haidt is right to observe that anyone can be misled by his own biases. (He just has the unfortunate habit of writing as though no one else understands this.) I will also concede that I don’t tend to lack confidence in my published views (that is one of the reasons I publish them). After my announcement of the Moral Landscape Challenge, a few readers asked whether I’ve ever changed my mind about anything. I trust my wrangling with Dennett has only deepened this sense of my incorrigibility. Perhaps it is worth recalling more of Haidt’s adamantine wisdom on this point:
[T]he benefits of disconfirmation depend on social relationships. We engage with friends and colleagues, but we reject any critique from our enemies.
Well, then I must be a very hard case. I received a long and detailed criticism of my work from a friend, Dan Dennett, and found it totally unpersuasive. How closed must I be to the views of my enemies?
Enter Jeremy Scahill: I’ve never met Scahill, and I’m not aware of his having attacked me in print, so it might seem a little paranoid to categorize him as an “enemy.” But he recently partnered with Glenn Greenwald and Murtaza Hussain to launch The Intercept, a new website dedicated to “fearless, adversarial journalism.” Greenwald has worked very hard to make himself my enemy, and Hussain has worked harder still. Both men have shown themselves to be unprofessional and unscrupulous whenever their misrepresentations of my views have been pointed out. This is just to say that, while I don’t usually think of myself as having enemies, if I were going to pick someone to prove me wrong on an important topic, it probably wouldn’t be Jeremy Scahill. I am, in Haidt’s terms, highly motivated to reason in a “lawyerly” way so as not to give him the pleasure of changing my mind. But change it he has.
Generally, I have supported President Obama’s approach to waging our war against global jihadism, and I’ve always assumed that I would approve of his targets and methods were I privy to the same information he is. I’ve also said publicly, on more than one occasion, that I thought our actions should be mostly covert. So the president’s campaign of targeted assassination has had my full support, and I lost no sleep over the killing of Anwar al-Awlaki. To me, the fact that he was an American citizen was immaterial.
I have also been very slow to worry about NSA eavesdropping. My ugly encounters with Greenwald may have colored my perception of this important story—but I just don’t know what I think about Edward Snowden. Is he a traitor or a hero? It still seems too soon to say. I don’t know enough about the secrets he has leaked or the consequences of his leaking them to have an opinion on that question.
However, last night I watched Scahill’s Oscar-nominated documentary Dirty Wars—twice. The film isn’t perfect. Despite the gravity of its subject matter, there is something slight about it, and its narrow focus on Scahill seems strangely self-regarding. At moments, I was left wondering whether important facts were being left out. But my primary experience in watching this film was of having my settled views about U.S. foreign policy suddenly and uncomfortably shifted. As a result, I no longer think about the prospects of our fighting an ongoing war on terror in quite the same way. In particular, I no longer believe that a mostly covert war makes strategic or moral sense. Among the costs of our current approach are a total lack of accountability, abuse of the press, collusion with tyrants and warlords, a failure to enlist allies, and an ongoing commitment to secrecy and deception that is corrosive to our politics and to our standing abroad.
Any response to terrorism seems likely to kill and injure innocent people, and such collateral damage will always produce some number of future enemies. But Dirty Wars made me think that the consequences of producing such casualties covertly are probably far worse. This may not sound like a Road to Damascus conversion, but it is actually quite significant. My view of specific questions has changed—for instance, I now believe that the assassination of al-Awlaki set a very dangerous precedent—and my general sense of our actions abroad has grown conflicted. I do not doubt that we need to spy, maintain state secrets, and sometimes engage in covert operations, but I now believe that the world is paying an unacceptable price for the degree to which we are doing these things. The details of how we have been waging our war on terror are appalling, and Scahill’s film paints a picture of callousness and ineptitude that shocked me. Having seen it, I am embarrassed to have been so trusting and complacent with respect to my government’s use of force.
Clearly, this won’t be the last time I’ll be obliged to change my mind. In fact, I’m sure of it. Some things one just knows because they are altogether obvious—and, well, undeniable. At least, one always denies them at one’s peril. So I remain committed to discovering my own biases. And whether they are blatant, or merely implicit, I will work extremely hard to correct them. I’m also confident that if I don’t do this, my readers will inevitably notice. It’s necessary that I proceed under an assurance of my own fallibility—never infallibility!—because it has proven itself to be entirely accurate, again and again. I’m certain this would remain true were I to live forever. Some things are just guaranteed. I think that self-doubt is wholly appropriate—essential, frankly—whenever one attempts to think precisely and factually about anything—or, indeed, about everything. Being a renowned scientist, Jonathan Haidt must fundamentally agree. I urge him to complete his research on my dogmatism and cognitive closure at the soonest opportunity. The man has a gift—it is pure and distinct and positively beguiling. He mustn’t waste it.
- Haidt used the following keywords to conduct a searching, scientific analysis of “New Atheist” books: absolute, absolutely, accura*, all, altogether, always, apparent, assur*, blatant*, certain*, clear, clearly, commit, commitment*, commits, committ*, complete, completed, completely, completes, confidence, confident, confidently, correct*, defined, definite, definitely, definitive*, directly, distinct*, entire*, essential, ever, every, everybod*, everything*, evident*, exact*, explicit*, extremely, fact, facts, factual*, forever, frankly, fundamental, fundamentalis*, fundamentally, fundamentals, guarant*, implicit*, indeed, inevitab*, infallib*, invariab*, irrefu*, must, mustnt, must’nt, mustn’t, mustve, must’ve, necessar*, never, obvious*, perfect*, positiv*, precis*, proof, prove*, pure*, sure*, total, totally, true, truest, truly, truth*, unambigu*, undeniab*, undoubt*, unquestion*, wholly.
I’d like to thank you for taking the time to review Free Will at such length. Publicly engaging me on this topic is certainly preferable to grumbling in private. Your writing is admirably clear, as always, which worries me in this case, because we appear to disagree about a great many things, including the very nature of our disagreement.
I want to begin by reminding our readers—and myself—that exchanges like this aren’t necessarily pointless. Perhaps you need no encouragement on that front, but I’m afraid I do. In recent years, I have spent so much time debating scientists, philosophers, and other scholars that I’ve begun to doubt whether any smart person retains the ability to change his mind. This is one of the great scandals of intellectual life: The virtues of rational discourse are everywhere espoused, and yet witnessing someone relinquish a cherished opinion in real time is about as common as seeing a supernova explode overhead. The perpetual stalemate one encounters in public debates is annoying because it is so clearly the product of motivated reasoning, self-deception, and other failures of rationality—and yet we’ve grown to expect it on every topic, no matter how intelligent and well-intentioned the participants. I hope you and I don’t give our readers further cause for cynicism on this front.
Unfortunately, your review of my book doesn’t offer many reasons for optimism. It is a strange document—avuncular in places, but more generally sneering. I think it fair to say that one could watch an entire season of Downton Abbey on Ritalin and not detect a finer note of condescension than you manage for twenty pages running.
I am not being disingenuous when I say this museum of mistakes is valuable; I am grateful to Harris for saying, so boldly and clearly, what less outgoing scientists are thinking but keeping to themselves. I have always suspected that many who hold this hard determinist view are making these mistakes, but we mustn’t put words in people’s mouths, and now Harris has done us a great service by articulating the points explicitly, and the chorus of approval he has received from scientists goes a long way to confirming that they have been making these mistakes all along. Wolfgang Pauli’s famous dismissal of another physicist’s work as “not even wrong” reminds us of the value of crystallizing an ambient cloud of hunches into something that can be shown to be wrong. Correcting widespread misunderstanding is usually the work of many hands, and Harris has made a significant contribution.
I hope you will recognize that your beloved Rapoport’s rules have failed you here. If you have decided, according to the rule, to first mention something positive about the target of your criticism, it will not do to say that you admire him for the enormity of his errors and the folly with which he clings to them despite the sterling example you’ve set in your own work. Yes, you may assert, “I am not being disingenuous when I say this museum of mistakes is valuable,” but you are, in truth, being disingenuous. If that isn’t clear, permit me to spell it out just this once: You are asking the word “valuable” to pass as a token of praise, however faint. But according to you, my book is “valuable” for reasons that I should find embarrassing. If I valued it as you do, I should rue the day I wrote it (as you would, had you brought such “value” into the world). And it would be disingenuous of me not to notice how your prickliness and preening appears: You write as one protecting his academic turf. Behind and between almost every word of your essay—like some toxic background radiation—one detects an explosion of professorial vanity.
And yet many readers, along with several of our friends and colleagues, have praised us for airing our differences in so civil a fashion—the implication being that religious demagogues would have declared mutual fatwas and shed each other’s blood. Well, that is a pretty low bar, and I don’t think we should be congratulated for having cleared it. The truth is that you and I could have done a much better job—and produced something well worth reading—had we explored the topic of free will in a proper conversation. Whether we called it a “conversation” or a “debate” would have been immaterial. And, as you know, I urged you to engage me that way on multiple occasions and up to the eleventh hour. But you insisted upon writing your review. Perhaps you thought that I was hoping to spare myself a proper defenestration. Not so. I was hoping to spare our readers a feeling of boredom that surpasseth all understanding.
As I expected, our exchange will now be far less interesting or useful than a conversation/debate would have been. Trading 10,000-word essays is simply not the best way to get to the bottom of things. If I attempt to correct every faulty inference and misrepresentation in your review, the result will be deadly to read. Nor will you be able to correct my missteps, as you could have if we were exchanging 500-word volleys. I could heap misconception upon irrelevancy for pages—as you have done—and there would be no way to stop me. In the end, our readers will be left to reconcile a book-length catalogue of discrepancies.
Let me give you an example, just to illustrate how tedious it is to untie these knots. You quote me as saying:
If determinism is true, the future is set—and this includes all our future states of mind and our subsequent behavior. And to the extent that the law of cause and effect is subject to indeterminism—quantum or otherwise—we can take no credit for what happens. There is no combination of these truths that seems compatible with the popular notion of free will.
You then announce that “the sentence about indeterminism is false”—a point you seek to prove by recourse to an old thought experiment involving a “space pirate” and a machine that amplifies quantum indeterminacy. After which, you lovingly inscribe the following epitaph onto my gravestone:
These are not new ideas. For instance I have defended them explicitly in 1978, 1984, and 2003. I wish Harris had noticed that he contradicts them here, and I’m curious to learn how he proposes to counter my arguments.
You see, dear reader, Harris hasn’t done his homework. What a pity…. But you have simply misread me, Dan—and that entire page in your review was a useless digression. I am not saying that the mere addition of indeterminism to the clockwork makes responsibility impossible. I am saying, as you have always conceded, that seeking to ground free will in indeterminism is hopeless, because truly random processes are precisely those for which we can take no responsibility. Yes, we might still express our beliefs and opinions while being gently buffeted by random events (as you show in your thought experiment), but if our beliefs and opinions were themselves randomly generated, this would offer no basis for human responsibility (much less free will). Bored yet?
You do this again and again in your review. And when you are not misreading me, you construct bad analogies—to sunsets, color vision, automobiles—none of which accomplish their intended purpose. Some are simply faulty (that is, they don’t run through); others make my point for me, demonstrating that you have missed my point (or, somehow, your own). Consider what you say about sunsets to show that free will should not be considered an illusion:
After all, most people used to believe the sun went around the earth. They were wrong, and it took some heavy lifting to convince them of this. Maybe this factoid is a reflection on how much work science and philosophy still have to do to give everyday laypeople a sound concept of free will…. When we found out that the sun does not revolve around the earth, we didn’t then insist that there is no such thing as the sun (because what the folk mean by “sun” is “that bright thing that goes around the earth”). Now that we understand what sunsets are, we don’t call them illusions. They are real phenomena that can mislead the naive.
Of course, the sun isn’t an illusion, but geocentrism is. Our native sense that the sun revolves around a stationary Earth is simply mistaken. And any “project of sympathetic reconstruction” (your compatibilism) with regard to this illusion would be just a failure to speak plainly about the facts. I have never disputed that mental phenomena such as thoughts, efforts, volition, reasoning, and so forth exist. These are the many “suns” of the mind that any scientific theory must conserve (modulo some clarifying surprises, as has happened for the concept of “memory”). But free will is like the geocentric illusion: It is the very thing that gets obliterated once we begin speaking in detail about the origins of our thoughts and actions. You’re not just begging the question here; you’re begging it with a sloppy analogy. The same holds for your reference to color vision (which I discussed in a previous essay).
And when you are not causing problems with your own analogies, you are distorting mine. For instance, you write that you were especially dismayed by the cover of my book, which depicts a puppet theater. This cover image is justified because I argue that each of us is moved by chance and necessity, just as a marionette is set dancing on its strings. But I never suggest that this is the same as being manipulated by a human puppeteer who overrides our actual beliefs and desires and obliges us to behave in ways we do not intend. You seem eager to draw this implication, however, and so you press on with an irrelevant discussion of game theory (another area in which you allege I haven’t done my homework). Again, I am left wishing we had had a conversation that would have prevented so many pedantic digressions.
In any case, I cannot bear to write a long essay that consists in my repeatedly taking your foot out of my mouth. Instead, I will do my best to drive to the core of our disagreement.
Let’s begin by noticing a few things we actually agree about: We agree that human thought and behavior are determined by prior states of the universe and its laws—and that any contributions of indeterminism are completely irrelevant to the question of free will. We also agree that our thoughts and actions in the present influence how we think and act in the future. We both acknowledge that people can change, acquire skills, and become better equipped to get what they want out of life. We know that there is a difference between a morally healthy person and a psychopath, as well as between one who is motivated and disciplined, and thus able to accomplish his aims, and one who suffers a terminal case of apathy or weakness of will. We both understand that planning and reasoning guide human behavior in innumerable ways and that an ability to follow plans and to be responsive to reasons is part of what makes us human. We agree about so many things, in fact, that at one point you brand me “a compatibilist in everything but name.” Of course, you can’t really mean this, because you go on to write as though I were oblivious to most of what human beings manage to accomplish. At some points you say that I’ve thrown the baby out with the bath; at others you merely complain that I won’t call this baby by the right name (“free will”). Which is it?
However, it seems to me that we do diverge at two points:
1. You think that compatibilists like yourself have purified the concept of free will by “deliberately using cleaned-up, demystified substitutes for the folk concepts.” I believe that you have changed the subject and are now ignoring the very phenomenon we should be talking about—the common, felt sense that I/he/she/you could have done otherwise (generally known as “libertarian” or “contra-causal” free will), with all its moral implications. The legitimacy of your attempting to make free will “presentable” by performing conceptual surgery on it is our main point of contention. Whether or not I can convince you of the speciousness of the compatibilist project, I hope we can agree in the abstract that there is a difference between thinking more clearly about a phenomenon and (wittingly or unwittingly) thinking about something else. I intend to show that you are doing the latter.
2. You believe that determinism at the microscopic level (as in the case of Austin’s missing his putt) is irrelevant to the question of human freedom and responsibility. I agree that it is irrelevant for many things we care about (it doesn’t obviate the distinction between voluntary and involuntary behavior, for instance), but it isn’t irrelevant in the way you suggest. And accepting incompatibilism has important intellectual and moral consequences that you ignore—the most important being, in my view, that it renders hatred patently irrational (while leaving love unscathed). If one is concerned about the consequences of maintaining a philosophical position, as I know you are, helping to close the door on human hatred seems far more beneficial than merely tinkering with a popular illusion.
Changing the Subject
We both know that the libertarian notion of free will makes no scientific or logical sense; you just doubt whether it is widespread among the folk—or you hope that it isn’t, or don’t much care whether it is (in truth, you are not very clear on this point). In defense of your insouciance, you cite a paper by Nahmias et al. It probably won’t surprise you that I wasn’t as impressed by this research as you were. Nahmias and his coauthors repeatedly worry that their experimental subjects didn’t really understand the implications of determinism—and on my reading, they had good reason to be concerned. In fact, this is one of those rare papers in which the perfunctory doubts the authors raise, simply to show that they have thought of everything, turn out to be far more compelling than their own interpretations of their data. More than anything, this research suggests that people find the idea of libertarian free will so intuitively compelling that it is very difficult to get them to think clearly about determinism. Of course, I agree that what people think and feel is an empirical question. But it seems to me that we know much more about the popular view of free will than you and Nahmias let on.
It is worth noting that the most common objection I’ve heard to my position on free will is some version of the following:
If there is no free will, why write books or try to convince anyone of anything? People will believe whatever they believe. They have no choice! Your position on free will is, therefore, self-refuting. The fact that you are trying to convince people of the truth of your argument proves that you think they have the very freedom that you deny them.
Granted, some confusion between determinism and fatalism (which you and I have both warned against) is probably operating here, but comments of this kind also suggest that people think they have control over what they believe, as if the experience of being convinced by evidence and argument were voluntary. Perhaps such people also believe that they have decided to obey the law of gravity rather than fly around at their pleasure—but I doubt it. An illusion about mental freedom seems to be very widespread. My argument is that such freedom is incompatible with any form of causation (deterministic or otherwise)—which, as you know, is not a novel view. But I also argue that it is incompatible with the actual character of our subjective experience. That is why I say that the illusion of free will is itself an illusion—which is another way of saying that if one really pays attention (and this is difficult), the illusion of free will disappears.
The popular, folk psychological sense of free will is a moment-to-moment experience, not a theory about the mind. It is primarily a first-person fact, not a third-person account of how human beings function. This distinction between first-person and third-person views was what I was trying to get at in the passage that seems to have mystified you (“I have thought long and hard about this passage, and I am still not sure I understand it…”). Everyone has a third-person picture of the human mind—some of us speak of neural circuits, some speak of souls—but the philosophical problem of free will arises from the fact that most people feel that they author their own thoughts and actions. It is very difficult to say what this feeling consists of or to untangle it from working memory, volition, motor planning, and the rest of what our minds are up to—but there can be little doubt that most people feel that they are the conscious source of their own thoughts and actions. Of course, you may wish to deny this very assertion, or believe it more parsimonious to say that we just don’t know how most people feel—and that might be a point worth discussing. But rather than deny the claim, you simply lose sight of it—shifting from first-person experiences to third-person accounts of phenomena that lie outside consciousness.
It is true, of course, that most educated people believe the whole brain is involved in making them who they are (indeed, you and I both believe this). But they experience only some of what goes on inside their brains. Contrary to what you suggest, I was not advancing a Cartesian theory of consciousness (a third-person view), or any “daft doctrine of [my] own devising.” I was simply drawing a line between what people experience and what they don’t (first-person). The moment you show that a person’s thoughts and actions were determined by events that he did not and could not see, feel, or anticipate, his third-person account of himself may remain unchanged (“Of course, I know that much of what goes on in my brain is unconscious, determined by genes, and so forth. So what?”), but his first-person sense of autonomy comes under immediate pressure—provided he is paying attention. As a matter of experience (first-person), there is a difference between being conscious of something and not being conscious of it. And if what a person is unconscious of are the antecedent causes of everything he thinks and does, this fact makes a mockery of the subjective freedom he feels he has. It is not enough at that point for him to simply declare theoretically (third-person) that these antecedent causes are “also me.”
Average Joe feels that he has free will (first-person) and doesn’t like to be told that it is an illusion. I say it is: Consider all the roots of your behavior that you cannot see or feel (first-person), cannot control (first-person), and did not summon into existence (first-person). You say: Nonsense! Average Joe contains all these causes. He is his genes and neurons too (third-person). This is where you put the rabbit in the hat.
Imagine that we live in a world where more or less everyone believes in the lost kingdom of Atlantis. You and your fellow compatibilists come along and offer comfort: Atlantis is real, you say. It is, in fact, the island of Sicily. You then go on to argue that Sicily answers to most of the claims people through the ages have made about Atlantis. Of course, not every popular notion survives this translation, because some beliefs about Atlantis are quite crazy, but those that really matter—or should matter, on your account—are easily mapped onto what is, in fact, the largest island in the Mediterranean. Your work is done, and now you insist that we spend the rest of our time and energy investigating the wonders of Sicily.
The truth, however, is that much of what causes people to be so enamored of Atlantis—in particular, the idea that an advanced civilization disappeared underwater—can’t be squared with our understanding of Sicily or any other spot on earth. So people are confused, and I believe that their confusion has very real consequences. But you rarely acknowledge the ways in which Sicily isn’t like Atlantis, and you don’t appear interested when those differences become morally salient. This is what strikes me as wrongheaded about your approach to free will.
For instance, ordinary people want to feel philosophically justified in hating evildoers and viewing them as the ultimate cause of their evil. This moral attitude is always vulnerable to our getting more information about causation—and in situations where the underlying causes of a person’s behavior become too clear, our feelings about their responsibility begin to shift. This is why I wrote that fully understanding the brain of a normal person would be analogous to finding an exculpatory tumor in it. I am not claiming that there is no difference between a normal person and one with impaired self-control. The former will be responsive to certain incentives and punishments, and the latter won’t. (And that is all the justification we need to resort to carrots and sticks or to lock incorrigibly dangerous people away forever.) But something in our moral attitude does change when we catch sight of these antecedent causes—and it should change. We should admit that a person is unlucky to be given the genes and life experience that doom him to psychopathy. Again, that doesn’t mean we can’t lock him up. But hating him is not rational, given a complete understanding of how he came to be who he is. Natural, yes; rational, no. Feeling compassion for him could be rational, however—in the same way that we could feel compassion for the six-year-old boy who was destined to become Jeffrey Dahmer. And while you scoff at “medicalizing” human evil, a complete understanding of the brain would do just that. Punishment is an extraordinarily blunt instrument. We need it because we understand so little about the brain, and our ability to influence it is limited. However, imagine that two hundred years in the future we really know what makes a person tick; Procrustean punishments won’t make practical sense, and they won’t make moral sense either. 
But you seem committed to the idea that certain people might actually deserve to be punished—if not precisely for the reasons that common folk imagine, nevertheless for reasons that have little or nothing to do with the good consequences that such punishments might have, all things considered. In other words, your compatibilism seems an attempt to justify the conventional notion of blame, which my view denies. This is a difference worth focusing on.
Let’s examine Austin’s example of his missed putt:
Consider the case where I miss a very short putt and kick myself because I could have holed it. It is not that I should have holed it if I had tried: I did try, and missed. It is not that I should have holed it if conditions had been different: that might of course be so, but I am talking about conditions as they precisely were, and asserting that I could have holed it. There is the rub. Nor does ‘I can hole it this time’ mean that I shall hole it this time if I try or if anything else; for I may try and miss, and yet not be convinced that I could not have done it; indeed, further experiments may confirm my belief that I could have done it that time, although I did not. (J. L. Austin, “Ifs and Cans,” in Philosophical Papers, ed. J. Urmson and G. Warnock. Oxford: Clarendon Press, 1961.)
This is a good place to start, because you say the following in your review:
I consider Austin’s mistake to be the central core of the ongoing confusion about free will; if you look at the large and intricate philosophical literature about incompatibilism, you will see that just about everyone assumes, without argument, that it is not a mistake.
I am happy to take the bait. I see no problem with using Austin’s example to support incompatibilism. I should emphasize, however, that I am discussing only the implications of Austin’s point for an account of free will, not how it functions in his original essay (which, as you know, was an analysis of the relationship between the terms “if” and “can,” not a sustained argument against free will).
Let’s make sure you and I are standing on the same green: We agree that a human being, whatever his talents, training, and aspirations, will think, intend, and behave exactly as he does given the totality of conditions in the moment. That is, whatever his ability as a golfer, Austin would miss that same putt a trillion times in a row—provided that every atom and charge in the universe was exactly as it had been the first time he missed it. You think this fact (we can call it determinism, as you do, but it includes the contributions of indeterminism as well, provided they remain the same) says nothing about free will. You think the fact that Austin could make nearly identical putts in other moments—with his brain in a slightly different state, the green a touch slower, and so forth—is all that matters. I agree that it is what matters when assessing his abilities as a golfer: Here, we don’t care about the single NMDA receptor that screwed up his swing on one particular putt; we care about the statistics of his play, round after round. But to speak clearly and honestly (that is, scientifically) about the actual causes of what happens in the world in each moment, we must focus on the particular.
What are we really saying when we claim that Austin could have made that putt (the one he missed)? As you point out, we aren’t actually referencing that putt at all. We are saying that Austin has made similar putts in the past and we can count on him to do so in the future—provided that he tries, doesn’t suffer some neurological injury, and so forth. However, we are also saying that Austin would have made this putt had something not gotten in his way. He had the general ability, after all, so something went wrong.
Then why did Austin miss his putt? Because some condition necessary for his making it was absent. What if that condition was sufficient effort, of the sort that he was generally capable of making? Why didn’t he make that effort? The answer is the same: Because some condition necessary for his making it was absent. From a scientific perspective, his failure to try is just another missed putt. Austin tried precisely as hard as he did. Next time he might try harder. But this time—with the universe and his brain exactly as they were—he couldn’t have tried in any other way.
To say that Austin really should have made that putt or tried harder is just a way of admonishing him to put forth greater effort in the future. We are not offering an account of what actually happened (his failure to sink his putt or his failure to try). You and I agree that such admonishments have effects and that these effects are perfectly in harmony with the truth of determinism. There is, in fact, nothing about incompatibilism that prevents us from urging one another (and ourselves) to do things differently in the future, or from recognizing that such exhortations often work. The things we say to one another (and to ourselves) are simply part of the chain of causes that determine how we think and behave.
But can we blame Austin for missing his putt? No. Can we blame him for not trying hard enough? Again, the answer is no—unless blaming him were just a way of admonishing him to try harder in the future. For us to consider him truly responsible for missing the putt or for failing to try, we would need to know that he could have acted other than he did. Yes, there are two readings of “could”—and you find only one of them interesting. But they are both interesting, and the one you ignore is morally consequential. One reading refers to a person’s (or a car’s, in your example) general capacities. Could Austin have sunk his putt, as a general matter, in similar situations? Yes. Could my car go 80 miles per hour, though I happen to be driving at 40? Yes. The other reading is one you consider to be a red herring. Could Austin have sunk that very putt, the one he missed? No. Could he have tried harder? No. His failure on both counts was determined by the state of the universe (especially his nervous system). Of course, it isn’t irrational to treat him as someone who has the general ability to make putts of that sort, and to urge him to try harder in the future—and it would be irrational to admonish a person who lacked such ability. You are right to believe that this distinction has important moral implications: Do we demand that mosquitoes and sharks behave better than they do? No. We simply take steps to protect ourselves from them. The same futility prevails with certain people—psychopaths and others whom we might deem morally insane. It makes sense to treat people who have the general capacity to behave well but occasionally lapse differently from those who have no such capacity and on whom any admonishment would be wasted. 
You are right to think that these distinctions do not depend on “absolute free will.” But this doesn’t mean nothing changes once we realize that a person could never have made the putt he just missed, or tried harder than he did, or refrained from killing his neighbor with a hammer.
Holding people responsible for their past actions makes no sense apart from the effects that doing so will have on them and the rest of society in the future (e.g. deterrence, rehabilitation, keeping dangerous people off our streets). The notion of moral responsibility, therefore, is forward-looking. But it is also paradoxical. People who have the most ability (self-control, opportunity, knowledge, etc.) would seem to be the most blameworthy when they fail or misbehave. For instance, when Tiger Woods misses a three-foot putt, there is a much greater temptation to say that he really should have made it than there is in the case of an average golfer. But Woods’s failure is actually more anomalous. Something must have gone wrong if a person of his ability missed so easy a putt. And he wouldn’t stand to benefit (much) from being admonished to try harder in the future. So in some ways, holding a person responsible for his failures seems to make even less sense the more worthy of responsibility he becomes in the conventional sense.
We agree that given past states of the universe and its laws, we can only do what we in fact do, and not do otherwise. You don’t think this truth has many psychological or moral consequences. In fact, you describe the lawful propagation of certain types of events as a form of “freedom.” But consider the emergence of this freedom in any specific human being: It is fully determined by genes and the environment (add as much randomness as you like). Imagine the first moment it comes online—in, say, the brain of a four-year-old child. Consider this first, “free” increment of executive control to emerge from the clockwork. It will emerge precisely to the degree that it does, and when, according to causes that have nothing to do with this person’s freedom. And it will perpetuate its effects on future states of his nervous system in total conformity to natural laws. In each instant, Austin will make his putt or miss it; and he will try his best or not. Yes, he is “free” to do whatever it is he does based on past states of the universe. But the same could be said of a chimp or a computer—or, indeed, a line of dominoes. Perhaps such mechanical equivalences don’t bother you, but they might come as a shock to those who think that you have rescued their felt sense of autonomy from the gears of determinism.
In your review, you called my book a “political tract.” The irony is that your argument against incompatibilism seems almost entirely political. At times you write as though nothing is at stake apart from the future of the terms free will and compatibilism. More generally, however, you seem to think that the consequences of taking incompatibilism seriously will be pernicious:
If nobody is responsible, not really, then not only should the prisons be emptied, but no contract is valid, mortgages should be abolished, and we can never hold anybody to account for anything they do. Preserving “law and order” without a concept of real responsibility is a daunting task.
These concerns, while not irrational, have nothing to do with the philosophical or scientific merits of the case. They also arise out of a failure to understand the practical consequences of my view. I am no more inclined to release dangerous criminals back onto our streets than you are.
In my book, I argue that an honest look at the causal underpinnings of human behavior, as well as at one’s own moment-to-moment experience, reveals free will to be an illusion. (I would say the same about the conventional sense of “self,” but that requires more discussion, and it is the topic of my next book.) I also claim that this fact has consequences—good ones, for the most part—and that is another reason it is worth exploring. But I have not argued for my position primarily out of concern for the consequences of accepting it. And I believe you have.
Of course, I can’t quite blame you for missing that putt, Dan. But I can admonish you to be more careful in the future.
- E. Nahmias, S. Morris, T. Nadelhoffer & J. Turner. 2005. “Surveying Freedom: Folk Intuitions about Free Will and Moral Responsibility,” Philosophical Psychology, 18, pp. 561–584.
- Reading the rest of Austin’s “notorious” footnote, I’m not sure he made the mistake you attribute to him. The very next sentence following your partial quotation reads, “But if I tried my hardest, say, and missed, surely there must have been something that caused me to fail, that made me unable to succeed? So that I could not have holed it.” To my eye, this closes the door on his alleged confusion about what subsequent experiments would have shown.
- You consistently label me a “hard determinist,” which is a little misleading. The truth is that I am agnostic as to whether determinism is strictly true (though it must be approximately true, as far as human beings are concerned). Insofar as it is, free will is impossible. But indeterminism offers no relief. My actual view is that free will is conceptually incoherent and both subjectively and objectively nonexistent. Causation, whether deterministic or random, offers no basis for free will.
Daniel Dennett and I agree about many things, but we do not agree about free will. Dan has been threatening to set me straight on this topic for several years now, and I have always encouraged him to do so, preferably in public and in writing. He has finally produced a review of my book Free Will that is nearly as long as the book itself. I am grateful to Dan for taking the time to engage me this fully, and I will respond in the coming weeks.—SH
Daniel C. Dennett is the Austin B. Fletcher Professor of Philosophy and Co-Director of the Center for Cognitive Studies at Tufts University. He is the author of Breaking the Spell, Freedom Evolves, Darwin’s Dangerous Idea, Consciousness Explained, and many other books. He has received two Guggenheim Fellowships, a Fulbright Fellowship, and a Fellowship at the Center for Advanced Studies in Behavioral Science. He was elected to the American Academy of Arts and Sciences in 1987. His latest book, written with Linda LaScola, is Caught in the Pulpit: Leaving Belief Behind.
This essay was first published at Naturalism.org and has been crossposted here with permission.
Sam Harris’s Free Will (2012) is a remarkable little book, engagingly written and jargon-free, appealing to reason, not authority, and written with passion and moral seriousness. This is not an ivory tower technical inquiry; it is in effect a political tract, designed to persuade us all to abandon what he considers to be a morally pernicious idea: the idea of free will. If you are one of the many who have been brainwashed into believing that you have—or rather, are—an (immortal, immaterial) soul who makes all your decisions independently of the causes impinging on your material body and especially your brain, then this is the book for you. Or, if you have dismissed dualism but think that what you are is a conscious (but material) ego, a witness that inhabits a nook in your brain and chooses, independently of external causation, all your voluntary acts, again, this book is for you. It is a fine “antidote,” as Paul Bloom says, to this incoherent and socially malignant illusion. The incoherence of the illusion has been demonstrated time and again in rather technical work by philosophers (in spite of still finding supporters in the profession), but Harris does a fine job of making this apparently unpalatable fact accessible to lay people. Its malignance is due to its fostering the idea of Absolute Responsibility, with its attendant implications of what we might call Guilt-in-the-eyes-of-God for the unfortunate sinners amongst us and, for the fortunate, the arrogant and self-deluded idea of Ultimate Authorship of the good we do. We take too much blame, and too much credit, Harris argues. We, and the rest of the world, would be a lot better off if we took ourselves—our selves—less seriously. We don’t have the kind of free will that would ground such Absolute Responsibility for either the harm or the good we cause in our lives.
All this is laudable and right, and vividly presented, and Harris does a particularly good job getting readers to introspect on their own decision-making and notice that it just does not conform to the fantasies of this all too traditional understanding of how we think and act. But some of us have long recognized these points and gone on to adopt more reasonable, more empirically sound, models of decision and thought, and we think we can articulate and defend a more sophisticated model of free will that is not only consistent with neuroscience and introspection but also grounds a (modified, toned-down, non-Absolute) variety of responsibility that justifies both praise and blame, reward and punishment. We don’t think this variety of free will is an illusion at all, but rather a robust feature of our psychology and a reliable part of the foundations of morality, law and society. Harris, we think, is throwing out the baby with the bathwater.
He is not alone among scientists in coming to the conclusion that the ancient idea of free will is not just confused but also a major obstacle to social reform. His brief essay is, however, the most sustained attempt to develop this theme, which can also be found in remarks and essays by such heavyweight scientists as the neuroscientists Wolf Singer and Chris Frith, the psychologists Steven Pinker and Paul Bloom, the physicists Stephen Hawking and Albert Einstein, and the evolutionary biologists Jerry Coyne and (when he’s not thinking carefully) Richard Dawkins.
The book is, thus, valuable as a compact and compelling expression of an opinion widely shared by eminent scientists these days. It is also valuable, as I will show, as a veritable museum of mistakes, none of them new and all of them seductive—alluring enough to lull the critical faculties of this host of brilliant thinkers who do not make a profession of thinking about free will. And, to be sure, these mistakes have also been made, sometimes for centuries, by philosophers themselves. But I think we have made some progress in philosophy of late, and Harris and others need to do their homework if they want to engage with the best thought on the topic.
I am not being disingenuous when I say this museum of mistakes is valuable; I am grateful to Harris for saying, so boldly and clearly, what less outgoing scientists are thinking but keeping to themselves. I have always suspected that many who hold this hard determinist view are making these mistakes, but we mustn’t put words in people’s mouths, and now Harris has done us a great service by articulating the points explicitly, and the chorus of approval he has received from scientists goes a long way to confirming that they have been making these mistakes all along. Wolfgang Pauli’s famous dismissal of another physicist’s work as “not even wrong” reminds us of the value of crystallizing an ambient cloud of hunches into something that can be shown to be wrong. Correcting widespread misunderstanding is usually the work of many hands, and Harris has made a significant contribution.
The first parting of opinion on free will is between compatibilists and incompatibilists. The latter say (with “common sense” and a tradition going back more than two millennia) that free will is incompatible with determinism, the scientific thesis that there are causes for everything that happens. Incompatibilists hold that unless there are “random swerves” that disrupt the iron chains of physical causation, none of our decisions or choices can be truly free. Being caused means not being free—what could be more obvious? The compatibilists deny this; they have argued, for centuries if not millennia, that once you understand what free will really is (and must be, to sustain our sense of moral responsibility), you will see that free will can live comfortably with determinism—if determinism is what science eventually settles on.
Incompatibilists thus tend to pin their hopes on indeterminism, and hence were much cheered by the emergence of quantum indeterminism in 20th century physics. Perhaps the brain can avail itself of undetermined quantum swerves at the sub-atomic level, and thus escape the shackles of physical law! Or perhaps there is some other way our choices could be truly undetermined. Some have gone so far as to posit an otherwise unknown (and almost entirely unanalyzable) phenomenon called agent causation, in which free choices are caused somehow by an agent, but not by any event in the agent’s history. One exponent of this position, Roderick Chisholm, candidly acknowledged that on this view every free choice is “a little miracle”—which makes it clear enough why this is a school of thought endorsed primarily by deeply religious philosophers and shunned by almost everyone else. Incompatibilists who think we have free will, and therefore determinism must be false, are known as libertarians (which has nothing to do with the political view of the same name). Incompatibilists who think that all human choices are determined by prior events in their brains (which were themselves no doubt determined by chains of events arising out of the distant past) conclude from this that we can’t have free will, and, hence, are not responsible for our actions.
This concern for varieties of indeterminism is misplaced, argue the compatibilists: free will is a phenomenon that requires neither determinism nor indeterminism; the solution to the problem of free will lies in realizing this, not banking on the quantum physicists to come through with the right physics—or a miracle. Compatibilism may seem incredible on its face, or desperately contrived, some kind of a trick with words, but not to philosophers. Compatibilism is the reigning view among philosophers (just over 59%, according to the 2009 PhilPapers survey), with libertarians coming second at 13% and hard determinists at only 12%. It is striking, then, that all the scientists just cited have landed on the position rejected by almost nine out of ten philosophers, but not so surprising when one considers that these scientists hardly ever consider the compatibilist view or the reasons in its favor.
Harris has considered compatibilism, at least cursorily, and his opinion of it is breathtakingly dismissive: After acknowledging that it is the prevailing view among philosophers (including his friend Daniel Dennett), he asserts that “More than in any other area of academic philosophy, the result resembles theology.” This is a low blow, and worse follows: “From both a moral and a scientific perspective, this seems deliberately obtuse.” (p. 18) I would hope that Harris would pause at this point to wonder—just wonder—whether maybe his philosophical colleagues had seen some points that had somehow escaped him in his canvassing of compatibilism. As I tell my undergraduate students, whenever they encounter in their required reading a claim or argument that seems just plain stupid, they should probably double check to make sure they are not misreading the “preposterous” passage in question. It is possible that they have uncovered a howling error that has somehow gone unnoticed by the profession for generations, but not very likely. In this instance, the chances that Harris has underestimated and misinterpreted compatibilism seem particularly good, since the points he defends later in the book agree right down the line with compatibilism; he himself is a compatibilist in everything but name!
Seriously, his main objection to compatibilism, issued several times, is that what compatibilists mean by “free will” is not what everyday folk mean by “free will.” Everyday folk mean something demonstrably preposterous, but Harris sees the effort by compatibilists to make the folks’ hopeless concept of free will presentable as somehow disingenuous, unmotivated spin-doctoring, not the project of sympathetic reconstruction the compatibilists take themselves to be engaged in. So it all comes down to who gets to decide how to use the term “free will.” Harris is a compatibilist about moral responsibility and the importance of the distinction between voluntary and involuntary actions, but he is not a compatibilist about free will since he thinks “free will” has to be given the incoherent sense that emerges from uncritical reflection by everyday folk. He sees quite well that compatibilism is “the only philosophically respectable way to endorse free will” (p. 16) but adds:
However, the ‘free will’ that compatibilists defend is not the free will that most people feel they have. (p. 16)
First of all, he doesn’t know this. This is a guess, and suitably expressed questionnaires might well prove him wrong. That is an empirical question, and a thoughtful pioneering attempt to answer it suggests that Harris’s guess is simply mistaken. The newly emerging field of experimental philosophy (or “X-phi”) has a rather unprepossessing track record to date, but these are early days, and some of the work has yielded interesting results that certainly defy complacent assumptions common among philosophers. The study by Nahmias et al. 2005 found substantial majorities (between 60 and 80%) in agreement with propositions that are compatibilist in outlook, not incompatibilist.
Harris’s claim that the folk are mostly incompatibilists is thus dubious on its face, and even if it is true, maybe all this shows is that most people are suffering from a sort of illusion that could be replaced by wisdom. After all, most people used to believe the sun went around the earth. They were wrong, and it took some heavy lifting to convince them of this. Maybe this factoid is a reflection on how much work science and philosophy still have to do to give everyday laypeople a sound concept of free will. We’ve not yet succeeded in getting them to see the difference between weight and mass, and Einsteinian relativity still eludes most people. When we found out that the sun does not revolve around the earth, we didn’t then insist that there is no such thing as the sun (because what the folk mean by “sun” is “that bright thing that goes around the earth”). Now that we understand what sunsets are, we don’t call them illusions. They are real phenomena that can mislead the naive.
To see the context in which Harris’s criticism plays out, consider a parallel. The folk concept of mind is a shambles, for sure: dualistic, scientifically misinformed and replete with miraculous features—even before we get to ESP and psychokinesis and poltergeists. So when social scientists talk about beliefs or desires and cognitive neuroscientists talk about attention and memory they are deliberately using cleaned-up, demystified substitutes for the folk concepts. Is this theology, is this deliberately obtuse, countenancing the use of concepts with such disreputable ancestors? I think not, but the case can be made (there are mad dog reductionist neuroscientists and philosophers who insist that minds are illusions, pains are illusions, dreams are illusions, ideas are illusions—all there is is just neurons and glia and the like). The same could be said about color, for example. What everyday folk think colors are—if you pushed them beyond their everyday contexts in the paint store and picking out their clothes—is hugely deluded; that doesn’t mean that colors are an illusion. They are real in spite of the fact that, for instance, atoms aren’t colored.
Here are some more instances of Harris’s move:
We do not have the freedom we think we have. (p. 5)
Who’s we? Maybe many people, maybe most, think that they have a kind of freedom that they don’t and can’t have. But that settles nothing. There may be other, better kinds of freedom that people also think they have, and that are worth wanting (Dennett, 1984).
We do not know what we intend to do until the intention itself arises. [True, but so what?] To understand this is to realize that we are not the authors of our thoughts and actions in the way that people generally suppose [my italics]. (p. 13)
Again, so what? Maybe we are authors of our thoughts and actions in a slightly different way. Harris doesn’t even consider that possibility (since that would require taking compatibilist “theology” seriously).
If determinism is true, the future is set—and this includes all our future states of mind and our subsequent behavior. And to the extent that the law of cause and effect is subject to indeterminism—quantum or otherwise—we can take no credit for what happens. There is no combination of these truths that seem compatible with the popular notion of free will [my italics]. (p. 30)
Again, the popular notion of free will is a mess; we knew that long before Harris sat down to write his book. He needs to go after the attempted improvements, and it cannot be part of his criticism that they are not the popular notion.
There is also another problem with this paragraph: the sentence about indeterminism is false:
And to the extent that the law of cause and effect is subject to indeterminism—quantum or otherwise—we can take no credit for what happens.
Here is a counterexample, contrived, but highlighting the way indeterminism could infect our actions and still leave us responsible (a variant of an old—1978—counterexample of mine):
You must correctly answer three questions to save the world from a space pirate, who provides you with a special answering gadget. It has two buttons marked YES and NO and two foot pedals marked YES and NO. A sign on the gadget lights up after every question: “Use the buttons” or “Use the pedals.” You are asked “Is Chicago the capital of Illinois?”; the sign says “Use the buttons,” and you press the No button with your finger. Then you are asked “Are dugongs mammals?”; the sign says “Use the buttons,” and you press the Yes button with your finger. Finally you are asked “Are proteins made of amino acids?” and the sign says “Use the pedals,” so you reach out with your foot and press the Yes pedal. A roar of gratitude goes up from the crowd. You’ve saved the world, thanks to your knowledge and responsible action! But all three actions were unpredictable by Laplace’s demon, because whether the sign said “Use the buttons” or “Use the pedals” was caused by a quantum random event. In a less obvious way, random perturbations could infect (without negating) your every deed. The tone of your voice when you give your evidence could be tweaked up or down, the pressure of your finger as you pull the trigger could be tweaked greater or lesser, and so forth, without robbing you of responsibility. Brains are, in all likelihood, designed by natural selection to absorb random fluctuations without being seriously diverted by them—just as computers are. But that means that randomness need not destroy the rationality, the well-governedness, the sense-making integrity of your control system. Your brain may even exploit randomness in a variety of ways to enhance its heuristic search for good solutions to problems.
These are not new ideas. For instance, I have defended them explicitly in 1978, 1984, and 2003. I wish Harris had noticed that he contradicts them here, and I’m curious to learn how he proposes to counter my arguments.
Another mistake he falls for—in very good company—is the mistake the great J. L. Austin makes in his notorious footnote about his missed putt. First Austin’s version, and my analysis of the error, and then Harris’s version.
Consider the case where I miss a very short putt and kick myself because I could have holed it. It is not that I should have holed it if I had tried: I did try, and missed. It is not that I should have holed it if conditions had been different: that might of course be so, but I am talking about conditions as they precisely were [my italics], and asserting that I could have holed it. There is the rub. Nor does ‘I can hole it this time’ mean that I shall hole it this time if I try or if anything else; for I may try and miss, and yet not be convinced that I could not have done it; indeed, further experiments may confirm my belief that I could have done it that time [my italics], although I did not. (Austin 1961: 166. [“Ifs and Cans,” in Austin, Philosophical Papers, edited by J. Urmson and G. Warnock, Oxford, Clarendon Press.])
Austin claims to be talking about conditions as they precisely were, but if so, then further experiments could not confirm his belief. Presumably he has in mind something like this: he could line up ten “identical” putts on the same green and, say, sink nine out of ten. This would show, would it not, that he could have made that putt? Yes, to the satisfaction of almost everybody, but No, if he means under conditions “as they precisely were,” for conditions were subtly different in every subsequent putt—the sun a little lower in the sky, the green a little drier or moister, the temperature or wind direction ever so slightly different, Austin himself older and maybe wiser, or maybe more tired, or maybe more relaxed. This variation is not a bug to be eliminated from such experiments, but a feature without which experiments could not show that Austin “could have done otherwise,” and this is precisely the elbow room we need to see that “could have done otherwise” is perfectly compatible with determinism, because it never means, in real life, what philosophers have imagined it means: replay exactly the same “tape” and get a different result. Not only can such an experiment never be done; if it could, it wouldn’t show what needed showing: something about Austin’s ability as a golfer, which, like all abilities, needs to be demonstrated to be robust under variation.
Here is Harris’s version of the same mistake:
To say that they were free not to rape and murder is to say that they could have resisted the impulse to do so (or could have avoided feeling such an impulse altogether)—with the universe, including their brains, in precisely the same state it was in at the moment they committed their crimes. (p. 17)
Just not true. If we are interested in whether somebody has free will, it is some kind of ability that we want to assess, and you can’t assess any ability by “replaying the tape.” (See my extended argument to this effect in Freedom Evolves, 2003.) The point was made long ago by A. M. Honoré in his classic paper “Can and Can’t,” in Mind, 1964, and more recently deeply grounded in Judea Pearl’s Causality: Models, Reasoning and Inference (Cambridge University Press, 2000). This is as true of the abilities of automobiles as of people. Suppose I am driving along at 60 MPH and am asked if my car can also go 80 MPH. Yes, I reply, but not in precisely the same conditions; I have to press harder on the accelerator. In fact, I add, it can also go 40 MPH, but not with conditions precisely as they are. Replay the tape till eternity, and it will never go 40 MPH in just these conditions. So if you want to know whether some rapist/murderer was “free not to rape and murder,” don’t distract yourself with fantasies about determinism and rewinding the tape; rely on the sorts of observations and tests that everyday folk use to confirm and disconfirm their verdicts about who could have done otherwise and who couldn’t.
One of the effects of Harris’s misconstruing compatibilism is that when he turns to the task of avoiding the dire conclusions of the hard determinists, he underestimates his task. At the end of the book, he gets briefly concessive, throwing a few scraps to the opposition:
And it is wise to hold people responsible for their actions when doing so influences their behavior and brings benefit to society. But this does not mean that we must be taken in by the illusion of free will. We need only acknowledge that efforts matter and that people can change. We do not change ourselves, precisely—because we have only ourselves with which to do the changing—but we continually influence, and are influenced by, the world around us and the world within us. It may seem paradoxical to hold people responsible for what happens in their corner of the universe, but once we break the spell of free will, we can do this precisely to the degree that it is useful. Where people can change, we can demand that they do so. Where change is impossible, or unresponsive to demands, we can chart some other course. (p. 63)
Harris should take more seriously the various tensions he sets up in this passage. It is wise to hold people responsible, he says, even though they are not responsible, not really. But we don’t hold everybody responsible; as he notes, we excuse those who are unresponsive to demands, or in whom change is impossible. That’s an important difference, and it is based on the different abilities or competences that people have. Some people (are determined to) have the abilities that justify our holding them responsible, and some people (are determined to) lack those abilities. But determinism doesn’t do any work here; in particular it doesn’t disqualify those we hold responsible from occupying that role. In other words, real responsibility, the kind the everyday folk think they have (if Harris is right), is strictly impossible; but when those same folk wisely and justifiably hold somebody responsible, that isn’t real responsibility!
And what is Harris saying about whether we can change ourselves? He says we can’t change ourselves “precisely” but we can influence (and hence change) others, and they can change us. But then why can’t we change ourselves by getting help from others to change us? Why, for that matter, can’t we do to ourselves what we do to those others, reminding ourselves, admonishing ourselves, reasoning with ourselves? It does work, not always but enough to make it worth trying. And notice: if we do things to influence and change others, and thereby turn them into something bad—encouraging their racist or violent tendencies, for instance, or inciting them to commit embezzlement—we may be held responsible for this socially malign action. (Think of the drunk driving laws that now hold the bartender or the party host partly responsible for the damage done.) But then by the same reasoning we can justifiably be held responsible for influencing ourselves, for good or ill. We can take some credit for any improvements we achieve in others—or ourselves—and we can share the blame for any damage we do to others or ourselves.
There are complications with all this, but Harris doesn’t even look at the surface of these issues. For instance, our capacities to influence ourselves are themselves only partly the result of earlier efforts at self-improvement in which we ourselves played a major role. It takes a village to raise a child, as Hillary Clinton has observed. In the end, if we trace back far enough to our infancy or beyond, we arrive at conditions that we were just lucky (or unlucky) to be born with. This undeniable fact is not the disqualifier of responsibility that Harris and others assume. It disqualifies us for “Ultimate” responsibility, which would require us to be—like God!—causa sui, the original cause of ourselves, as Galen Strawson has observed, but this is nonsense. Our lack of Ultimate responsibility is not a moral blemish; if the discovery of this lack motivates some to reform our policies of reward and punishment, that is a good result, but it is hardly compelled by reason.
This emerging idea, that we can justifiably be held to be the authors (if not the Authors) of not only our deeds but the character from which our deeds flow, undercuts much of the rhetoric in Harris’s book. Harris is the author of his book; he is responsible for both its virtues, for which he deserves thanks, and its vices, for which he may justifiably be criticized. But then why can we not generalize this point to Harris himself, and rightly hold him at least partly responsible for his character since it too is a product—with help from others, of course—of his earlier efforts? Suppose he replied that he is not really the author of Free Will. At what point do we get to use Harris’s criticism against his own claims? Harris might claim that he is not really responsible, that he isn’t really the author of his own book, but that isn’t what the folk would say. The folk believe in a kind of responsibility that is exemplified by Harris’s authorship. Harris would have distorted the folk notion of responsibility as much as, if not more than, compatibilists have distorted the folk notion of free will.
Harris opens his book with the example of two murderous psychopaths, Hayes and Komisarjevsky, who committed unspeakable atrocities. One has shown remorse; the other reports having been abused as a child.
Whatever their conscious motives, these men cannot know why they are as they are. Nor can we account for why we are not like them.
Really? I think we can. The sentence is ambiguous, in fact. Harris knows full well that we can provide detailed and empirically supported accounts of why normal, law-abiding people who would never commit those atrocities emerge by the millions from all sorts of backgrounds, and why these psychopaths are different. But he has a different question in mind: why we—you and I—are in the fortunate, normal class instead of having been doomed to psychopathy. A different issue, but also an irrelevant, merely metaphysical issue. (Cf. “Why was I born in the 20th century, and not during the Renaissance? We’ll never know!”)
The rhetorical move here is well-known, but indefensible. If you’re going to raise these horrific cases, it behooves you to consider that they might be cases of pathology, as measured against (moral) health. Lumping the morally competent with the morally incompetent and then saying “there really is no difference between them, is there?” is a move that needs support, not something that can be done by assumption or innuendo.
I cannot take credit for the fact that I don’t have the soul of a psychopath. (p. 4)
True—and false. Harris can’t take credit for the luck of his birth, his having had a normal moral education—that’s just luck—but those born thus lucky are informed that they have a duty or obligation to preserve their competence, and grow it, and educate themselves, and Harris has responded admirably to those incentives. He can take credit, not Ultimate credit, whatever that might be, but partial credit, for husbanding the resources he was endowed with. As he says, he is just lucky not to have been born with Komisarjevsky’s genes and life experiences. If he had been, he’d have been Komisarjevsky!
A similar difficulty infects his claim that there is no difference between an act caused by a brain tumor and an act caused by a belief (which is just another brain state, after all).
But a neurological disorder appears to be just a special case of physical events giving rise to thoughts and actions. Understanding the neurophysiology of the brain, therefore, would seem to be as exculpatory as finding a tumor in it. (p. 5)
Notice the use of “appears” and “seem” here. Replace them both with “is” and ask if he’s made the case. (In addition to the “surely”-alarm I recommend all readers install in their brains (2013), a “seems”-alarm will pick up lots of these slippery places where philosophers defer argument where argument is called for.)
Even the simplest and most straightforward of Harris’s examples wilt under careful scrutiny:
Did I consciously choose coffee over tea? No. The choice was made for me by events in my brain that I, as the conscious witness of my thoughts and actions, could not inspect or influence. (p. 7)
Not so. He can influence those internal, unconscious actions—by reminding himself, etc. He just can’t influence them at the moment they are having their effect on his choice. (He also can’t influence the unconscious machinery that determines whether he returns a tennis serve with a lob or a hard backhand once the serve is on its way, but that doesn’t mean his tennis strokes are involuntary or outside his—indirect—control. At one point he says “If you don’t know what your soul is going to do, you are not in control.” (p. 12) Really? When you drive a car, are you not in control? You know “your soul” is going to do the right thing, whatever in the instant it turns out to be, and that suffices to demonstrate to you, and the rest of us, that you are in control. Control doesn’t get any more real than that.)
Harris ignores the reflexive, repetitive nature of thinking. My choice at time t can influence my choice at time t’ which can influence my choice at time t”. How? My choice at t can have among its effects the biasing of settings in my brain (which I cannot directly inspect) that determine (I use the term deliberately) my choice at t’. I can influence my choice at t’. I influenced it at time t (without “inspecting” it). Like many before him, Harris shrinks the me to a dimensionless point, “the witness” who is stuck in the Cartesian Theater awaiting the decisions made elsewhere. That is simply a bad theory of consciousness.
I, as the conscious witness of my experience, no more initiate events in my prefrontal cortex than I cause my heart to beat. (p. 9)
If this isn’t pure Cartesianism, I don’t know what it is. His prefrontal cortex is part of the I in question. Notice that if we replace the “conscious witness” with “my brain” we turn an apparent truth into an obvious falsehood: “My brain can no more initiate events in my prefrontal cortex than it can cause my heart to beat.”
There are more passages that exhibit this curious tactic of heaping scorn on daft doctrines of his own devising while ignoring reasonable compatibilist versions of the same ideas, but I’ve given enough illustrations, and the rest are readily identifiable once you see the pattern. Harris clearly thinks compatibilism is not worth his attention (so “deliberately obtuse” is it), but after such an indictment, he had better come up with some impressive criticisms. His main case against compatibilism—aside from the points above that I have already criticized—consists of three rhetorical questions lined up in a row (pp. 18-19). Each one collapses on closer inspection. As I point out in Intuition Pumps and Other Tools for Thinking, rhetorical questions, which are stand-ins for reductio ad absurdum arguments so obvious that they need not be spelled out, should always be scrutinized as likely weak spots in arguments. I offer Harris’s trio as exhibits A, B, and C:
(A) You want to finish your work, but you are also inclined to stop working so that you can play with your kids. You aspire to quit smoking, but you also crave another cigarette. You are struggling to save money, but you are also tempted to buy a new computer. Where is the freedom when one of these opposing desires inexplicably [my italics] triumphs over its rival?
But no compatibilist has claimed (so far as I know) that our free will is absolute and trouble-free. On the contrary there is a sizable and fascinating literature on the importance of the various well-known ways in which we respond to such looming cases of “weakness of will,” from which we all suffer. When one desire triumphs, this is not usually utterly inexplicable, but rather the confirmable result of efforts of self-manipulation and self-education, based on empirical self-exploration. We learn something about what makes us tick—not usually in neuroscientific terms, but rather in terms of folk psychology—and design a strategy to correct the blind spots we find, the biases we identify. That practice undeniably occurs, and undeniably works to a certain extent. We can improve our self-control, and this is a morally significant fact about the competence of normal adults—the only people whom we hold fully (but not “absolutely” or “deeply”) responsible. Remove the word “inexplicably” from exhibit A and the rhetorical question has a perfectly good answer: in many cases our freedom is an achievement, for which we are partly responsible. (Yes, luck plays a role, but so does skill; we are not just lucky (Dennett, 1984).)
(B) The problem for compatibilism runs deeper, however—for where is the freedom in wanting what one wants without any internal conflict whatsoever?
To answer a rhetorical question with another, so long as one can get what one wants so wholeheartedly, what could be better? What could be freer than that? Any realistic, reasonable account of free will acknowledges that we are stuck with some of our desires: for food and comfort and love and absence of pain—and the freedom to do what we want. We can’t not want these, or if we somehow succeed in getting ourselves into such a sorry state, we are pathological. These are the healthy, normal, sound, wise desires on which all others must rest. So banish the fantasy of any account of free will that is screwed so tight it demands that we aren’t free unless all our desires and meta-desires and meta-meta-desires are optional, choosable. Such “perfect” freedom is, of course, an incoherent idea, and if Harris is arguing against it, he is not finding a “deep” problem with compatibilism but a shallow problem with his incompatibilist vision of free will; he has taken on a straw man, and the straw man is beating him.
(C) Where is the freedom in being perfectly satisfied with your thoughts, intentions, and subsequent actions when they are the product of prior events that you had absolutely no hand in creating?
Not only has he not shown that you had absolutely no hand in creating those prior events; the claim is false, as just noted. Once you stop thinking of free will as a magical metaphysical endowment and start thinking of it as an explicable achievement that individual human beings normally accomplish (very much aided by the societies in which they live), much as they learn to speak and read and write, this rhetorical question falls flat. Infants don’t have free will; normal adults do. Yes, those of us who have free will are lucky to have free will (we’re lucky to be human beings, we’re lucky to be alive), but our free will is not just a given; it is something we are obliged to protect and nurture, with help from our families and friends and the societies in which we live.
Harris allows himself one more rhetorical question on page 19, and this one he emphatically answers:
(D) Am I free to do that which does not occur to me to do? Of course not.
Again, really? You’re playing bridge and trying to decide whether or not to win the trick in front of you. You decide to play your ace, winning the trick. Were you free to play a low card instead? It didn’t occur to you (it should have, but you acted rather thoughtlessly, as your partner soon informs you). Were you free to play your six instead? In some sense. We wouldn’t play games if there weren’t opportunities in them to make one choice or another. But, comes the familiar rejoinder, if determinism is true and we rewound the tape of time and put you in exactly the same physical state, you’d ignore the six of clubs again. True, but so what? It does not show that you are not the agent you think you are. Contrast your competence at this moment with the “competence” of a robotic bridge-playing doll that always plays its highest card in the suit, no matter what the circumstances. It wasn’t free to choose the six, because it would play the ace whatever the circumstances were, whereas if it occurred to you to play the six, you could do it, depending on the circumstances. Freedom involves the ability to have one’s choices influenced by changes in the world that matter under the circumstances. Not a perfect ability, but a reliable ability. If you are such a terrible bridge player that you can never see the virtue in ducking a trick, playing less than the highest card in your hand, then your free will at the bridge table is seriously abridged: you are missing the opportunities that make bridge an interesting game. If determinism is true, are these real opportunities? Yes, as real as an opportunity could be: thanks to your perceptual apparatus, your memory, and the well-lit environment, you are caused/determined to evaluate the situation as one that calls for playing the six, and you play the six.
Turn to page 20 and get one more rhetorical question:
(E) And there is no way I can influence my desires—for what tools of influence would I use? Other desires?
Yes, for starters. Once again, Harris is ignoring a large and distinguished literature that defends this claim. We use the same tools to influence our own desires as we use to influence other people’s desires. I doubt that he is denying that we ever influence other people’s desires. His book is apparently an attempt to influence the beliefs and desires of his readers, and it seems to have worked rather better than I would like. His book also seems to have influenced his own beliefs and desires: writing it has blinded him to alternatives that he really ought to have considered. So his obliviousness is something for which he himself is partly responsible, having labored to create a mindset that sees compatibilism as deliberately obtuse.
When Harris turns to a consideration of my brand of compatibilism, he quotes at length from a nice summary of it by Tom Clark, notes that I have approved of that summary, and then says that it perfectly articulates the difference between my view and his own. And this is his rebuttal:
As I have said, I think compatibilists like Dennett change the subject: They trade a psychological fact—the subjective experience of being a conscious agent—for a conceptual understanding of ourselves as persons. This is a bait and switch. The psychological truth is that people feel identical to a certain channel of information in their conscious minds. Dennett is simply asserting that we are more than this—we are coterminous with everything that goes on inside our bodies, whether we are conscious of it or not. This is like saying we are made of stardust—which we are. But we don’t feel like stardust. And the knowledge that we are stardust is not driving our moral intuitions or our system of criminal justice. (p. 23)
I have thought long and hard about this passage, and I am still not sure I understand it, since it seems to be at war with itself. Harris apparently thinks you see yourself as a conscious witness, perhaps immaterial—an immortal soul, perhaps—that is distinct from (the rest of?) your brain. He seems to be saying that this folk understanding people have of what they are identical to must be taken as a “psychological fact” that anchors any discussion of free will. And then he notes that I claim that this folk understanding is just plain wrong and try to replace it with a more scientifically sound version of what a conscious person is. Why is it “bait and switch” if I claim to improve on the folk version of personhood before showing how it allows for free will? He can’t have it both ways. He is certainly claiming in his book that the dualism that is uncritically endorsed by many, maybe most, people is incoherent, and he is right—I’ve argued the same for decades. But then how can he object that I want to replace the folk conception of free will based on that nonsense with a better one? The fact that the folk don’t feel as if they are larger than their imagined Cartesian souls doesn’t count against my account, since I am proposing to correct the mistake manifest in that “psychological fact” (if it is one). And if Harris thinks that it is this folk notion of free will that “drives our moral intuitions and our legal system” he should tackle the large literature that says otherwise (starting with, e.g., Stephen Morse).
One more rhetorical question:
(G) How can we be ‘free’ as conscious agents if everything that we consciously intend is caused by events in our brain that we do not intend and of which we are entirely unaware? We can’t. (pp. 25-26)
Let’s take this apart, separating its elements. First let’s try dropping the last clause: “of which we are entirely unaware”.
How can we be ‘free’ as conscious agents if everything that we consciously intend is caused by events in our brain that we do not intend?
Well, if the events that cause your intentions are thoughts about what the best course of action probably is, and why it is the right thing to do, then that causation strikes me as the very epitome of freedom: you have the ability to intend exactly what you think to be the best course of action. When folks lack that ability, when they find they are unable to act intentionally on the courses of action they deem best, all things considered, we say they suffer from weakness of will. An intention that was an apparently causeless orphan, arising for no discernible reason, would hardly be seen as free; it would be viewed as a horrible interloper, as in alien hand syndrome, imposed on the agent from who knows where.
Now let’s examine the other half of Harris’s question:
How can we be “free” as conscious agents if everything that we consciously intend is caused by events in our brain of which we are entirely unaware?
I don’t always have to reflect, consciously, on my reasons for my intentions for them to be both mine and free. When I say “thank you” to somebody who gives me something, it is “force of habit” and I am entirely unaware of the events in my brain that cause me to say it, but it is nonetheless a good example of a free action. Had I had a reason to override the habit, I would have overridden it. My not doing so tacitly endorses it as an action of mine. Most of the intentions we frame are like this, to one degree or another: we “instinctively” reach out and pull the pedestrian to safety without time for thinking; we rashly adopt a sarcastic tone when replying to the police officer; we hear the doorbell and jump up to see who’s there. These are all voluntary actions for which we are normally held responsible if anything hinges on them. Harris notes that the voluntary/involuntary distinction is a valuable one, but doesn’t consider that it might be part of the foundation of our moral and legal understanding of free will. Why not? Because he is so intent on bashing a caricature doctrine.
He ends his chapter on compatibilism with this:
People feel that they are the authors of their thoughts and actions, and this is the only reason why there seems to be a problem of free will worth talking about. (p. 26)
I can agree with this, if I am allowed to make a small insertion:
People feel that they are the authors of their thoughts and actions, and interpreted uncharitably, their view can be made to appear absurd; taken the best way, however, they can be right; and this is the only reason why there seems to be a problem of free will worth talking about.
One more puzzling assertion:
Thoughts like “What should I get my daughter for her birthday? I know—I’ll take her to a pet store and have her pick out some tropical fish” convey the apparent reality of choices, freely made. But from a deeper perspective (speaking both objectively and subjectively) thoughts simply arise unauthored and yet author our actions. (p. 53)
What would an authored thought look like, pray tell? And how can unauthored thoughts author our actions? Does Harris mean cause, shape and control our actions? But if an unauthored thought can cause, shape and control something, why can’t a whole person cause, shape and control something? Probably this was misspeaking on Harris’s part. He should have said that unauthored thoughts are the causes, shapers and controllers—but not the authors—of our actions. Nothing could be an author, not really. But here again Harris is taking an everyday, folk notion of authorship and inflating it into metaphysical nonsense. If he can be the author of his book, then he can be the author of his thoughts. If he is not the author of Free Will, he should take his name off the cover, shouldn’t he? But he goes on immediately to say he is the cause of his book, and “If I had not decided to write this book, it wouldn’t have written itself.”
Decisions, intentions, efforts, goals, willpower, etc., are causal states of the brain, leading to specific behaviors, and behaviors lead to outcomes in the world. Human choice, therefore, is as important as fanciers of free will believe. But the next choice you make will come out of the darkness of prior causes that you, the conscious witness of your experience, did not bring into being. (p. 34)
We’ve already seen that the last sentence is false. But notice that if it were true, then it would be hard to see why “human choice is important”—except in the way lightning bolts are important (they can do a lot of damage). If your choices “come out of the darkness” and you did not bring them into being, then they are like the involuntary effusions of sufferers from Tourette’s Syndrome, who blurt out obscenities and make gestures that are as baffling to them as to others. In fact we know very well that I can influence your choices, and you can influence my choices, and even your own choices, and that this “bringing into being” of different choices is what makes them morally important. That’s why we exhort and chastise and instruct and praise and encourage and inform others and ourselves.
Harris draws our attention to how hard it can be to change our bad habits, in spite of reading self-help books and many self-admonitions. These experiences, he notes, “are not even slightly suggestive of freedom of the will” (p. 35). True, but then other experiences we have are often very suggestive of free will. I make a promise, I solemnly resolve to keep it, and happily, I do! I hate grading essays, but recognizing that my grades are due tomorrow, I reluctantly sit down and grind through them. I decide to drive to Boston and lo and behold, the next thing I know I’m behind the wheel of my car driving to Boston! If I could almost never do such things I would indeed doubt my own free will, and toy with the sad conclusion that somewhere along the way I had become a helpless victim of my lazy habits and no longer had free will. Entirely missing from Harris’s account—and it is not a lacuna that can be repaired—is any acknowledgment of the morally important difference between normal people (like you and me and Harris, in all likelihood) and people with serious deficiencies in self-control. The reason he can’t include this missing element is that his whole case depends in the end on insisting that there really is no morally relevant difference between the raving psychopath and us. We have no more free will than he does. Well, we have more something than he does, and it is morally important. And it looks very much like what everyday folks often call free will.
Of course you can create a framework in which certain decisions are more likely than others—you can, for instance, purge your house of all sweets, making it very unlikely that you will eat dessert later in the evening—but you cannot know why you were able to submit to such a framework today when you weren’t yesterday. (p. 38)
Here he seems at first to be acknowledging the very thing I said was missing in his account above—the fact that you can take steps to bring about an alteration in your circumstances that makes a difference to your subsequent choices. But notice that his concession is short-lived, because he insists that you are just as in the dark about how your decision to purge your house of all sweets came about. But that is, or may well be, false. You may know exactly what train of thought led you to that policy. But, comes the rejoinder, you can’t know why that train of thought occurred to you, and moved you then. No, you can, and often do. Maybe your candy-banishing is the nth-level result of your deciding to decide to decide to decide to decide . . . . to do something about your health. But since the regress is infinite, you can’t be responsible! Nonsense. You can’t be “ultimately responsible” (as Galen Strawson has argued) but so what? You can be partially, largely responsible.
I cannot resist ending this catalogue of mistakes with the one that I find most glaring: the cover of Harris’s little book, which shows marionette strings hanging down. The point, which he reiterates several times in the book, is that the prior causes (going back to the Big Bang, if you like) that determine your choices are like the puppeteer who determines the puppet’s every action, every “decision.” This analogy enables him to get off a zinger:
Compatibilism amounts to nothing more than an assertion of the following creed: A puppet is free as long as he loves his strings. (p. 20)
This is in no way supported by anything in his discussion of compatibilism. Somehow Harris has missed one of the deepest points made by John von Neumann and Oskar Morgenstern in the introduction to their ground-breaking 1953 book, Theory of Games and Economic Behavior (Princeton UP). Whereas Robinson Crusoe alone on his desert island can get by with probabilities and expected utility theory, as soon as there is a second agent to deal with, he needs to worry about feedback, secrecy and the intentions of the other agent or agents (what I have called intentional systems). For this he needs game theory. There is a fundamental difference between an environment with no competing agents and an environment populated with would-be manipulators. The manifold of causes that determine our choices only intermittently includes other agents, and when they are around they do indeed represent a challenge to our free will, since they may well try to read our minds and covertly influence our beliefs, but the environment in general is not such an agent, and hence is no puppeteer. When sunlight bouncing off a ripe apple causes me to decide to reach up and pick it off the tree, I am not being controlled by that master puppeteer, Captain Worldaroundme. I am controlling myself, thanks to the information I garner from the world around me. Please, Sam, don’t feed the bugbears. (Dennett, 1984)
Harris half recognizes this when later in the book he raises puppets one more time:
It is one thing to bicker with your wife because you are in a bad mood; it is another to realize that your mood and behavior have been caused by low blood sugar. This understanding reveals you to be a biochemical puppet, of course, but it also allows you to grab hold of one of your strings. A bite of food may be all that your personality requires. Getting behind our conscious thoughts and feelings can allow us to steer a more intelligent course through our lives (while knowing, of course, that we are ultimately being steered). (p. 47)
So unlike the grumpy child (or moody bear), we intelligent human adults can “grab hold of one of our strings”. But then if our bodies are the puppets and we are the puppeteers, we can control our bodies, and thereby our choices, and hence can be held responsible—really but not Ultimately responsible—for our actions and our characters. We are not immaterial souls but embodied rational agents, determined (in two senses) to do what is right, most of the time, and ready to be held responsible for our deeds.
Harris, like the other scientists who have recently mounted a campaign to convince the world that free will is an illusion, has a laudable motive: to launder the ancient stain of Sin and Guilt out of our culture, and abolish the cruel and all too usual punishments that we zestfully mete out to the Guilty. As they point out, our zealous search for “justice” is often little more than our instinctual yearning for retaliation dressed up to look respectable. The result, especially in the United States, is a barbaric system of imprisonment—to say nothing of capital punishment—that should make all citizens ashamed. By all means, let’s join hands and reform the legal system, reduce its excesses and restore a measure of dignity—and freedom!—to those whom the state must punish. But the idea that all punishment is, in the end, unjustifiable and should be abolished because nobody is ever really responsible, because nobody has “real” free will is not only not supported by science or philosophical argument; it is blind to the chilling lessons of the not so distant past. Do we want to medicalize all violators of the laws, giving them indefinitely large amounts of involuntary “therapy” in “asylums” (the poor dears, they aren’t responsible, but for the good of the society we have to institutionalize them)? I hope not. But then we need to recognize the powerful (consequentialist) arguments for maintaining a system of punishment (and reward). Punishment can be fair, punishment can be justified, and in fact, our societies could not manage without it.
This discussion of punishment versus medicalization may seem irrelevant to Harris’s book, and an unfair criticism, since he himself barely alludes to it, and offers no analysis of its possible justification, but that is a problem for him. He blandly concedes we will—and should—go on holding some people responsible but then neglects to say what that involves. Punishment and reward? If not, what does he mean? If so, how does he propose to regulate and justify it? I submit that if he had attempted to address these questions he would have ended up with something like this:
Those eligible for punishment and reward are those with the general abilities to respond to reasons (warnings, threats, promises) rationally. Real differences in these abilities are empirically discernible, explicable, and morally relevant. Such abilities can arise and persist in a deterministic world, and they are the basis for a justifiable policy of reward and punishment, which brings society many benefits—indeed makes society possible. (Those who lack one or another of the abilities that constitute this moral competence are often said, by everyday folk, to lack free will, and this fact is the heart of compatibilism.)
If you think that the fact that incompatibilist free will is an illusion demonstrates that no punishment can ever be truly deserved, think again. It may help to consider all these issues in the context of a simpler phenomenon: sports. In basketball there is the distinction between ordinary fouls and flagrant fouls, and in soccer there is the distinction between yellow cards and red cards, to list just two examples. Are these distinctions fair? Justified? Should Harris be encouraged to argue that there is no real difference between the dirty player and the rest (and besides, the dirty player isn’t responsible for being a dirty player; just look at his upbringing!)? Everybody who plays games must recognize that games without strictly enforced rules are not worth playing, and the rules that work best do not make allowances for differences in heritage, training, or innate skill. So it is in society generally: we are all considered equal under the law, presumed to be responsible until and unless we prove to have some definite defect or infirmity that robs us of our free will, as ordinarily understood.
- The random swerve or clinamen is an idea going back to Lucretius more than two thousand years ago, and has been seductive ever since.↩
- Eddy Nahmias, Stephen Morris, Thomas Nadelhoffer, and Jason Turner, 2005, “Surveying Freedom: Folk Intuitions About Free Will and Moral Responsibility,” Philosophical Psychology, 18, pp. 561–584.↩
- Given the ocean of evidence that people assess human abilities, including their abilities to do or choose otherwise, by methods that make no attempt to clamp conditions “precisely as they were,” overlooking this prospect has required nearly superhuman self-blinkering by incompatibilists. I consider Austin’s mistake to be the core of the ongoing confusion about free will; if you look at the large and intricate philosophical literature about incompatibilism, you will see that just about everyone assumes, without argument, that it is not a mistake. Without that assumption the interminable discussions of van Inwagen’s “Consequence Argument” could not be formulated, for instance. The excellent article on “Arguments for Incompatibilism” in the online Stanford Encyclopedia of Philosophy cites Austin’s essay but does not discuss this question.↩
- Here more than anywhere else we can be grateful to Harris for his forthrightness, since the distinguished scientists who declare that free will is an illusion almost never have much if anything to say about how they think people should treat each other in the wake of their discovery. If they did, they would land in the difficulties Harris encounters. If nobody is responsible, not really, then not only should the prisons be emptied, but no contract is valid, mortgages should be abolished, and we can never hold anybody to account for anything they do. Preserving “law and order” without a concept of real responsibility is a daunting task. Harris at least recognizes his—dare I say?—responsibility to deal with this challenge. ↩
- “I’m writing a book on magic,” I explain, and I’m asked, “Real magic?” By real magic people mean miracles, thaumaturgical acts, and supernatural powers. “No,” I answer: “Conjuring tricks, not real magic.” Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic. (p. 425) – Lee Siegel, Net of Magic↩
- Morse, “The Non-Problem of Free Will in Forensic Psychiatry and Psychology,” Behavioral Sciences and the Law, Vol. 25 (2007), pp. 203-220; Morse, “Determinism and the Death of Folk Psychology: Two Challenges to Responsibility from Neuroscience,” Minnesota Journal of Law, Science, and Technology, Vol. 9 (2008), pp. 1-36, at pp. 3-13.↩
- 2.2.2. Crusoe is given certain physical data (wants and commodities) and his task is to combine and apply them in such a fashion as to obtain a maximum resulting satisfaction. There can be no doubt that he controls exclusively all the variables upon which this result depends—say the allotting of resources, the determination of the uses of the same commodity for different wants, etc. Thus Crusoe faces an ordinary maximum problem, the difficulties of which are of a purely technical—and not conceptual—nature, as pointed out. 2.2.3. Consider now a participant in a social exchange economy. His problem has, of course, many elements in common with a maximum problem. But it also contains some, very essential, elements of an entirely different nature. He too tries to obtain an optimum result. But in order to achieve this, he must enter into relations of exchange with others. If two or more persons exchange goods with each other, then the result for each one will depend in general not merely upon his own actions but on those of the others as well. Thus each participant attempts to maximize a function (his above-mentioned “result”) of which he does not control all variables. This is certainly no maximum problem, but a peculiar and disconcerting mixture of several different maximum problems. Every participant is guided by another principle and neither determines all variables which affect his interest. This kind of problem is nowhere dealt with in classical mathematics. (Von Neumann and Morgenstern, pp. 10–11)↩
- Apparently some thinkers have the idea that any justification of punishment is (by definition?) retributive. But this is a mistake; there are consequentialist justifications of the “retributive” ideas of just deserts and the mens rea requirement for guilt, for instance. Consider how one can defend the existence of the red card/yellow card distinction in soccer on purely consequentialist grounds. ↩
(Photo via Katinka Matson)
Science advances by discovering new things and developing new ideas. Few truly new ideas are developed without abandoning old ones first. As theoretical physicist Max Planck (1858-1947) noted, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” In other words, science advances by a series of funerals. Why wait that long?
Ideas change, and the times we live in change. Perhaps the biggest change today is the rate of change. What established scientific idea is ready to be moved aside so that science can advance?
Our Narrow Definition of “Science”
Search your mind, or pay attention to the conversations you have with other people, and you will discover that there are no real boundaries between science and philosophy—or between those disciplines and any other that attempts to make valid claims about the world on the basis of evidence and logic. When such claims and their methods of verification admit of experiment and/or mathematical description, we tend to say that our concerns are “scientific”; when they relate to matters more abstract, or to the consistency of our thinking itself, we often say that we are being “philosophical”; when we merely want to know how people behaved in the past, we dub our interests “historical” or “journalistic”; and when a person’s commitment to evidence and logic grows dangerously thin or simply snaps under the burden of fear, wishful thinking, tribalism, or ecstasy, we recognize that he is being “religious.”
The boundaries between true intellectual disciplines are currently enforced by little more than university budgets and architecture. Is the Shroud of Turin a medieval forgery? This is a question of history, of course, and of archaeology, but the techniques of radiocarbon dating make it a question of chemistry and physics as well. The real distinction we should care about—the observation of which is the sine qua non of the scientific attitude—is between demanding good reasons for what one believes and being satisfied with bad ones.
The scientific attitude can handle whatever happens to be the case. Indeed, if the evidence for the inerrancy of the Bible and the resurrection of Jesus Christ were good, one could embrace the doctrine of fundamentalist Christianity scientifically. The problem, of course, is that the evidence is either terrible or nonexistent—hence the partition we have erected (in practice, never in principle) between science and religion.
Confusion on this point has spawned many strange ideas about the nature of human knowledge and the limits of “science.” People who fear the encroachment of the scientific attitude—especially those who insist upon the dignity of believing in one or another Iron Age god—will often make derogatory use of words such as materialism, neo-Darwinism, and reductionism, as if those doctrines had some necessary connection to science itself.
There are, of course, good reasons for scientists to be materialist, neo-Darwinian, and reductionist. However, science entails none of those commitments, nor do they entail one another. If there were evidence for dualism (immaterial souls, reincarnation), one could be a scientist without being a materialist. As it happens, the evidence here is extraordinarily thin, so virtually all scientists are materialists of some sort. If there were evidence against evolution by natural selection, one could be a scientific materialist without being a neo-Darwinist. But as it happens, the general framework put forward by Darwin is as well established as any other in science. If there were evidence that complex systems produced phenomena that cannot be understood in terms of their constituent parts, it would be possible to be a neo-Darwinist without being a reductionist. For all practical purposes, that is where most scientists find themselves, because every branch of science beyond physics must resort to concepts that cannot be understood merely in terms of particles and fields. Many of us have had “philosophical” debates about what to make of this explanatory impasse. Does the fact that we cannot predict the behavior of chickens or fledgling democracies on the basis of quantum mechanics mean that those higher-level phenomena are something other than their underlying physics? I would vote “no” here, but that doesn’t mean I envision a time when we will use only the nouns and verbs of physics to describe the world.
But even if one thinks that the human mind is entirely the product of physics, the reality of consciousness becomes no less wondrous, and the difference between happiness and suffering no less important. Nor does such a view suggest that we will ever find the emergence of mind from matter fully intelligible; consciousness may always seem like a miracle. In philosophical circles, this is known as “the hard problem of consciousness”—some of us agree that this problem exists, some of us don’t. Should consciousness prove conceptually irreducible, remaining the mysterious ground for all we can conceivably experience or value, the rest of the scientific worldview would remain perfectly intact.
The remedy for all this confusion is simple: We must abandon the idea that science is distinct from the rest of human rationality. When you are adhering to the highest standards of logic and evidence, you are thinking scientifically. And when you’re not, you’re not.
Read 170 other responses on Edge.org.
In 2010, John Brockman and the Edge Foundation held a conference entitled “The New Science of Morality.” I attended along with Roy Baumeister, Paul Bloom, Joshua D. Greene, Jonathan Haidt, Marc Hauser, Joshua Knobe, Elizabeth Phelps, and David Pizarro. Some of our conversations have now been published in a book (along with many interesting essays) entitled Thinking: The New Science of Decision-Making, Problem-Solving, and Prediction.
John Brockman and Harper Collins have given me permission to reprint my edited remarks here.
What I intended to say today has been pushed around a little bit by what has already been said and by a couple of sidebar conversations. That is as it should be, no doubt. But if my remarks are less linear than you would hope, blame that—and the jet lag.
I think we should differentiate three projects that seem to me to be easily conflated, but which are distinct and independently worthy endeavors:
The first project is to understand what people do in the name of “morality.” We can look at the world, witnessing all of the diverse behaviors, rules, cultural artifacts, and morally salient emotions like empathy and disgust, and we can study how these things play out in human communities, both in our time and throughout history. We can examine all these phenomena in as nonjudgmental a way as possible and seek to understand them. We can understand them in evolutionary terms, and we can understand them in psychological and neurobiological terms, as they arise in the present. And we can call the resulting data and the entire effort a “science of morality.” This would be a purely descriptive science of the sort that I hear Jonathan Haidt advocating.
For most scientists, this project seems to exhaust all the legitimate points of contact between science and morality—that is, between science and judgments of good and evil and right and wrong. But I think there are two other projects that we could concern ourselves with, which are arguably more important.
The second project would be to actually get clearer about what we mean, and should mean, by the term “morality,” understanding how it relates to human well-being altogether, and to use this new discipline to think more intelligently about how to maximize human well-being. Of course, philosophers may think that this begs some of the important questions, and I’ll get back to that. But I think this is a distinct project, and it’s not purely descriptive. It’s a normative project. The question is, how can we think about moral truth in the context of science?
The third project is a project of persuasion: How can we persuade all of the people who are committed to silly and harmful things in the name of “morality” to change their commitments and to lead better lives? I think that this third project is actually the most important project facing humanity at this point in time. It subsumes everything else we could care about—from arresting climate change, to stopping nuclear proliferation, to curing cancer, to saving the whales. Any effort that requires that we collectively get our priorities straight and marshal our time and resources would fall within the scope of this project. To build a viable global civilization we must begin to converge on the same economic, political, and environmental goals.
Obviously the project of moral persuasion is very difficult—but it strikes me as especially difficult if you can’t figure out in what sense anyone could ever be right and wrong about questions of morality or about questions of human values. Understanding right and wrong in universal terms is Project Two, and that’s what I’m focused on.
There are impediments to thinking about Project Two: the main one being that most right-thinking, well-educated, and well-intentioned people—certainly most scientists and public intellectuals, and I would guess, most journalists—have been convinced that something in the last 200 years of intellectual progress has made it impossible to actually speak about “moral truth.” Not because human experience is so difficult to study or the brain too complex, but because there is thought to be no intellectual basis from which to say that anyone is ever right or wrong about questions of good and evil.
My aim is to undermine this assumption, which is now the received opinion in science and philosophy. I think it is based on several fallacies and double standards and, frankly, on some bad philosophy. The first thing I should point out is that, apart from being untrue, this view has consequences.
In 1947, when the United Nations was attempting to formulate a universal declaration of human rights, the American Anthropological Association stepped forward and said that it couldn’t be done—for this would be merely to foist one provincial notion of human rights on the rest of humanity. Any notion of human rights is the product of culture, and declaring a universal conception of human rights is an intellectually illegitimate thing to do. This was the best our social sciences could do with the crematoria of Auschwitz still smoking.
But, of course, it has long been obvious that we need to converge, as a global civilization, in our beliefs about how we should treat one another. For this, we need some universal conception of right and wrong. So in addition to just not being true, I think skepticism about moral truth actually has consequences that we really should worry about.
Definitions matter. And in science we are always in the business of framing conversations and making definitions. There is nothing about this process that condemns us to epistemological relativism or that nullifies truth claims. We define “physics” as, loosely speaking, our best effort to understand the behavior of matter and energy in the universe. The discipline is defined with respect to the goal of understanding how matter behaves.
Of course, anyone is free to define “physics” in some other way. A Creationist physicist could come into this room and say, “Well, that’s not my definition of physics. My physics is designed to match the Book of Genesis.” But we are free to respond to such a person by saying, “You know, you really don’t belong at this conference. That’s not ‘physics’ as we are interested in it. You’re using the word differently. You’re not playing our language game.” Such a gesture of exclusion is both legitimate and necessary. The fact that the discourse of physics is not sufficient to silence such a person, the fact that he cannot be brought into our conversation and subdued on our terms, does not undermine physics as a domain of objective truth.
And yet, on the subject of morality, we seem to think that the possibility of differing opinions is a deal breaker. The fact that someone can come forward and say that his morality has nothing to do with human flourishing—that it depends upon following shariah law, for instance—the very fact that such a position can be articulated proves that there’s no such thing as moral truth. Morality, therefore, must be a human invention. But this is a fallacy.
We have an intuitive physics, but much of our intuitive physics is wrong with respect to the goal of understanding how matter and energy behave in this universe. I am saying that we also have an intuitive morality, and much of our intuitive morality may be wrong with respect to the goal of maximizing human flourishing—and with reference to the facts that govern the well-being of conscious creatures, generally.
So I will argue, briefly, that the only sphere of legitimate moral concern is the well-being of conscious creatures. I’ll say a few words in defense of this assertion, but I think the idea that it has to be defended is the product of several fallacies and double standards that we’re not noticing. I don’t know that I will have time to expose all of them, but I’ll mention a few.
Thus far, I’ve introduced two things: the concept of consciousness and the concept of well-being. I am claiming that consciousness is the only context in which we can talk about morality and human values. Why is consciousness not an arbitrary starting point? Well, what’s the alternative? Just imagine someone coming forward claiming to have some other source of value that has nothing to do with the actual or potential experience of conscious beings. Whatever this is, it must be something that cannot affect the experience of anything in the universe, in this life or in any other.
If you put this imagined source of value in a box, I think what you would have in that box would be—by definition—the least interesting thing in the universe. It would be—again, by definition—something that cannot be cared about. Any other source of value will have some relationship to the experience of conscious beings. So I don’t think consciousness is an arbitrary starting point. When we’re talking about right and wrong, and good and evil, and about outcomes that matter, we are necessarily talking about actual or potential changes in conscious experience.
I would further add that the concept of “well-being” captures everything we can care about in the moral sphere. The challenge is to have a definition of well-being that is truly open-ended and can absorb everything we care about. This is why I tend not to call myself a “consequentialist” or a “utilitarian,” because traditionally, these positions have bounded the notion of consequences in such a way as to make them seem very brittle and exclusive of other concerns—producing a kind of body count calculus that only someone with Asperger’s could adopt.
Consider the Trolley Problem: If there just is, in fact, a difference between pushing a person onto the tracks and flipping a switch—perhaps in terms of the emotional consequences of performing these actions—well, then this difference has to be taken into account. Or consider Peter Singer’s Shallow Pond problem: We all know that it would take a very different kind of person to walk past a child drowning in a shallow pond, out of concern for getting his suit wet, than it takes to ignore an appeal from UNICEF. It says much more about you if you can walk past that pond. If we were all this sort of person, there would be terrible ramifications as far as the eye can see. It seems to me, therefore, that the challenge is to get clear about what the actual consequences of an action are, about what changes in human experience are possible, and about which changes matter.
In thinking about a universal framework for morality, I now think in terms of what I call a “moral landscape.” Perhaps there is a place in hell for anyone who would repurpose a cliché in this way, but the phrase, “the moral landscape” actually captures what I’m after: I’m envisioning a space of peaks and valleys, where the peaks correspond to the heights of flourishing possible for any conscious system, and the valleys correspond to the deepest depths of misery.
To speak specifically of human beings for the moment: any change that can effect a change in human consciousness would lead to a translation across the moral landscape. So changes to our genome, and changes to our economic systems—and changes occurring on any level in between that can affect human well-being for good or for ill—would translate into movements within this space of possible human experience.
A few interesting things drop out of this model: Clearly, it is possible, or even likely, that there are many peaks on the moral landscape. To speak specifically of human communities: perhaps there is a way to maximize human flourishing in which we follow Peter Singer as far as we can go, and somehow train ourselves to be truly dispassionate to friends and family, without weighting our children’s welfare more than the welfare of other children, and perhaps there’s another peak where we remain biased toward our own children, within certain limits, while correcting for this bias by creating a social system which is, in fact, fair. Perhaps there are a thousand different ways to tune the variable of selfishness versus altruism, to land us on a peak on the moral landscape.
However, there will be many more ways to not be on a peak. And it is clearly possible to be wrong about how to move from our present position to the nearest available peak. This follows directly from the observation that whatever conscious experiences are possible for us are a product of the way the universe is. Our conscious experience arises out of the laws of nature, the states of our brain, and our entanglement with the world. Therefore, there are right and wrong answers to the question of how to maximize human flourishing in any moment.
This becomes incredibly easy to see when we imagine there being only two people on earth: we can call them Adam and Eve. Ask yourself, are there right and wrong answers to the question of how Adam and Eve might maximize their well-being? Clearly there are. Wrong answer number one: they can smash each other in the face with a large rock. This will not be the best strategy to maximize their well-being.
Of course, there are zero-sum games they could play. And yes, they could be psychopaths who might utterly fail to collaborate. But, clearly, the best responses to their circumstance will not be zero-sum. The prospects of their flourishing and finding deeper and more durable sources of satisfaction will only be exposed by some form of cooperation. And all the worries that people normally bring to these discussions—like deontological principles or a Rawlsian concern about fairness—can be considered in the context of our asking how Adam and Eve can navigate the space of possible experiences so as to find a genuine peak of human flourishing, regardless of whether it is the only peak. Once again, multiple, equivalent but incompatible peaks still allow for a realistic space in which there are right and wrong answers to moral questions.
One thing we must not get confused about is the difference between answers in practice and answers in principle. Needless to say, fully understanding the possible range of experiences available to Adam and Eve represents a fantastically complicated problem. And it gets more complicated when we add another 7 billion people to the experiment. But I would argue that it’s not a different problem; it just gets more complicated.
By analogy, consider economics: Is economics a science yet? Apparently not, judging from the last few years. Maybe economics will never get better than it is now. Perhaps we’ll be surprised every decade or so by something terrible, and we’ll be forced to concede that we’re blinded by the complexity of our situation. But to say that it is difficult or impossible to answer certain problems in practice does not even slightly suggest that there are no right and wrong answers to these problems in principle.
The complexity of economics would never tempt us to say that there are no right and wrong ways to design economic systems, or to respond to financial crises. Nobody will ever say that it’s a form of bigotry to criticize another country’s response to a banking failure. Just imagine how terrifying it would be if the smartest people around all more or less agreed that we had to be nonjudgmental about everyone’s view of economics and about every possible response to a global economic crisis.
And yet that is exactly where we stand as an intellectual community on the most important questions in human life. I don’t think you have enjoyed the life of the mind until you have witnessed a philosopher or scientist talking about the “contextual legitimacy” of the burka, or of female genital excision, or any of these other barbaric practices that we know cause needless human misery. We have convinced ourselves that somehow science is by definition a value-free space and that we can’t make value judgments about beliefs and practices that needlessly derail our attempts to build happy and sane societies.
The truth is, science is not value-free. Good science is the product of our valuing evidence, logical consistency, parsimony, and other intellectual virtues. And if you don’t value those things, you can’t participate in the scientific conversation. I’m saying we need not worry about the people who don’t value human flourishing, or who say they don’t. We need not listen to people who come to the table saying, “You know, we want to cut the heads off adulterers at half-time at our soccer games because we have a book dictated by the Creator of the universe which says we should.” In response, we are free to say, “Well, you appear to be confused about everything. Your ‘physics’ isn’t physics, and your ‘morality’ isn’t morality.” These are equivalent moves, intellectually speaking. They are born of the same entanglement with real facts about the way the universe is. In terms of morality, our conversation can proceed with reference to facts about the changing experiences of conscious creatures. It seems to me to be just as legitimate, scientifically, to define “morality” in this way as it is to define “physics” in terms of the behavior of matter and energy. But most people engaged in the scientific study of morality don’t seem to realize this.
From the publisher: Daniel Kahneman on the power (and pitfalls) of human intuition and “unconscious” thinking • Daniel Gilbert on desire, prediction, and why getting what we want doesn’t always make us happy • Nassim Nicholas Taleb on the limitations of statistics in guiding decision-making • Vilayanur Ramachandran on the scientific underpinnings of human nature • Simon Baron-Cohen on the startling effects of testosterone on the brain • Daniel C. Dennett on decoding the architecture of the “normal” human mind • Sarah-Jayne Blakemore on mental disorders and the crucial developmental phase of adolescence • Jonathan Haidt, Sam Harris, and Roy Baumeister on the science of morality, ethics, and the emerging synthesis of evolutionary and biological thinking • Gerd Gigerenzer on rationality and what informs our choices
Peter Boghossian is a full-time faculty member in the philosophy department at Portland State University. He is also a national speaker for the Center for Inquiry, the Secular Student Alliance, and the Richard Dawkins Foundation for Reason and Science.
Peter was kind enough to answer a few questions about his new book, A Manual for Creating Atheists.
1. What was your goal in writing A Manual for Creating Atheists?
My primary goal was to give readers the tools to talk people out of faith and into reason.
2. How do you help readers accomplish this?
Almost everyone can relate to having had conversations with friends, family, or coworkers where you are left shaking your head and wondering how in the world they can believe what they believe—conversations in which they fully and uniformly dismiss every fact and piece of evidence presented to them. So the core piece of advice I give may at first sound counterintuitive, but it is simple: When speaking with people who hold beliefs based on faith, don’t get into a debate about facts or evidence or even their specific beliefs. Rather, get them to question the manner in which they’ve reached their beliefs—that is, get them to question the value of faith in appraising the world. Once they question the value of faith, all the unevidenced and unreasoned beliefs will inevitably collapse on their own. In that sense, the book is really about getting people to think critically—the atheism part is just a by-product. So my hope is that people won’t just read A Manual for Creating Atheists—they’ll act on it and put it to use. It’s a tool, and like any tool, it does no good unless it’s used.
The book draws from multiple domains of study—philosophy, psychology, cognitive neuroscience, psychotherapy, history, apologetics, even criminal justice and addiction medicine—and focuses principally on research designed to change the behavior of people who don’t think they have a problem and don’t want their behavior changed. This vast body of peer-reviewed literature forms the basis of the book, but the book also stems in large part from my own decades-long work using and teaching these techniques in prisons, colleges, seminaries, and hospitals, and even on the streets, where I’ve honed and revised them, improved upon what’s worked, and discarded what hasn’t. The result is a book that will get the reader quickly up to speed—through step-by-step guides and conversational templates—on all the academically grounded, street-tested techniques and tools required for talking people out of faith and superstition and into reason.
3. What is the most common logical error religious people make in their arguments for the existence of God?
Confirmation bias—although I think it’s less that they’re using a logical fallacy and more that the entire way they’ve conceptualized the problem is fallacious. In other words, they’ve started with their conclusion and reasoned backward from that conclusion. They’ve started with the idea not only that God exists but that a very specific God exists—and they’ve asked themselves how they know this is true. They’ve put their metaphysics before their epistemology.
4. Perhaps you should spell out what you mean by “epistemology” and why you think it’s important.
“Epistemology” basically means how one knows what one knows. In the context of a faith-based intervention, one can also look at epistemology as a belief-forming mechanism.
A key principle in helping people abandon faith and embrace reason is to focus on how one acquires knowledge. As I said, interventions should not target conclusions that someone holds, or specific beliefs, but the processes used to form beliefs. God, for example, is a conclusion arrived at as the result of a faulty epistemology.
For too long we’ve misidentified the problem. We’ve conceptualized it in terms of conclusions people hold, not mechanisms of belief formation they use. I’m advocating that we reconceptualize the problem of faith, God, and religion (and virtually every other instance of insufficiently evidenced belief) in terms of epistemology—that is, in terms of how people come to know what they think they know.
5. What do you consider to be the core commitments of a healthy epistemology?
1) An understanding that the way to improve the human condition is through reason, rationality, and science. Consequently, the words “reason” and “hope” would be forever wedded, as would the words “faith” and “despair.”
2) The willingness to revise one’s beliefs.
3) Saying “I don’t know” when one doesn’t know.
6. Do you think that the forces of reason are winning?
Yes. I think they are winning, and I think they will prevail.
Up to now, most atheists have simply criticized religion in various ways, but the point is to dispel it. In A Manual for Creating Atheists, Peter Boghossian fills that gap, telling the reader how to become a ‘street epistemologist’ with the skills to attack religion at its weakest point: its reliance on faith rather than evidence. This book is essential for nonbelievers who want to do more than just carp about religion, but want to weaken its odious grasp on the world.
—Jerry Coyne, author of Why Evolution Is True
Dr. Peter Boghossian’s A Manual for Creating Atheists is a precise, passionate, compassionate and brilliantly reasoned work that will illuminate any and all minds capable of openness and curiosity. This is not a bedtime story to help you fall asleep, but a wakeup call that has the best chance of bringing your rational mind back to life.
—Stefan Molyneux, host of Freedomain Radio, the largest and most popular philosophy show on the web
A book so great you can skip it and just read the footnotes. Pure genius.
—Christopher Johnson, co-founder, The Onion
(Photo via Shutterstock)
Last Christmas, my friends Mark and Jessica spent the morning opening presents with their daughter, Rachel, who had just turned four. After a few hours of excitement, feelings of holiday lethargy and boredom descended on the family—until Mark suddenly had a brilliant idea for how they could have a lot more fun.
Jessica was reading on the couch while Rachel played with her new dolls on the living room carpet.
“Rachel,” Mark said, “I need to tell you something very important… You can’t keep any of these toys. Mommy and I have decided to give them away to the other kids at your school.”
A look of confusion came over his daughter’s face. Mark caught Jessica’s eye. She recognized his intentions at once and was now struggling to contain her glee. She reached for their new video camera.
“You’ve had these toys long enough, don’t you think, Sweetie?”
“No, Daddy! These are my Christmas presents.”
“Not anymore. It’s time to say good-bye…”
Mark began gathering her new toys and putting them in a trash bag.
“They’re only toys, Rachel. Time to grow up!”
“Not my Polly Pockets! Not my Polly Pockets!”
The look of terror on his daughter’s face was too funny for words. Mark could barely speak. He heard Jessica struggling to stifle a laugh as she stepped around the couch with the camera so that she could capture all the action from the front. Mark knew that if he made eye contact with his wife, he would be lost.
“These Polly Pockets belong to another little girl now… She’s going to love them!”
That did the trick. His daughter couldn’t have produced a louder howl of pain had he smashed her knee with a hammer. Luckily, Jessica caught the moment close-up—her daughter’s hot tears of rage and panic nearly wet the lens.
Mark and Jessica immediately posted the footage of Rachel’s agony to YouTube, where 24 million people have now seen it. This has won them some small measure of fame, which makes them very happy.
No doubt, you will be relieved to learn that Mark, Jessica, and Rachel do not exist. In fact, I am confident that no one I know would treat their child this way. But this leaves me at a loss to explain the popularity of a morally identical stunt engineered for three years running by Jimmy Kimmel:
As you watch the above video and listen to the laughter of Kimmel’s studio audience, do your best to see the world from the perspective of these unhappy children. Admittedly, this can be difficult. Despite my feelings of horror over the whole project, a few of these kids made me laugh as well—some of them are just so adorably resilient in the face of parental injustice. However, I am convinced that anyone who takes pleasure in all this exploited cuteness is morally confused. Yes, we know that these kids will get their candy back in the end. But the kids themselves don’t know it, and the betrayal they feel is heartbreakingly genuine. This is no way to treat children.
It is true that the tears of a child must often be taken less seriously than those of an adult—because they come so freely. To judge from my daughter’s reaction in the moment, getting vaccinated against tetanus is every bit as bad as getting the disease. All parents correct for this distortion of reality—and should—so that they can raise their kids without worrying at every turn that they are heaping further torments upon the damned. Nevertheless, I am astonished at the percentage of people who find the Kimmel videos morally unproblematic. When I expressed my concern on Twitter, I received the following defenses of Kimmel and these misguided parents:
• People have to learn how to take a joke.
• It’s only candy. Kids need to realize that it doesn’t matter.
• Kids must be prepared for the real world, and pranks like this help prepare them. Now they know to take what authority figures say with a grain of salt.
• They won’t remember any of this when they are older—so there can’t be any lasting harm.
These responses are callous and crazy. A four-year-old cannot possibly learn that candy “doesn’t matter”—in fact, many adults can’t seem to learn this. But he can learn that his parents will lie to him for the purpose of making him miserable. He can also learn that they will find his suffering hilarious and that, at any moment, he might be shamed by those closest to him. True, he may not remember learning these lessons explicitly—unless he happens to watch the footage on YouTube as it surpasses a billion views—but he will, nevertheless, be a person who was raised by parents who played recklessly with his trust. It amazes me that people think the stakes in these videos are low.
My daughter is nearly five, and I can recall lying to her only once. We were looking for nursery rhymes on the Internet and landed on a page that showed a 16th-century woodcut of a person being decapitated. As I was hurriedly scrolling elsewhere, she demanded to know what we had just seen. I said something silly like “That was an old and very impractical form of surgery.” This left her suitably perplexed, and she remains unaware of man’s inhumanity to man to this day. However, I doubt that even this lie was necessary. I just wasn’t thinking very fast on my feet.
As parents, we must maintain our children’s trust—and the easiest way to lose it is by lying to them. Of course, we should communicate the truth in ways they can handle—and this often demands that we suppress details that would be confusing or needlessly disturbing. An important difference between children and (normal) adults is that children are not fully capable of conceiving of (much less looking out for) their real interests. Consequently, it might be necessary in some situations to pacify or motivate them with a lie. In my experience, however, such circumstances almost never arise.
Many people imagine that it is necessary to lie to children to make them feel good about themselves. But this makes little moral or intellectual sense. Especially with young children, the purpose of praise is to encourage them to try new things and enjoy themselves in the process. It isn’t a matter of evaluating their performance by reference to some external standard. The truth communicated by saying “That’s amazing” or “I love it” in response to a child’s drawing is never difficult to find or feel. Of course, things change when one is talking to an adult who wants to know how his work compares with the work of others. Here, we do our friends no favors by lying to them.
Strangely, the most common question I’ve received from readers on the topic of deception has been some version of the following:
What should we tell our children about Santa? My daughter asked if Santa was real the other day, and I couldn’t bear to disappoint her.
In fact, I’ve heard from several readers who seemed to anticipate this question, and who wrote to tell me how disturbed they had been when they learned that their parents had lied to them every Christmas. I’ve also heard from readers whose parents told the truth about Santa simply because they didn’t want the inevitable unraveling of the Christmas myth to cast any doubt on the divinity of Jesus Christ. I suppose some ironies are harder to detect than others.
I don’t remember whether I ever believed in Santa, but I was never tempted to tell my daughter that he was real. Christmas must be marginally more exciting for children who are duped about Santa—but something similar could be said of many phenomena about which no one is tempted to lie. Why not insist that dragons, mermaids, fairies, and Superman actually exist? Why not present the work of Tolkien and Rowling as history?
The real truth—which everyone knows 364 days of the year—is that fiction can be both meaningful and fun. Children have fantasy lives so rich and combustible that rigging them with lies is like putting a propeller on a rocket. And is the last child in class who still believes in Santa really grateful to have his first lesson in epistemology meted out by his fellow six-year-olds? If you deceive your children about Santa, you may give them a more thrilling experience of Christmas. What you probably won’t give them, however, is the sense that you would not and could not lie to them about anything else.
We live in a culture where the corrosive effect of lying is generally overlooked, and where people remain confused about the difference between truly harmless deceptions—such as the poetic license I took at the beginning of this article—and seemingly tiny lies that damage trust. I’ve written a short book about this. Its purpose is to convey, in less than an hour, one of the most significant ethical lessons I’ve ever learned: If you want to improve yourself and the people around you, you need only stop lying.
Paul Bloom is the Brooks and Suzanne Ragen Professor of Psychology at Yale University. His research explores how children and adults understand the physical and social world, with special focus on morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is a past president of the Society for Philosophy and Psychology and a co-editor of Behavioral and Brain Sciences, one of the major journals in the field. Dr. Bloom has written for scientific journals such as Nature and Science and for popular outlets such as The New York Times, The Guardian, The New Yorker, and The Atlantic. He is the author or editor of six books, including Just Babies: The Origins of Good and Evil.
Paul was kind enough to answer a few questions about his new book.
Harris: What are the greatest misconceptions people have about the origins of morality?
Bloom: The most common misconception is that morality is a human invention. It’s like agriculture and writing, something that humans invented at some point in history. From this perspective, babies start off as entirely self-interested beings—little psychopaths—and only gradually come to appreciate, through exposure to parents and schools and church and television, moral notions such as the wrongness of harming another person.
Now, this perspective is not entirely wrong. Certainly some morality is learned; this has to be the case because moral ideals differ across societies. Nobody is born with the belief that sexism is wrong (a moral belief that you and I share) or that blasphemy should be punished by death (a moral belief that you and I reject). Such views are the product of culture and society. They aren’t in the genes.
But the argument I make in Just Babies is that there also exist hardwired moral universals—moral principles that we all possess. And even those aspects of morality—such as the evils of sexism—that vary across cultures are ultimately grounded in these moral foundations.
A very different misconception sometimes arises, often stemming from a religious or spiritual outlook. It’s that we start off as Noble Savages, as fundamentally good and moral beings. From this perspective, society and government and culture are corrupting influences, blotting out and overriding our natural and innate kindness.
This, too, is mistaken. We do have a moral core, but it is limited—Hobbes was closer to the truth than Rousseau. Relative to an adult, your typical toddler is selfish, parochial, and bigoted. I like the way Kingsley Amis once put it: “It was no wonder that people were so horrible when they started life as children.” Morality begins with the genes, but it doesn’t end there.
Harris: How do you distinguish between the contributions of biology and those of culture?
Bloom: There is a lot you can learn about the mind from studying the fruit flies of psychological research—college undergraduates. But if you want to disentangle biology and culture, you need to look at other populations. One obvious direction is to study individuals from diverse cultures. If it turns out that some behavior or inclination shows up only in so-called WEIRD (Western, Educated, Industrialized, Rich, Democratic) societies, it’s unlikely to be a biological adaptation. For instance, a few years ago researchers were captivated by the fact that subjects in the United States and Switzerland are highly altruistic and highly moral when playing economic games. They assumed that this reflects the workings of some sort of evolved module—only to discover that people in the rest of the world behave quite differently, and that their initial findings are better explained as a quirk of certain modern societies.
One can do comparative research—if a human capacity is shared with other apes, then its origin is best explained in terms of biology, not culture. And there’s a lot of fascinating research with apes and monkeys that’s designed to address questions about the origin of pro-social behavior.
Then there’s baby research. We can learn a lot about human nature by looking at individuals before they are exposed to school, television, religious institutions, and the like. The powerful capacities that we and other researchers find in babies are strong evidence for the contribution of biology. Now, even babies have some life history, and it’s possible that very early experience, perhaps even in the womb, plays some role in the origin of these capacities. I’m comfortable with this—my claim in Just Babies isn’t that the moral capacities of babies emerge without any interaction with the environment. That would be nuts. Rather, my claim is the standard nativist one: These moral capacities are not acquired through learning.
We should also keep in mind that failure to find some capacity in a baby does not show that it is the product of culture. For one thing, the capacity might be present in the baby’s mind but psychologists might not be clever enough to detect it. In the immortal words of Donald Rumsfeld, “Absence of evidence is not evidence of absence.” Furthermore, some psychological systems that are pretty plainly biological adaptations might emerge late in development—think about the onset of disgust at roughly the age of four, or the powerful sexual desires that emerge around the time of puberty. Developmental research is a useful tool for pulling apart biology and culture, but it’s not a magic bullet.
Harris: What are the implications of our discovering that many moral norms emerge very early in life?
Bloom: Some people think that once we know what the innate moral system is, we’ll know how to live our lives. For them it’s as if the baby’s mind contains a holy text of moral wisdom, written by Darwin instead of Yahweh, and once we can read it, all ethical problems will be solved.
This seems unlikely. Mature moral decision-making involves complex reasoning, and often the right thing to do involves overriding our gut feelings, including those that are hardwired. And some moral insights, such as the wrongness of slavery, are surely not in our genes.
But I do think that this developmental work has some interesting implications. For one thing, the argument in Just Babies is that, to a great extent, all people have the same morality. The differences that we see—however important they are to our everyday lives—are variations on a theme. This universality provides some reason for optimism. It suggests that if we look hard enough, we can find common ground with any other neurologically normal human, and that has to be good news.
Just Babies is optimistic in another way. The zeitgeist in modern psychology is pro-emotion and anti-reason. Prominent writers and intellectuals such as David Brooks, Malcolm Gladwell, and Jonathan Haidt have championed the view that, as David Hume famously put it, we are slaves of the passions. From this perspective, moral judgments and moral actions are driven mostly by gut feelings—rational thought has little to do with it.
That’s a grim view of human nature. If it were true, we should buck up and learn to live with it. But I argue in Just Babies that it’s not true. It is refuted by everyday experience, by history, and by the science of developmental psychology. Rational deliberation is part of our everyday lives, and, as many have argued—including Steven Pinker, Peter Singer, Joshua Greene, you, and me, in the final chapter of Just Babies—it is a powerful force in driving moral progress.
Harris: When you talk about moral progress, it implies that some moralities are better than others. Do you think, then, that it is legitimate to say that certain individuals or cultures have the wrong morality?
Bloom: If humans were infinitely plastic, with no universal desires, goals, or moral principles, the answer would have to be no. But it turns out that we have deep commonalities, and so, yes, we can talk meaningfully about some moralities’ being better than others.
Consider a culture in which some minority is kept as slaves—tortured, raped, abused, bought and sold, and so on—and this practice is thought of by the majority as a moral arrangement. Perhaps it’s justified by reference to divine command, or the demands of respected authorities, or long-standing tradition. I think we’re entirely justified in arguing that they are wrong, and when we do this, we’re not merely saying “We like our way better.” Rather, we can argue that it’s wrong by pointing out that it’s wrong even for them—the majority who benefit from the practice.
Obstetricians used to deliver babies without washing their hands, and many mothers and babies died as a result. They were doing it wrong—wrong by their own standards, because obstetricians wanted to deliver babies, not kill them. Similarly, given that the humans in the slave society possess certain values and intuitions and priorities, they are acting immorally by their own lights, and they would appreciate this if they were exposed to certain arguments and certain facts.
Now, this is an empirical claim, drawing on assumptions about human psychology, but it’s supported by history. Good moral ideas can spread through the world in much the same way that good scientific ideas can, and once they are established, people marvel that they could ever have thought differently. Americans are no more likely to reinstate slavery than we are to give up on hand-washing for doctors.
You’ve written extensively on these issues in The Moral Landscape and elsewhere, and since we agree on so much, I can’t resist sounding a note of gentle conflict. Your argument is that morality is about maximizing the well-being of conscious minds. This means that determining the best moral system reduces to the empirical/scientific question of what system best succeeds at this goal. From this standpoint, we can reject a slave society for precisely the same reason we can reject a dirty-handed-obstetrician society—it involves needless human pain.
My view is slightly different. You’re certainly right that maximizing well-being is something we value, and needless suffering is plainly a bad thing. But there remain a lot of hard questions—the sort that show up in Ethics 101 and never go away. Are we aiming for the maximum total amount of individual well-being or the highest average? Are principles of fairness and equality relevant? What if the slave society has very few unhappy slaves and very many happy slaveholders, so its citizens are, in total and on average, more fulfilled than ours? Is that society more moral? If my child needs an operation to save his sight, am I a better person if I let him go blind and send the money to a charity where it will save another child’s life? These are hard questions, and they don’t go away if we have a complete understanding of the empirical facts.
The source of these difficulties, I think, is that as reflective moral beings, we sometimes have conflicting intuitions as to what counts as morally good. If we were natural-born utilitarians of the Benthamite sort, then determining the best possible moral world really would be a straightforward empirical problem. But we aren’t, and so it isn’t.
Harris: Well, it won’t surprise you to learn that I agree with everything you’ve said up until this last bit. In fact, these last points illustrate why I choose not to follow the traditional lines laid down by academic philosophers. If you declare that you are a “utilitarian,” everyone who has taken Ethics 101, as you say, imagines that he understands the limits of your view. Unfortunately, those limits have been introduced by philosophers themselves and are enshrined in the way that we have been encouraged to talk about moral philosophy.
For instance, you suggest that a concern for well-being might be opposed to a concern for fairness and equality—but fairness and equality are immensely important precisely because they are so good at safeguarding the well-being of people who have competing interests. If someone says that fairness and equality are important for reasons that have nothing to do with the well-being of people, I have no idea what he is talking about.
Similarly, you suggest that the hard questions of ethics wouldn’t go away if we had a complete understanding of empirical facts. But we really must pause to appreciate just how unimaginably different things would be IF we had such an understanding. This kind of omniscience is probably impossible—but nothing in my account depends on its being possible in practice. All we need to establish a strong, scientific conception of moral truth in principle is to admit that there is a landscape of experiences that conscious beings like ourselves can have, both individually and collectively—and that some are better than others (in any and every sense of “better”). Must we really defend the proposition that an experience of effortless good humor, serenity, love, creativity, and awe spread over all possible minds would be better than everyone’s being flayed alive in a dungeon by unhappy devils? I don’t think so.
I agree that how we think about collective well-being presents certain difficulties (average vs. maximum, for instance)—but a strong conception of moral truth requires only that we acknowledge the extremes. It seems to me that the paradoxes that Derek Parfit has engineered here, while ingenious, need no more impede our progress toward increased well-being than the paradoxes of Zeno prevent us from getting to the coffee pot each morning. I admit that it can be difficult to say whether a society of unhappy egalitarians would be better or worse than one composed of happy slaveholders and none-too-miserable slaves. And if we tuned things just right, I would be forced to say that these societies are morally equivalent. However, one thing is not debatable (and it is all that my thesis as presented in The Moral Landscape requires): If you took either of these societies and increased the well-being of everyone, you would be making a change for the good. If, for instance, the slaveholders invented machines that could replace the drudgery of slaves, and the slaves themselves became happy machine owners—and these changes introduced no negative consequences that canceled the moral gains—this would be an improvement in moral terms. And any person who later attempted to destroy the machines and begin enslaving his neighbors would be acting immorally.
Again, the changes in well-being that are possible for creatures like ourselves are possible whether or not anyone knows about them, and their possibility depends in some way on the laws that govern the states of conscious minds in this universe (or any other).
Whatever its roots in our biology, I think we should now view morality as a navigation problem: How can we (or any other conscious system) reduce suffering and increase happiness? There might be an uncountable number of morally equivalent peaks and valleys on the landscape—but that wouldn’t undermine the claim that basking on some peak is better than being tortured in one of the valleys. Nor would it suggest that movement up or down depends on something other than the laws of nature.
Bloom: I agree with almost all of this. Sure—needless suffering is a bad thing, and increased well-being is a good thing, and that’s why I’m comfortable saying that some societies (and some individuals) have better moralities than others. I agree as well that determining the right moral system will rest in part on knowing the facts. This is true for the extremes, and it’s also true for real-world cases. The morality of drug laws in the United States, for instance, surely has a lot to do with whether those laws cause an increase or a decrease in human suffering.
My point was that there are certain moral problems that don’t seem to be solvable by science. You accept this but think that these are like paradoxes of metaphysics—philosophical puzzles with little practical relevance.
This is where we clash, because some of these moral problems keep me up at night. Take the problem of how much I should favor my own children. I spend money to improve my sons’ well-being—buying them books, taking them on vacations, paying dentists to fix their teeth, etc.—that could instead be used to save the lives of children in poor countries. I don’t need a neuroscientist to tell me that I’m not acting to increase the total well-being of conscious individuals. Am I doing wrong? Maybe so. But would you recommend the alternative, where (to use my earlier example) I let my son go blind so that I can send the money I would have paid for the operation to Oxfam so that another child can live? This seems grotesque. So what’s the right balance? How should we weigh the bonds of family, friendship, and community?
This is a serious problem of everyday life, and it’s not going to be solved by science.
Harris: Actually, I don’t think our views differ much. This just happens to be a place where we need to distinguish between answers in practice and answers in principle. I completely agree that there are important ethical problems that we might never solve. I also agree that there are circumstances in which we tend to act selfishly to a degree that beggars any conceivable philosophical justification. We are, therefore, not as moral as we might be. Is this really a surprise? As you know, the forces that rule us here are largely situational: It is one thing for you to toss an appeal from the Red Cross in the trash on your way to the ice cream store. It would be another for you to step over the prostrate bodies of starving children. You know such children exist, of course, and yet they are out of sight and (generally) out of mind. Few people would counsel you to let your own children go blind, but I can well imagine Peter Singer’s saying that you should deprive them of every luxury as long as other children are deprived of food. To understand the consequences of doing this, we would really need to take all the consequences into account.
I briefly discuss this problem in The Moral Landscape. I suspect that some degree of bias toward one’s own offspring could be normative in that it will tend to lead to better outcomes for everyone. Communism, many have noticed, appears to run so counter to human nature as to be more or less unworkable. But the crucial point is that we could be wrong about this—and we would be wrong with reference to empirical facts that we may never fully discover. To say that these answers will not be found through science is merely to say that they won’t be established with any degree of certainty or precision. But that is not to say that such answers do not exist. It is also possible to know exactly what we should do but to not be sufficiently motivated to do it. We often find ourselves in this situation in life. For example, a person desperately wants to lose weight and knows that he would be happier if he did. He also knows how to do it—by eating less junk and exercising more. And yet he may spend his whole life not doing what he knows would be good for him. In many respects, I think our morality suffers from this kind of lassitude.
But we can achieve something approaching moral certainty for the easy cases. As you know, many academics and intellectuals deny this. You and I are surrounded by highly educated and otherwise intelligent people who believe that opposition to the burqa is merely a symptom of Western provincialism. I think we agree that this kind of moral relativism rests on some very dubious (and unacknowledged) assumptions about the nature of morality and the limits of science. Let us go out on a scientific limb together: Forcing half the population to live inside cloth bags isn’t the best way to maximize individual or collective well-being. On the surface, this is a rather modest ethical claim. When we look at the details, however, we find that it is really a patchwork of claims about psychology, sociology, economics, and probably several other scientific disciplines. In fact, the moment we admit that we know anything at all about human well-being, we find that we cannot talk about moral truth outside the context of science. Granted, the scientific details may be merely implicit, or may remain perpetually out of reach. But we are talking about the nature of human minds all the same.
Bloom: We still have more to talk about regarding the hard cases, but I agree with you that there are moral truths and that we can learn about them, at least in part, through science. Part of the program of doing so is understanding human nature, and especially our universal moral sense, and this is what my research, and my new book, is all about.
I’ve noticed a happy trend in online video: People have begun to produce animations and mashups of public lectures that add considerable value to the spoken words. If you are unfamiliar with these visual essays, watch any of the RSA Animate videos, like the one below:
People have also taken excerpts from my own lectures and combined them with stock footage. For example:
I would like to encourage this behavior. To that end, I am offering the following voice-over tracks, adapted from one of my debates.
The first is an edited excerpt from the debate itself:
The second is a reading of a similar text:
These audio files are yours to use any way you see fit. And if you produce something especially creative, I will do my best to bring attention to your work.
Watch the above video. (Then watch it again.) And then read the (unedited and uncorrected) description of this footage written by the organizers of this Muslim “peace conference”:
When Muslim organizations invite Shaykhs who speak openly about the values of Islam, the Islamophobic western media starts murdering the character of that organization and the invited speaker. The question these Islamophobic journalists need to reflect upon is; are these so called ‘‘radical’’ views that they criticize endorsed only by these few individuals being invited around the globe, or does the common Muslims believe in them. If the common Muslims believe in these values that means that more or less all Muslims are radical and that Islam is a radical religion. Since this is not the case, as Islam is a peaceful religion and so are the masses of common Muslims, these Shaykhs cannot be radical. Rather it is Islamophobia from the ignorant western media who is more concerned about making money by alienating Islam by presenting Muslims in this way. Islam Net, an organization in Norway, invited 9 speakers to Peace Conference Scandinavia 2013. These speakers would most likely be labelled as ‘‘extremists’’ if the media were to write about the conference. But how come this conference was the largest Islamic Scandinavian International event that has taken place in Norway with about 4000 people attending? Were the majority of those who attended in opposition to what the speakers were preaching? If so, how come they paid to enter? Let’s forget about that for a moment, let’s imagine that we don’t really knew what all these people thought about for example segregation of men and women, or stoning to death of those who commit adultery. The Chairman of Islam Net, Fahad Ullah Qureshi asked the audience, and the answer was clear. The attendees were common Sunni Muslims. They did not consider themselves as radicals or extremists. They believed that segregation was the right thing to do, both men and women agreed upon this. They even supported stoning or whatever punishment Islam or prophet Muhammad (peace be upon him) commanded for adultery or any other crime. 
They even believed that these practises should be implemented around the world. Now what does that tell us? Either all Muslims and Islam is radical, or the media is Islamophobic and racist in their presentation of Islam. Islam is not radical, nor is Muslims in general radical. That means that the media is the reason for the hatred against Muslims, which is spreading among the non-Muslims in western countries.
This is a remarkable document. Read it closely, and you will pass through the looking glass. The organizers of this conference believe (with good reason) that “extremist” views are not rare among Muslims, even in the West. And they consider the media’s denial of this fact to be a symptom of… Islamophobia. The serpent of obscurantism has finally begun to devour its own tail. Apparently, it is a sign of racism to imagine that only a tiny minority of Muslims could actually condone the subjugation of women and the murder of apostates. How dare you call us “extremists” when we represent so many? We are not extreme. This is Islam. They have a point. And it is time for secular liberals and (truly) moderate Muslims to stop denying it.
A young man enters a public place—a school, a shopping mall, an airport—carrying a small arsenal. He begins killing people at random. He has no demands, and no one is spared. Eventually, the police arrive, and after an excruciating delay as they marshal their forces, the young man is brought down.
This has happened many times, and it will happen again. After each of these crimes, we lose our innocence—but then innocence magically returns. In the aftermath of horror, grief, and disbelief, we seem to learn nothing of value. Indeed, many of us remain committed to denying the one thing of value that is there to be learned.
After the Boston Marathon bombing, a journalist asked me, “Why is it always angry young men who do these terrible things?” She then sought to connect the behavior of the Tsarnaev brothers with that of Jared Loughner, James Holmes, and Adam Lanza. Like many people, she believed that similar actions must have similar causes.
But there are many sources of human evil. And if we want to protect ourselves and our societies, we must understand this. To that end we should differentiate at least four types of violent actor.
1. Those who are suffering from some form of mental illness that causes them to think and act irrationally. Given access to guns or explosives, these people may harm others for reasons that wouldn’t make a bit of sense even if they could be articulated. We may never hear Jared Loughner and James Holmes give accounts of their crimes, and we do not know what drove Adam Lanza to shoot his mother in the face and then slaughter dozens of children. But these mass murderers appear to be perfect examples of this first type. Aaron Alexis, the Navy Yard shooter, is yet another. What provoked him? He repeatedly complained that he was being bombarded with “ultra low frequency” electromagnetic waves. Apparently, he thought that killing people at random would offer some relief. It seems there is little to understand about the experiences of these men or about their beliefs, except as symptoms of underlying mental illness.
2. Prototypically evil psychopaths who feel no empathy for others and may even derive sadistic pleasure from making the innocent suffer. These people are not delusional. They are malignantly selfish, ruthless, and prone to violence. Our maximum-security prisons are full of such men. Given half a chance and half a reason, psychopaths will harm others—because that is what psychopaths do.
It is worth observing that these first two types trouble us for reasons that have nothing to do with culture, ideology, or any other social variable. Of course, it matters if a psychotic or a psychopath happens to be the head of a nation, or otherwise has power and influence. That is what is so abhorrent about North Korea: The child king is mad, or simply evil, and he’s building a nuclear arsenal while millions starve. But even here, very little is to be learned about what we—the billions of relatively normal human beings struggling to maintain open societies—are doing wrong. We didn’t create Jared Loughner (apart from making it too easy for him to get a gun), and we didn’t create Kim Jong-un (apart from making it too easy for him to get nuclear bombs). Given access to powerful weapons, such people will pose a threat no matter how rational, tolerant, or circumspect we become.
3. Normal men and women who cause immense harm while believing that they are doing the right thing—or while neglecting to notice the consequences of their actions. These people are not insane, and they’re not necessarily bad; they are just part of a system in which the negative consequences of ordinary selfishness and fear can become horribly magnified. Think of a soldier fighting in a war that may be ill conceived, or even unjust, but who has no rational alternative but to defend himself and his friends. Think of a boy growing up in the inner city who joins a gang for protection, only to perpetuate the very cycle of violence that makes gang membership a necessity. Or think of a CEO whose short-term interests motivate him to put innocent lives, the environment, or the economy itself in peril. Most of these people aren’t monsters. However, they can easily create suffering for others that only a monster would bring about by design. This is the true “banality of evil”—whatever Hannah Arendt actually meant by that phrase—but it is worth remembering that not all evil is banal.
4. Those who are moved by ideology to waste their lives in extraordinary ways while doing intolerable harm to others in the process. Some of these belief systems are merely political, or otherwise secular, in that their aim is to bring about specific changes in this world. But the worst of these doctrines are religious—whether or not they are attached to a mainstream religion—in that they are informed by ideas about otherworldly rewards and punishments, prophecies, magic, and so forth, which are especially conducive to fanaticism and self-sacrifice.
Of course, a person can inhabit more than one of the above categories at once—and thus have his antisocial behavior overdetermined. There must be someone somewhere who is simultaneously psychotic and psychopathic, part of a corrupt system, and devoted to a dangerous, transcendent cause. But many examples of each of these types exist in their pure forms.
For instance, in recent weeks, a spate of especially appalling jihadist attacks occurred—one on a shopping mall in Nairobi, where non-Muslims appear to have been systematically tortured before being murdered; one on a church in Peshawar; and one on a school playground in Baghdad, targeting children. Whenever I point out the role that religious ideology plays in atrocities of this kind—specifically the Islamic doctrines related to jihad, martyrdom, apostasy, and so forth—I am met with some version of the following: “Bad people will always do these things. Religion is nothing more than a pretext.” This is an increasingly dangerous misconception to have about the human mind.
Here is my pick for the most terrifying and depressing phenomenon on earth: A smart, capable, compassionate, and honorable person grows infected with ludicrous ideas about a holy book and a waiting paradise, and then becomes capable of murdering innocent people—even children—while in a state of religious ecstasy. Needless to say, this problem is rendered all the more terrifying and depressing because so many of us deny that it even exists.
To imagine that one is a holy warrior bound for Paradise might seem delusional, but we live in a world where perfectly sane people are led to believe such floridly crazy things in the name of religion. This is primarily a social and cultural issue, not a psychological one. There is no clear line between what members of the Taliban, al Qaeda, and al Shabab believe about Islam and the “true” Islam. In fact, these groups have as good a claim as any to being impeccable Muslims. This presents an enormous threat to civil society, which apologists for Islam and secular liberals can now be counted upon to obfuscate. A tsunami of stupidity and violence is breaking simultaneously on a hundred shores, and people like Karen Armstrong, Reza Aslan, Juan Cole, John Esposito, and Glenn Greenwald insist that it’s a beautiful day at the beach. Their determination that “moderate” Islam not be blamed for the acts of “extremists” causes them to deny that genuine (and theologically justifiable) religious beliefs can inspire psychologically normal people to commit horrific acts of violence.
For weeks after the Boston Marathon bombing, we seemed determined to remain confused about the motives of the perpetrators. Had they been “radicalized” by some nefarious person, or did they manage it themselves? Did Tamerlan, the older brother, have brain damage from boxing? Were his dreams dashed by our immigration laws? Experts on terrorism took to the airwaves and gave their analysis: These young men behaved as they did, not on account of Islam, but because they were “jerks” and “losers.”
Or was it just politics, with religion as a pretext? The New York Times reported that the Tsarnaev brothers were “motivated to strike against the United States partly because of its military actions in Iraq and Afghanistan.” Many people seized on this as proof that U.S. foreign policy was to blame. And yet the only plausible way that Chechens coming of age in America could want to murder innocent people in protest over the wars in Iraq and Afghanistan would be for them to accept the Islamic doctrine of jihad. Islam is under attack and it must be defended; infidels have invaded Muslim lands—these grievances are not political. They are religious.
The same obscurantism arose in response to the Woolwich murder—when two jihadists butchered a man on a London sidewalk while shouting “Allahu akbar!” Their actions were repeatedly described as “political”—and the role of Islam in their thinking was reflexively discounted. Why political? Because one of the murderers spoke of British troops in Afghanistan and Iraq invading “our lands” and abusing “our women.” Few seemed to wonder how a Londoner of Nigerian descent could feel possessive about Afghan and Iraqi lands and women. There is only one path through the wilderness of bad ideas that reaches such “political” concerns: Islam.
Take a moment to consider the actions of the Taliban gunman who shot Malala Yousafzai in the head. How is it that this man came to board a school bus with the intention of murdering a 15-year-old girl? Absent ideology, this could have only been the work of a psychotic or a psychopath. Given the requisite beliefs, however, an entire culture will support such evil. Malala is the best thing to come out of the Muslim world in a thousand years. She is an extraordinarily brave and eloquent girl who is doing what millions of Muslim men and women are too terrified to do—stand up to the misogyny of traditional Islam. No doubt the assassin who tried to kill her believed that he was doing God’s work. He was probably a perfectly normal man—perhaps even a father himself—and that is what is so disturbing. In response to Malala’s nomination for the Nobel Peace Prize, a Taliban spokesman had this to say:
Malala Yousafzai targeted and criticized Islam. She was against Islam and we tried to kill her, and if we get a chance again we will definitely try to kill her, and we will feel proud killing her.
The fact that otherwise normal people can be infected by destructive religious beliefs is crucial to understand—because beliefs spread. Until moderate Muslims and secular liberals stop misplacing the blame for this evil, they will remain part of the problem. Yes, our drone strikes in Pakistan kill innocent people—and this undoubtedly creates new enemies for the West. But we wouldn’t need to drop a single bomb on Pakistan, or anywhere else, if a death cult of devout Muslims weren’t making life miserable for millions of innocent people and posing an unacceptable threat of violence to open societies.
Malala did not win a Nobel prize this week, and it is probably good for her that she didn’t. She absolutely deserved it—far more than several recent recipients have—but this recognition would have made her security concerns even more excruciating than they probably are already. Her nomination is said to have noticeably increased anti-Western sentiment in Pakistan—a fact that deserves some honest reflection on the part of Islam’s apologists. If for nothing else, we can be grateful to the Taliban for reminding us of what so many civilized people seem eager to forget: This is both a war of ideas and a very bloody war—and we must win it.
I wrote an article on meditation two years ago, and since then many readers have asked for further guidance on how to practice. As I said in my original post, I generally recommend a method called vipassana in which one cultivates a form of attention widely known as “mindfulness.” There is nothing spooky or irrational about mindfulness, and the literature on its psychological benefits is now substantial. Mindfulness is simply a state of clear, nonjudgmental, and nondiscursive attention to the contents of consciousness, whether pleasant or unpleasant. Developing this quality of mind has been shown to reduce pain, anxiety, and depression; improve cognitive function; and even produce changes in gray matter density in regions of the brain related to learning and memory, emotional regulation, and self-awareness. I will cover the relevant philosophy and science in my next book Waking Up: A Guide to Spirituality Without Religion, but in the meantime, I have produced two guided meditations (9 minutes and 26 minutes) for those of you who would like to get started with the practice. Please feel free to share them.
Sean B. Carroll is the author of Remarkable Creatures, a finalist for the National Book Award; The Making of the Fittest, winner of the Phi Beta Kappa Science Book Award; and Endless Forms Most Beautiful. Carroll also writes a monthly feature, “Remarkable Creatures,” for the New York Times’ Science Times. An internationally known scientist and leading educator, Dr. Carroll currently heads the Department of Science Education of the Howard Hughes Medical Institute and is Professor of Molecular Biology and Genetics at the University of Wisconsin. His new book is Brave Genius: A Scientist, a Philosopher, and Their Daring Adventures from the French Resistance to the Nobel Prize.
From the Publisher:
“I have known only one true genius: Jacques Monod,” claimed Albert Camus. Known to biologists for his Nobel Prize–winning, pioneering genetic research, Monod is credited with some of the most creative and influential ideas in modern biology. But while a few texts mention in passing that Monod was “in the Resistance” and “friends with Albert Camus,” none have examined the impact of the chaos of war on his work, nor the camaraderie between these two extraordinary men—until now. In Brave Genius: A Scientist, a Philosopher, and Their Daring Adventures from the French Resistance to the Nobel Prize, leading evolutionary biologist and National Book Award finalist Sean B. Carroll draws on a wealth of previously unknown and unpublished material to tell the dramatic and inspiring story of the lives of two men who triumphed over overwhelming adversity to pursue the meaning of existence on every level from the molecular to the philosophical.
Sean was kind enough to answer a few questions about his new book.
How did you discover the story of Camus and Monod?
A couple of previous authors mentioned briefly that they were friends, but offered nothing more. It seemed obvious to me from Monod’s writing that Camus had influenced him a great deal, so I wanted to know how well they knew each other and for how long. The first sign that I might make some headway was a letter from Camus to Monod that a family member shared with me. The letter was very warm, and encouraged me to keep looking. That led to another letter, and then to some anecdotes from people who had been together with Camus and Monod. I was also able to unearth some interviews in which Monod discussed Camus. My favorite jackpot came when I asked Monod’s son Olivier whether his father’s Camus books were inscribed. He took a look. They were all personalized with revealing phrases, and out of one dropped another letter… in which Camus was asking for medical advice concerning his mistress’s father!
Eventually, I was able to determine that their friendship spanned more than a decade. More importantly, I was able to place their relationship in the context of other pivotal events in each man’s life. I was even able to determine from some unpublished notes that Camus lifted some of his arguments in his anti-Soviet essay The Rebel directly from Monod. That was very satisfying detective work.
Did the two men view politics in the same way?
Definitely. After the experience of the Occupation of France (during which they both had significant roles in the Resistance), they each felt an obligation to speak out against injustice and oppression. For example, they both publicly condemned the Soviet regime and its French supporters. They both strongly supported the Hungarian revolution – Camus in print, Monod by smuggling scientists out of the country. They also both opposed the death penalty, and that movement eventually succeeded in getting the death penalty abolished in the EU.
Importantly, Camus’s friendship with Monod was blossoming at the same time that Camus’s friendship with Sartre was imploding, and for the same reason – Camus’s unequivocal condemnation of the Soviet Union as a totalitarian state ruled by a delusional dictator.
What was Camus’s position on the French presence in Algeria?
Camus was neither on the side of the Algerian militants seeking independence from France, nor on the side of the French government that fought the militants, and often imposed very harsh punishments, including the death penalty. Camus thought that there should be some middle ground of greater autonomy for Algeria, but also that the country, so long a part of France and settled by many French, should maintain a connection to France. Criticized by both sides, Camus decided to become silent on the matter publicly, and to work behind the scenes.
What did Monod and Camus have in common intellectually?
They shared the drive to push logic as far as one could. Camus did so in The Myth of Sisyphus, Monod in his scientific life and in his book Chance and Necessity. Both men were convinced that this life is the only one we get. Each was keen to pursue the implications of that conclusion.
Monod was influenced a great deal by Camus. The epigraph of Chance and Necessity was the closing paragraphs of The Myth of Sisyphus. Monod thought that Camus had provided the key to living with the knowledge of our finite lifetimes: “The struggle itself toward the heights is enough to fill a man’s heart.”
How did Camus and Monod view the relationship, or conflict, between religion and science?
Camus looked at religion from a philosophical perspective rather than a scientific one. He considered any appeal to religious faith “philosophical suicide,” an abandonment of reason. He thought that the principal issue was how to live this life fully, the only one we are sure to have. Camus’s recipe for living life to the fullest was to do nothing in the hope of an afterlife, but to rely on courage and reason.
In his book Chance and Necessity, Monod expounded at length on the conflict between science and religion. He saw religion as a collection of primitive myths that had been blown to shreds by science. After his experiences in the war and with Soviet ideology, Monod declared that his life’s goal was “a crusade against antiscientific, religious metaphysics, whether it be from Church or State.”
At every turn, Monod emphasized the role of chance in human existence, an idea that is antithetical to essentially every religious doctrine that places humans as some inevitable intention of a Creator. “Man was the product of an incalculable number of fortuitous events,” Monod argued, “the result of a huge Monte-Carlo game, where our number eventually did come out, when it might well not have appeared.”
Bolstered by the new evidence from molecular biology of a universal genetic code and the random nature of the mutation process, Monod delivered the bad news for every religion that asserts some form of Design:
Chance alone is at the source of every innovation, of all creation in the biosphere…this central concept of modern biology is no longer one among other possible hypotheses…it is the only one that squares with observed and tested fact. And nothing warrants the… hope that on this score our position is likely ever to be revised. There is no scientific concept, in any of the sciences, more destructive of anthropocentrism than this one.
I am very happy to announce that my wife and editor, Annaka Harris, has published her first book. The purpose of I Wonder is to teach very young children (and their parents) to cherish the feeling of “not knowing” as the basis of all discovery. In a world riven by false certainties, I can think of no more important lesson to impart to the next generation.
Advance Praise for I Wonder:
“I Wonder offers crucial lessons in emotional intelligence, starting with being secure in the face of uncertainty. Annaka Harris has woven a beautiful tapestry of art, storytelling, and profound wisdom. Any young child—and parent—will benefit from sharing this wondrous book together.”
—Daniel Goleman, author of the #1 bestseller Emotional Intelligence
“What an enchanting children’s book – beautiful to look at, charming to read, and with a theme that wonderers of all ages should appreciate.”
—Steven Pinker, Professor of Psychology, Harvard University, and author of How the Mind Works
“I Wonder captures the beauty of life and the mystery of our world, sweeping child and adult into a powerful journey of discovery. This is a book for children of all ages that will nurture a lifelong love of learning. Magnificent!”
—Daniel Siegel, author of Mindsight and The Whole-Brain Child
“I Wonder is a delightful book that explores and encourages the playful beginnings of wonder and a joyful appreciation of natural mystery.”
—Eric Litwin, author of the #1 New York Times bestselling children’s book, Pete the Cat: I Love My White Shoes
“This marvelous book will successfully sustain and stimulate your child’s natural sense of curiosity and wonder about this mysterious world we live in.”
—V.S. Ramachandran, author of The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human
“I Wonder is a reminder to parents and their children that mysteries are a gift and that curiosity and wonderment are the treasures of a childlike mind.”
—Janna Levin, Professor of Physics and Astronomy, Columbia University, and author of How The Universe Got Its Spots
“I Wonder teaches the very young that we should marvel at the mysteries of the universe and not be afraid of them. Our world would be a lot better if every human understood this. Start with your own children and this book.”
—Jeff Hawkins, founder of Palm, Handspring, and the Redwood Neuroscience Institute, and author of On Intelligence
It has been nearly three years since The Moral Landscape was first published in English, and in that time it has been attacked by readers and nonreaders alike. Many seem to have judged from the resulting cacophony that the book’s central thesis was easily refuted. However, I have yet to encounter a substantial criticism that I feel was not adequately answered in the book itself (and in subsequent talks).
So I would like to issue a public challenge. Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.) The best response will be published on this website, and its author will receive $2,000. If any essay actually persuades me, however, its author will receive $20,000,* and I will publicly recant my view.
Submissions will be accepted here the week of February 2-9, 2014.
*Note 9/1/13: The original prize was $1,000 for the winning essay and $10,000 for changing my view, but a generous reader has made a matching pledge.
1. You have said that these essays must attack the “central argument” of your book. What do you consider that to be?
Here it is: Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.
You might want to read what I’ve already written in response to a few critics. (A version of that article became the Afterword to the paperback edition of The Moral Landscape.) I also recommend that you watch the talk I linked to above.
2. Can you give some guidance as to what you would consider a proper demolition of your thesis?
If you show (1) that my “worst possible misery for everyone” argument fails, or (2) that other branches of science are self-justifying in a way that a science of morality could never be, or (3) that my analogy to a landscape of multiple peaks and valleys is fatally flawed, or (4) that the fact/value distinction holds in a way that I haven’t yet understood, you stand a very good chance of torpedoing my argument and changing my mind.
3. What sort of criticism is likely to be ineffective?
You will not win this prize if you attack views I don’t actually hold—which you will probably do if you fail to notice the distinction I make between answers in practice and answers in principle, or if you narrowly define science to mean finding the former while wearing a white lab coat, or if you imagine me to be saying that scientists are more moral than farmers and bricklayers, or if, like the philosopher Patricia Churchland, you manage to do all those things with an air of scornful pomposity appropriate to a Monty Python routine.
4. How can a person be expected to refute a book in 1,000 words or less?
If my core arguments are as misguided as many people apparently believe, it should be easy. However, I have imposed this word limit merely to make the job of vetting the entries manageable. Assuming that the winning essay is a good one, it will most likely serve as an opening statement in a longer exchange. I will give the winning author every reasonable opportunity to persuade me and claim the larger prize.
5. Perhaps I’m being too cynical, but I don’t think anyone will win any money here.
Well, assuming that I receive a single publishable essay, someone is bound to win $2,000. So, yes, you are being too cynical.
6. What do you hope will come from this contest?
I hope to receive at least one essay that presents a very serious challenge to my view. And then I hope to answer that challenge successfully, in a responding essay, or in a written exchange with the author. The second-best case (from my perspective) would be to confront a criticism that I can’t answer, but which I recognize as fatal to my thesis. I would then concede defeat and pay the author the tenfold prize. More generally, I hope to inspire readers—especially students—to debate these ideas.
7. Shouldn’t you provide “official rules” for this contest so that some lunatic who writes an essay channeling Aleister Crowley can’t sue you for not recognizing his brilliance?
Good point. Here are the official rules.
8. Why won’t you accept any submissions before February 2, 2014?
A few reasons: I want to encourage people to do quality work; I don’t want to receive multiple drafts; and I won’t have time to read any submissions until then.
9. With you as the judge, how can we trust that the best attack on your thesis will see the light of day?
Having fielded several accusations that this contest will be rigged—if not by design, then by my own ignorance and bias—I reached out to the philosopher Russell Blackford for help. Russell is among the most energetic critics of The Moral Landscape, and I am very happy to say that he has agreed to judge the submissions, introduce the winning essay, and evaluate my response.
Of course, only I can say whether I find the winning essay persuasive enough to trigger a change in my position (and the larger prize). But if I’m not persuaded, I’ll have to explain why, and Russell will be there to see that I do so without dodging any important points.
10. Why are you issuing this challenge? Science isn’t advanced by contests. Isn’t there something tawdry about offering money for someone to critique your work? This looks like nothing more than a publicity stunt designed to sell books. If you were serious about engaging with your philosophical critics, you would submit your work to an academic journal of philosophy.
I disagree on all counts. Contests are fun and motivating. And, the truth is, I don’t expect to sell many books by issuing this challenge (certainly not enough to cover the grand prize, should the winning essay change my view). I have also given essayists all the resources they need (a lecture and an article) to attack my position. This isn’t about selling books; it’s about engaging with readers—especially those who remain critical of my position on moral truth.
As for the claim that philosophical debates are best pursued in academic journals, I think you are mistaken. More people will read the winning essay on my blog than are likely to read any paper published in an academic journal next year—or any year thereafter. This is not a boast about how much traffic my blog gets, merely a statement about how few people read academic journals. Ideas matter—and philosophy is the art of thinking about them rigorously. In my view, that should be done in as public a forum as possible.