Date: Sunday, 13 Apr 2014 04:42

(Photo via h.koppdelaney)

Dan Harris is a co-anchor of Nightline and the weekend edition of Good Morning America on ABC News. He has reported from all over the world, covering wars in Afghanistan, Israel/Palestine, and Iraq, and producing investigative reports in Haiti, Cambodia, and the Congo. He has also spent many years covering religion in America, despite the fact that he is agnostic.

Dan’s new book, 10 Percent Happier: How I Tamed the Voice in My Head, Reduced Stress Without Losing My Edge, and Found Self-Help That Actually Works—A True Story, hit #1 on the New York Times best-seller list.

Dan was kind enough to discuss the practice of meditation with me for this page.

*  *  *


Sam: One thing I love about your book—admittedly, somewhat selfishly—is that it’s exactly the book I would want people to read before Waking Up comes out in the fall. You approach the topic of meditation with serious skepticism—which, as you know, is an attitude that my readers share to an unusual degree. Perhaps you can say something about this. How did you view the practice in the beginning?

Dan: I was incredibly skeptical about meditation. I thought it was for people who lived in yurts or collected crystals or had too many Cat Stevens records. And I was bred for this kind of doubt. My parents are both physicians and scientists at academic hospitals in the Boston area, and my wife is also a scientist and a physician. I was raised in a very secular environment. I had a Bar Mitzvah, but that was mostly because I wanted the money and the social acceptance. My parents were also recovering hippies who made me go to a yoga class when I was a little kid. The teacher didn’t like the jeans I was wearing, so she forced me to take them off and do Sun Salutations in my tighty-whities in front of all the other kids.

Sam: Rarely has the connection between yoga and child abuse been illustrated so clearly.

Dan: No doubt. And the result was that not only was I skeptical about anything bordering on the metaphysical, which I assumed meditation involved, but I had a long-standing aversion to anything touchy-feely or New Agey. Meditation seemed like the quintessence of everything I was most wary of.

Sam: For those who are unfamiliar with meditation—in particular, the practice of mindfulness that we are discussing—I have described it in a previous article on my blog and also posted some guided meditations that many people have found helpful. But, in essence, we are talking about the practice of paying very careful, non-judgmental attention to the contents of consciousness in the present moment. Usually one begins by focusing on the sensation of breathing, but eventually the practice opens to include the full field of experience—other sensations in the body, sounds, emotions, even thoughts themselves. The trick, however, is not to spend one’s time lost in thought.

How did you get started practicing mindfulness, and what was your first experience like?

Dan: Well, the thing that got me to open my mind just a crack was hearing about the science. I think that’s true for a lot of people who have given it a try of late. You hear about the science that says it can do some pretty extraordinary things to your brain and your body: lowering your blood pressure, boosting your immune system, thickening the gray matter in parts of the brain that have to do with self-awareness and compassion, and decreasing the gray matter in the areas associated with stress. That’s all really compelling. I work out because I want to take care of my health, and meditation seemed like it could fall in the same bucket. But my first taste of it was miserable. I set an alarm for five minutes and had a full-on collision with the zoo that is my mind. It was really hard.

Sam: People who haven’t tried to meditate have very little sense that their minds are noisy at all. And when you tell them that they’re thinking every second of the day, it generally doesn’t mean anything to them. It certainly doesn’t strike most of them as pathological. When these people try to meditate, they have one of two reactions: Some are so restless and besieged by doubts that they can hardly attempt the exercise. “What am I doing sitting here with my eyes closed? What is the point of paying attention to the breath?” And, strangely, their resistance isn’t remotely interesting to them. They come away, after only a few minutes, thinking that the act of paying close attention to their experience is pointless.

But then there are the people who have an epiphany similar to yours, where the unpleasant realization that their minds are lurching all over the place becomes a goad to further inquiry. Their inability to pay sustained attention—to anything—becomes interesting to them. And they recognize it as pathological, despite the fact that almost everyone is in the same condition.

Dan: I love your description. Interestingly enough, the door had opened for me before I tried meditation, in the most unexpected way. One of my assignments at ABC News had been to cover basic spirituality. So I had picked up a book by a self-help guru by the name of Eckhart Tolle, who has sold millions of books and is beloved by Oprah. I had read his book not because I thought it would be personally useful to me but because I was considering doing a story on him. Nestled within all his grandiloquent writing and pseudoscientific claims—and just overall weirdness—was a diagnosis of the human condition, which you just articulated quite well, that kind of blew my mind.

It’s this thunderous truism: We all know on some level that we are thinking all the time, that we have this voice in our heads, and the nature of this voice is mostly negative. It’s also repetitive and ceaselessly self-referential. We walk around in this fog of memory about the past and anticipation of a future that may or may not arrive in the form in which we imagine it. This observation seemed to describe me. I realized that the things I’d done in my life that I was most ashamed of had been the result of thoughts, impulses, urges, and emotions that I didn’t have the wherewithal to resist. So when I sat down and had that first confrontation with the voice in my head, I knew from having read Eckhart Tolle that it wasn’t going to be pretty, and I was motivated to do something about it.

Sam: Why didn’t you just become a student of Tolle’s?

Dan: I think that Eckhart Tolle is correct, but not useful. I’m stealing that distinction from the meditation teacher Sharon Salzberg. I think his diagnosis is correct, but he doesn’t give you anything to do about it, at least that I could ascertain. He has sold millions of books about “spiritual awakening.” If he were truly useful, we should have a reasonable population of awakened people walking around, and I’m just not seeing them. I found Tolle to be both extraordinarily interesting and extraordinarily frustrating. The lack of any concrete advice was really the source of my frustration, alongside the aforementioned weirdness. I think Tolle deserves credit for articulating a truth of the human condition extremely well. But I also think that it’s a legitimate criticism to say he doesn’t give you anything to do about it.

Sam: It’s interesting that you mention Tolle, because when someone asks me for the two-second summary of my new book, I’m often tempted to say, “It’s Eckhart Tolle for smart people”—that is, people who suspect that something important can be discovered about consciousness through introspection, but who are allergic to the pseudoscience and irrationality that generally creep into every New Age discussion of this truth. I haven’t read much of Tolle, but I suspect that I largely agree with his view of the subjective insights that come once we recognize the nature of consciousness prior to thought. The self that we all think we have riding around inside our heads is an illusion—and one that can disappear when examined closely. What’s more, we’re much better off psychologically when it does. But from the little reading I’ve done of Tolle, I can see that he also makes some embarrassing claims about the nature of the cosmos—claims that are unjustified both scientifically and philosophically.

However, in the man’s defense, this lack of usefulness you mention is not unique to him. It’s hard to talk about the illusoriness of the self or the non-dual nature of consciousness in a way that makes sense to people.

Dan: You know, I’ve read a little bit about non-duality, but I still don’t fully understand the distinction you’re making. I know you’re supposed to be interviewing me, but I would love to hear more about this from you. I’ve wanted to ask you this question for a long time. What is the non-dual critique of gradual approaches like mindfulness?

Sam: I think the best way to communicate this is by analogy. Everyone has had the experience of looking through a window and suddenly catching sight of his own reflection staring back at him from the glass. At that point, he can use the glass as a window, to see the world outside, or as a mirror, but he can’t do both at the same time.

Sometimes your reflection in the glass is pretty subtle, and you could easily stand there for ten minutes, looking outside while staring right through the image of your own face without seeing it.

For the purposes of this analogy, imagine that the goal of meditation is to see your own reflection clearly in each moment. Most spiritual traditions don’t realize that this can be done directly, and they articulate their paths of practice in ways that suggest that if you only paid more attention to everything beyond the glass—trees, sky, traffic—eventually your face would come into view. Looking out the window is arguably better than closing your eyes or leaving the room entirely—at least you are facing in the right direction—but the practice is based on a fundamental misunderstanding. You don’t realize that you are looking through the very thing you are trying to find in every moment. Given better information, you could just walk up to the window and see your face in the first instant.

The same is true for the illusoriness of the self. Consciousness is already free of the feeling that we call “I.” However, a person must change his plane of focus to realize this. Some practices can facilitate this shift in awareness, but there is no truly gradual path that leads there. Many longtime meditators seem completely unaware that these two planes of focus exist, and they spend their lives looking out the window, as it were. I used to be one of them. I’d stay on retreat for a few weeks or months at a time, being mindful of the breath and other sense objects, thinking that if I just got closer to the raw data of experience, a breakthrough would occur. Occasionally, a breakthrough did occur: In a moment of seeing, for instance, there would be pure seeing, and consciousness would appear momentarily free of any feeling to which the notion of a “self” could be attached. But then the experience would fade, and I couldn’t get back there at will. There was nothing to do but return to meditating dualistically on contents of consciousness, with self-transcendence as a distant goal.

However, from the non-dual side, ordinary consciousness—the very awareness that you and I are experiencing in this conversation—is already free of self. And this can be pointed out directly, and recognized again and again, as one’s only form of practice. So gradual approaches are, almost by definition, misleading. And yet this is where everyone starts.

In criticizing this kind of practice, someone like Eckhart Tolle is echoing the non-dualistic teachings one finds in traditions such as Advaita Vedanta, Zen (sometimes), and Dzogchen. Many of these teachings can sound paradoxical: You can’t get there from here. The self that you think you are isn’t going to meditate itself into a new condition. This is true, but as Sharon says, it’s not always useful. The path is too steep.

Of course, this non-dual teaching, too, can be misleading—because even after one recognizes the intrinsic selflessness of consciousness, one still has to practice that recognition. So there is a point to meditation after all—but it isn’t a goal-oriented one. In each moment of real meditation, the self is already transcended.

Dan: So should I stop doing my mindfulness meditation?

Sam: Not at all. Though I think you could be well served if you ever had the opportunity to study the Tibetan Buddhist practice of Dzogchen.

Dan: Joseph Goldstein, who’s a friend to both of us, recently put out this supplement to daily practice where he says, “Listen to all the sounds that arise in your consciousness and then try to find who or what is hearing them.” I find that when I do that, I’m directed into a space completely different from the one I arrive at when I’m sitting there watching my breath. I’m wondering if that is the kind of shift in attention you’re talking about. Is that what you would recommend as a way to bridge the gap you’ve just described?

Sam: Yes. Looking for the mind, or the thinker, or the one who is looking, is often taught as a preliminary exercise in Dzogchen, and it gets your attention pointed in the right direction. It’s different from focusing on the sensation of breathing. You’re simply turning attention upon itself—and this can provoke the insight I’m talking about. It’s possible to look for the one who is looking and to find, conclusively, that no one is there to be found.

People who have done a lot of meditation practice, who know what it’s like to concentrate deeply on an object like the breath, often develop a misconception that the truth is somewhere deep within. But non-duality is not deep. It’s right on the surface. This is another way the window analogy works well: Your reflection is not far away. You just need to know where to look for it. It’s not a matter of going deeper and deeper into subtlety until your face finally reveals itself. It is literally right before your eyes in every moment. When you turn attention upon itself and look for the thinker of your thoughts, the absence of any center to consciousness can be glimpsed immediately. It can’t be found by going deeper. To go deep—into the breath or any other phenomenon you can notice—is to start looking out the window at the trees.

The trick is to become sensitive to what consciousness is like the instant you try to turn it upon itself. In that first instant, there’s a gap between thoughts that can grow wider and become more salient. The more it opens, the more you can notice the character of consciousness prior to thought. This is true whether it’s ordinary consciousness—you standing bleary-eyed in line at Starbucks—or you’re in the middle of a three-month retreat and your body feels like it’s made of light. It simply doesn’t matter what the contents of consciousness are. The self is an illusion in any case.

It’s also useful to do this practice with your eyes open, because vision seems to anchor the feeling of subject/object duality more than any other sense. Most of us feel quite strongly that we are behind our eyes, looking out at a world that is over there. But the truth—subjectively speaking; I’m not making a claim about physics—is that everything is just appearing in consciousness. Losing the sense of subject/object duality with your eyes open can be the most vivid way to experience this shift in perception. That’s why Dzogchen practitioners tend to meditate with their eyes open.

Dan: So I would look at something and ask myself who is seeing it?

Sam: Yes—but it’s not a matter of verbally asking yourself the question. The crucial gesture is to attempt to turn attention upon itself and notice what changes in that first instant. Again, it’s not a matter of going deep within. You don’t have to work up to this thing. It’s a matter of looking for the looker and in that first moment noticing what consciousness is like. Once you notice that it is wide open and unencumbered by the feeling of self, that very insight becomes the basis of your mindfulness.

Dan: The way you describe it, it’s a practice. I get it. Tolle and the other non-dual thinkers I’ve heard talk aren’t telling us what to do. You’re actually giving me something clear and easy to understand. I think you could use that as a complement to and perhaps even a replacement for the mindfulness practice that stabilizes your attention and helps you recognize that you have an inner life worth focusing on in the first place.

Sam: That’s right. Mindfulness is necessary for any form of meditation. So there’s no contradiction. But there remains something paradoxical about non-dual teachings, because the thing you’re glimpsing is already true of consciousness. Consciousness is already without the sense of self.

Most people feel that the self is real and that they’re going to somehow unravel it—or, if it’s an illusion, it is one that requires a protracted process of meditation to dispel. One gets the sense in every dualistic approach that there’s nothing to notice in the beginning but the evidence of one’s own unenlightenment. Your mind is a mess that must be cleaned up. You’re at the base of the mountain, and there’s nothing to do but schlep to the top.

The non-dual truth is that consciousness is already free of this thing we think we have in our heads—the ego, the thinker of thoughts, the grumpy homunculus. And the intrinsic selflessness of consciousness can be recognized, right now, before you make any effort to be free of the self through goal-oriented practice. Once you have recognized the way consciousness already is, there is still practice to do, but it’s not the same as just logging your miles of mindfulness on the breath or any other object of perception.

Dan: I appreciate what you’re saying, but it seems to present a communication challenge or PR problem. I think most people will buy the basic argument for mindfulness. We all know that we eat when we’re not hungry, check our email when we’re supposed to be listening to our kids, or lose our temper, and then we regret these things later. We all know that we’re yanked around by our emotions. So most people will readily see the value of having more self-awareness so that they can have more—for lack of a better term—emotional intelligence. However, I don’t know that it will be readily apparent to most people why it would be desirable to see the self as an illusion. I don’t even know that most people have considered the nature of the self at all, because I certainly hadn’t. So to ask them to take the further step of considering whether it is an illusion—that requires a lot of work to even wrap your head around. That seems to me to be one of the big issues for non-dualists.

Sam: I agree. It’s a more esoteric concern, almost by definition—but it’s a more fundamental one as well. It’s the distinction between teaching mindfulness in a clinical or self-help context—whether to the Marines, to enhance their performance, or as a form of stress reduction in a hospital or a psychotherapy practice—and going on silent retreat for months in the hope of recapitulating the insights of a great contemplative like the Buddha. Some people really want to get to the root of the problem. But most just want to feel better and achieve more in their lives. There’s nothing wrong with that—until one realizes that there is something wrong with it. The wolf never quite leaves the door.

Ultimately, no matter how much you improve your game, you still have a problem that seems to be structured around this feeling you call “I”—which, strangely, is not quite identical to this body of yours that is growing older and less reliable by the hour. You still feel that you are this always-ready-to-be-miserable center of consciousness that is perpetually driven to do things in the hope of feeling better.

And if you’re practicing mindfulness or some other form of meditation as a remedy for this discomfort, you are bound to approach it in the same dilemma-based way that you approach everything else in life. You’re out of shape, so you go to the gym. You feel a little run down, so you go to the doctor. You didn’t get enough sleep, so you drink an extra cup of coffee. We’re constantly bailing water in this way. Mindfulness becomes a very useful tool to help yourself feel better, but it isn’t fundamentally different from any of these other strategies when we use it that way.

For instance, many of us hate to be late and find ourselves rushing at various points in the day. This is a common pattern for me: I get uptight about being late, and I can feel the cortisol just dump into my bloodstream. It’s possible to practice mindfulness as a kind of remedy for this problem—to notice the feeling of stress dispassionately, and to disengage from one’s thoughts about it—but it is very hard to escape the sense that one is using mindfulness as an antidote and trying to meditate the unpleasant feelings away. Technically, it’s not true mindfulness at that point, but even when one is really balanced with one’s attention, there is still the feeling that one is patiently contemplating one’s own neurosis. It is another thing entirely to recognize that there is no self at the center of this storm in the first place.

The illusoriness of the self is potentially of great interest to everyone, because this false construct really is our most basic problem in every moment. But there is no question that this truth is harder to communicate than the benefits of simply being more self-aware, less reactive, more concentrated, and so forth.

Dan: This is exactly why my book is a great prologue to yours.

Sam: Absolutely. And you’ve written a book that I could never have written. I became interested in meditation relatively early in life. I was a skeptical person, but I was only 19, so I didn’t have all the reasons you had to be skeptical when you first approached the practice. Nor did I have a career, so I wasn’t coming from the same fascinating context in which you recognized that something was wrong with your approach to life. I think your book will be incredibly useful to people.

Can you say something about what it was like to go on retreat for the first time? What sort of resistance did you have? And what was it like to punch through it?

Dan: I blame the entire experience on you. It was largely your idea, and you got me into the retreat—which, to my surprise, was hard to get into. I had no idea that so many people wanted to sign up for ten days of no talking, vegetarian food, and 12 hours a day of meditation, which sounded like a perfect description of one of the inner circles of Dante’s Inferno to me.

As you can gather from the previous sentences, I did not look forward to the experience at all. However, I knew as a budding meditator that this was the next step to take. When we met backstage at the debate you and Michael Shermer did with Deepak Chopra and Jean Houston, which I moderated for Nightline, I realized for the first time that you were a meditator. You recommended that I go on this retreat, and it was almost as if I’d received a dare from a cool kid I admired. I felt like I really needed to do this. It was as horrible as I’d thought it would be for a couple of days. On day four or five I thought I might quit, but then I had a breakthrough.

Sam: Describe that breakthrough. What shifted?

Dan: As I say in the book, it felt as if I had been dragged by my head behind a motorboat for a few days, and then, all of a sudden, I got up on water skis. When you’re hauled kicking and screaming into the present moment, you arrive at an experience of the mind that is, at least for me, totally new. I could see very clearly the ferocious rapidity of the mind—how fast we’re hearing, seeing, smelling, feeling, wanting—and that this is our life. We are on the receiving end of this fire hose of mental noise. That glimpse ushered in the happiest 36 hours of my life. But, as the Buddha liked to point out, nothing lasts—and that did not last.

Sam: It’s amazing to realize for the first time that your life doesn’t get any better than your mind is: You might have wonderful friends, perfect health, a great career, and everything else you want, and you can still be miserable. The converse is also true: There are people who basically have nothing—who live in circumstances that you and I would do more or less anything to avoid—who are happier than we tend to be because of the character of their minds. Unfortunately, one glimpse of this truth is never enough. We have to be continually reminded of it.

Dan: This reminds me of the Buddhist concept of suffering. The term “suffering” has certain connotations in English and, as you know, it’s a poor translation of the original Pali term dukkha. The Buddhist concept describes the truth of our existence, which is that nothing is ever ultimately satisfying.

As you said, you can have great friends and live pretty high on the socioeconomic ladder—your life can be a long string of pleasurable meals, vacations, and encounters with books and interesting people—and, yes, you can still have what Eckhart Tolle describes as a background static of perpetual discontent. This is why we see rock stars with drug problems and lottery winners who kill themselves. There is something very powerful about that realization.

Sam: And this is why training the mind through meditation makes sense—because it’s the most direct way to influence the mechanics of your own experience. To remain unaware of this machinery—in particular, the automaticity of thought—is to simply be propelled by it into one situation after another in which you struggle to find lasting fulfillment amid conditions that can’t provide it.

Dan: What’s interesting is that so many people reflexively reject this—just as I would have five or six years ago—because of their misconceptions about meditation. I think there are two reasons why people don’t meditate. Either they think it’s complete baloney that involves wearing robes, lighting incense, and subscribing to some useless metaphysical program, or they accept the fact that it might be good for them, but they assume that they couldn’t do it because their minds are too busy. I refer to this second reason as “the fallacy of uniqueness.” If you think that your mind is somehow busier than everyone else’s—welcome to the human condition. Everyone’s mind is busy. Meditation is hard for everybody.

Sam: The first source of resistance you mentioned is especially prevalent among smart, skeptical people. And I’m a little worried that the way in which many of us respond to this doubt ultimately sells the whole enterprise short. For instance, consider the comparison people often make between meditation and physical exercise—in fact, you drew this analogy already. At first glance, it’s a good one, because nothing looks more ridiculous on its face than what most of us do for exercise. Take the practice of lifting weights: If you try to explain weightlifting to someone who has no understanding of fitness, the wisdom of repeatedly picking up heavy objects and putting them down again is very difficult to get across. And until you’ve actually succeeded at building some muscle, it feels wrong too. So it is easy to see why a naïve person would say, “Why on earth would I want to waste my time and energy doing that?” Of course, most people understand that lifting weights is one of the best things they can do if they want to retain muscle mass, protect their joints from injury, feel better, etc. It’s also extraordinarily satisfying, once a person gets into it.

Meditation presents a similar impasse at first. Everyone asks, “Why would I want to pay attention to my breath?” It seems like a shameful waste of time. So the analogy to exercise is inviting and probably useful, but it doesn’t quite get at what is so revolutionary about finally paying attention to the character of one’s own mental life in this way.

Truly learning to meditate is not like going to the gym and putting on some muscle because it’s good for you and makes you feel better. There’s more to it than that. Meditation—again, done correctly—puts into question more or less everything you tend to do in your search for happiness. But if you lose sight of this, it can become just another strategy for seeking happiness—a more refined version of the problem you already have.

Dan: I’m guilty of using the exercise analogy repeatedly. My feeling—and I think you’d agree with this—is that the analogy is good enough to get people in the door. It may be misleading, but I don’t think in a harmful way. Obviously, when done correctly, meditation is much more transformative than ordinary exercise, but you need to meet people where they are. I think that mindfulness, and perhaps even non-duality, has the potential to become the next public health revolution, or the spirituality of the future. In order for that to happen, you need to communicate with people in a way that they can understand. Not to keep whaling on Eckhart Tolle, but part of my problem with him is that I just don’t know that anybody actually understands what he’s saying, despite the fact that he has sold millions of books.

Sam: This raises the question of how to evaluate the results of a spiritual practice—and whether those results, however transformative they may be for someone, can be credible to others.

What constitutes evidence that there is a path to wisdom at all? From the outside, it’s very difficult to judge—because there are charismatic charlatans who are probably lying about everything, and there are seemingly ordinary people who have had quite profound experiences. From the inside, however, the evidence is clear; so each person has to run the experiment in the laboratory of his own mind to know that there’s anything to this.

The truth is that most of us are bound to appear like ordinary schmucks to others no matter how much we meditate. If you’re lost in thought, as you will be most of the time, you become the mere puppet of whatever those thoughts are. If you’re lost in worries about the future, you will seem to be an ordinary, anxious person—and the fact that you might be punctuating this experience with moments of mindfulness or moments of non-duality isn’t necessarily going to change the way you appear in the world. But internally, the difference can be huge. This gap between first-person and third-person data is a real impediment to communicating the significance of meditation practice to people who haven’t experienced it.

Dan: I agree, although, as we’ve already mentioned, there are some external manifestations that one can measure—changes in the brain, lowered blood pressure, boosted immune function, lowered cortisol, and so forth. People find these things compelling, and once they get in the door, they can experience the practice from the inside.

I would also say—and perhaps you were just getting into this—it’s hard to gauge whether some spiritual teachers are telling the truth. I’ve been privileged to meet many of these people, and I just go by my gut sense of whether they’re full of crap or not.

I have to say that with Eckhart Tolle, I did not get that feeling. I got the sense that he is for real. I don’t understand a lot of what he’s saying, but I didn’t feel that he was lying to himself or to me. Obviously this isn’t really data, but I found it personally convincing. To what end, I don’t know.

Sam: As distinct, say, from our friend with the rhinestone glasses…

Dan: Correct. I think I say in the book that I had no questions about whether Tolle was authentic, although I had many questions about whether he was sane. It was the reverse with Deepak Chopra.

Sam: Now I find myself in the unusual position of rising to Deepak’s defense—I think this happens once a decade, when the planets align just so. As I was saying before, a person like Deepak could have authentic and life-transforming experiences in meditation that nevertheless failed to smooth out the quirks in his personality. If he spends most of his time lost in thought, it will not be obvious to us that he enjoys those moments of real freedom. We will inevitably judge him by the silly things he says and the arrogance with which he says them.

But I’ve learned, as a result of my humbling encounters with my own mind, to charitably discount everyone else’s psychopathology. So if a spiritual teacher flies into a rage or even does something starkly unethical, that is not, from my point of view, proof that he or she is a total fraud. It’s just evidence that he or she is spending some significant amount of time lost in thought. But that’s to be expected of anybody who’s not “fully enlightened,” if such a rarefied state is even possible. I’m not saying that every guru is worth listening to—I think most aren’t, and some are genuinely dangerous. But many talented contemplatives can appear quite ordinary. And, unfortunately, cutting through the illusion of the self doesn’t guarantee that you won’t say something stupid at the next opportunity.

Dan: I fully agree with you. I enjoy picking on Deepak, but the truth is that I like the guy.

Sam: Let’s leave it there, Dan. It was great speaking with you, and I wish you continued success with your book.

Dan: Many thanks, Sam.

Startling, provocative, and often very funny . . . [10% HAPPIER] will convince even the most skeptical reader of meditation’s potential. (Gretchen Rubin, author of The Happiness Project)

10% HAPPIER is hands down the best book on meditation for the uninitiated, the skeptical, or the merely curious. . . . an insightful, engaging, and hilarious tour of the mind’s darker corners and what we can do to find a bit of peace. (Daniel Goleman, author of Emotional Intelligence and Focus)

The science supporting the health benefits of meditation continues to grow, as does the number of Americans who count themselves as practitioners, but it took reading 10% HAPPIER to make me actually want to give it a try. (Richard E. Besser, M.D., Chief Health and Medical Editor, ABC News)

An enormously smart, clear-eyed, brave-hearted, and quite personal look at the benefits of meditation that offers new insights as to how this ancient practice can help modern lives while avoiding the pitfall of cliché. This is a book that will help people, simply put. (Elizabeth Gilbert, author of Eat, Pray, Love)

This brilliant, humble, funny story shows how one man found a way through the non-stop stresses and demands of modern life and back to humanity by finally learning to sit around doing nothing. (Colin Beavan, author of No Impact Man)

In 10% Happier, Dan Harris describes in fascinating detail the stresses of working as a news correspondent and the relief he has found through the practice of meditation. This is an extremely brave, funny, and insightful book. Every ambitious person should read it. (Sam Harris, author of The End of Faith)

A compellingly honest, delightfully interesting, and at times heart-warming story of one highly intelligent man’s life-changing journey towards a deeper understanding of what makes us our very best selves. As Dan’s meditation practice deepens, I look forward to him being at least 11% happier, or more. (Chade-Meng Tan, author of Search Inside Yourself)

10% Happier is a spiritual adventure from a master storyteller. Mindfulness can make you happier. Read this to find out how. (George Stephanopoulos)

 

Author: "--" Tags: "Book News, Consciousness, Publishing, Me..."
Send by mail Print  Save  Delicious 
Date: Sunday, 06 Apr 2014 16:02

By Barbara Ehrenreich

Go to article

Date: Sunday, 06 Apr 2014 16:00

By Mark Oppenheimer

Go to article

Date: Sunday, 06 Apr 2014 15:59

By Ross Douthat

Go to article

Date: Sunday, 06 Apr 2014 15:56

By Ross Douthat

Go to article

Date: Tuesday, 01 Apr 2014 14:22

By Michael Shermer

Go to article

Date: Tuesday, 25 Mar 2014 18:13

(Photo via Bala Sivakumar)

I am often asked what will replace organized religion. The answer, I believe, is nothing and everything. Nothing need replace its ludicrous and divisive doctrines—such as the idea that Jesus will return to earth and hurl unbelievers into a lake of fire, or that death in defense of Islam is the highest good. These are terrifying and debasing fictions. But what about love, compassion, moral goodness, and self-transcendence? Many people still imagine that religion is the true repository of these virtues. To change this, we must begin to think about the full range of human experience in a way that is as free of dogma, cultural prejudice, and wishful thinking as the best science already is. That is the subject of my next book, Waking Up: A Guide to Spirituality Without Religion.

Authors who attempt to build a bridge between science and spirituality tend to make one of two mistakes: Scientists generally start with an impoverished view of spiritual experience, assuming that it must be a grandiose way of describing ordinary states of mind—parental love, artistic inspiration, awe at the beauty of the night sky. In this vein, one finds Einstein’s amazement at the intelligibility of Nature’s laws described as though it were a kind of mystical insight.

New Age thinkers usually enter the ditch on the other side of the road: They idealize altered states of consciousness and draw specious connections between subjective experience and the spookier theories at the frontiers of physics. Here we are told that the Buddha and other contemplatives anticipated modern cosmology or quantum mechanics and that by transcending the sense of self, a person can realize his identity with the One Mind that gave birth to the cosmos.

In the end, we are left to choose between pseudo-spirituality and pseudo-science.

Few scientists and philosophers have developed strong skills of introspection—in fact, many doubt that such abilities even exist. Conversely, many of the greatest contemplatives know nothing about science. I know brilliant scientists and philosophers who seem unable to make the most basic discriminations about their own moment-to-moment experience; and I have known contemplatives who spent decades meditating in silence who probably thought the earth was flat. And yet there is a connection between scientific fact and spiritual wisdom, and it is more direct than most people suppose.

I have been waiting for more than a decade to write Waking Up. Long before I saw any reason to criticize religion (The End of Faith, Letter to a Christian Nation), or to connect moral and scientific truths (The Moral Landscape, Free Will), I was interested in the nature of human consciousness and the possibility of spiritual experience. In Waking Up, I do my best to show that a certain form of spirituality is integral to understanding the nature of our minds. (For those of you who recoil at every use of the term “spirituality,” I recommend that you read a previous post.)

My goal in Waking Up is to help readers see the nature of their own minds in a new light. The book is by turns a seeker’s memoir, an introduction to the brain, a manual of contemplative instruction, and a philosophical unraveling of what most people consider to be the center of their inner lives: the feeling of self we call “I.” It is also my most personal book to date.

If you live in the U.S. or Canada, you can order a special hardcover edition of Waking Up through this website. This edition of the book will have the same text as the trade version, but it will be printed on nicer paper and have several other aesthetic enhancements. Simon and Schuster will be doing only one printing, and all orders must be placed by April 15th. Proceeds from the sale of the special edition of Waking Up will be used to develop an online course on the same topic.

Author: "--" Tags: "Announcements, Atheism, Book News, Consc..."
Send by mail Print  Save  Delicious 
Date: Monday, 10 Mar 2014 19:24

Peter Watson is an intellectual historian, journalist, and the author of thirteen books, including The German Genius, The Medici Conspiracy, and The Great Divide. He has written for The Sunday Times, The New York Times, the Observer, and the Spectator. He lives in London.

He was kind enough to answer a few questions about his new book The Age of Atheists: How We Have Sought to Live Since the Death of God.

*  *  *


1. You begin your account of atheism with the 19th-century German philosopher, Friedrich Nietzsche. Why is he a good starting point?

In 1882 Nietzsche declared, roundly, in strikingly clear language, that “God is dead”, adding that we had killed him. And this was a mere twenty years after Darwin’s Origin of Species, which is rightly understood as the greatest blow to Christianity. But Nietzsche’s work deserves recognition as a near-second. Darwinism was assimilated more quickly in Germany than in Britain, because the idea of evolution was especially prevalent there. Darwin remarks in one of his letters that his ideas had gone down better in Germany than anywhere else. And the history of the Kulturkampf in Germany – the battle between Protestantism and Catholicism – meant that religion was under attack anyway, by its own adherents. Other people responded to Nietzsche more than to anyone else – Ibsen, for example, W. B. Yeats, Robert Graves, James Joyce. In Germany there was the phenomenon of the Nietzschean generations – young people who lived his philosophy in specially created communities. And people responded to Nietzsche because his writing style was so pithy, to the point, memorable, and crystal clear. It is Nietzsche who tells us plainly, eloquently, that there is nothing external to, or higher than, life itself, no “beyond” or “above”, no transcendence and nothing metaphysical. This was dangerous thinking at the time, and it has remained threatening for many people.


2. You say at one point in your book that psychology, or perhaps therapy, has taken over from religion as a way to understand our predicament, and – to an extent – deal with it. Do you follow Freud in viewing religion as, essentially, a product of neurosis?

I do think there is sound anthropological evidence that the first “priests”, the shamans of Siberia, were probably psychological misfits or malcontents, and that throughout history we have gone on from there, because many well-known religious figures – some of the Hebrew prophets, John the Baptist, St. Paul, St. Augustine, Joan of Arc, Luther – were psychologically odd. Religion is not so much neurosis as psychological adjustment to our predicament – that’s the key: religion is to be understood psychologically, not theologically. It was George Carey, when he was archbishop of Canterbury, not me, who said “Jesus the Saviour is becoming Jesus the Counselor”. (This was in the 1990s.) And it was a well-known Boston rabbi, Joshua Loth Liebman, who, soon after the end of World War Two, wrote a best-selling book that admitted that traditional religion had been too harsh on ordinary believers and that the churches and the synagogues and the mosques had a great deal to learn from what he called the new depth psychology – he meant Freudianism. So the church invited the psychologists to put their tanks on its lawn, so to speak. And psychotherapy hasn’t looked back. More people go into therapy now as a search for meaning than for treatment for mental illness.


3. What do you conclude from this?

That worship, the religious impulse, is best understood as a sociological phenomenon, rather than a theological one. In your own books you point up some of the absurdities of religion, but the two I regard as most revealing are, first, the worship of a Royal Enfield motor-bicycle in a region of India – a bike involved in a crash in which its driver was killed but which is now reckoned to have supernatural powers – and, second, the Internet site godchecker.com, which lists – apparently without irony – more than 3,000 “supreme beings.” I wonder how many fact-checkers they have. (That last sentence is written in a new typeface I have invented, called Ironics.)

In their recent worldwide survey of religion and economics, Pippa Norris and Ronald Inglehart show convincingly that religion is expanding in those areas of the world where ‘existential insecurity’ – poverty, natural disasters, disease, inadequate water supplies, HIV/AIDS, the lack of decent health care – is endemic and growing, whereas in the more prosperous and secure West, now including the USA, atheism is inexorably on the rise. Religion is prevalent among the poor and in decline in the more prosperous parts of the world. It is less that religion is on the rise than that poverty is.


4. In your book you survey the views of a great number of people. How would you describe your own atheism?

We are gifted with language, and Nietzsche had a gift for language. I follow people like the German poet Rilke and the American philosopher Richard Rorty, who say that our way to find meaning in life is to use language to “name” the world, to describe new aspects of it that haven’t been described before, and in so doing enlarge the world we inhabit, enlarge it for everyone. This links science and the arts, in particular poetry. When new sciences are invented they bring with them new language, and scientific discoveries – continental drift, say, dendrochronology, the Higgs boson – enlarge our understanding precisely by incorporating new language. But so does the best art, the best poetry, the best theatre. This is therefore an exercise for the informed – increasingly the very well informed, as the more mature sciences are now more or less inaccessible to the layman. Language enables us both to be precise about the world and to generalize. As a result we know that life is made up of lots and lots of beautiful little phenomena, and that large abstractions, however beautiful in their own way, are not enough. There is no one secret to life, other than that there is no one secret to life. If you must have a transcendent idea, then make it a search for “the good” or “the beautiful” or “the useful”, always realizing that your answers will be personal, finite, and never final. The Anglo-American philosopher Alasdair MacIntyre said, “The good life is the life spent seeking the good life.” That implies effort. We can have no satisfaction, no meaning, without effort.


 

 

Author: "--" Tags: "Announcements, Atheism, Book News, Publi..."
Send by mail Print  Save  Delicious 
Date: Monday, 24 Feb 2014 00:56

By Paul Bloom

Go to article

Author: "--" Tags: "Interviews and Appearances, Print,"
Send by mail Print  Save  Delicious 
Date: Thursday, 20 Feb 2014 06:25

(Photo via Simon X)

My recent collision with Daniel Dennett on the topic of free will has caused me to reflect on how best to publicly resolve differences of opinion. In fact, this has been a recurring theme of late. In August, I launched the Moral Landscape Challenge, an essay contest in which I invited readers to attack my conception of moral truth. I received more than 400 entries, and I look forward to publishing the winning essay later this year. Not everyone gets the opportunity to put his views on the line like this, and it is an experience that I greatly value. I spend a lot of time trying to change people’s beliefs, but I’m also in the business of changing my own. And I don’t want to be wrong for a moment longer than I have to be.

In response to the Moral Landscape Challenge, the psychologist Jonathan Haidt issued a challenge of his own: He bet $10,000 that the winning essay will fail to persuade me. This wager seems in good fun, and I welcome it. But Haidt then justified his confidence by offering a pseudo-scientific appraisal of the limits of my intellectual honesty. He did this by conducting a keyword search of my books: The frequency of “certainty” terms, Haidt says, reveals that I (along with the other “New Atheists”) am even more blinkered by dogmatism and bias than Sean Hannity, Glenn Beck, and Ann Coulter. This charge might have been insulting if it weren’t so silly. It is almost impossible to believe that Haidt expects his “research” on this topic to be taken seriously. But apparently he does. In fact, he claims to be continuing it.

Consider the following two passages (the keywords expressing “certainty” include “obvious,” “undeniable,” “always,” “must,” “true,” and “every”):

It is obvious and undeniable—and must always be remembered—that the Bible was the product of merely human minds. As such, it cannot provide true answers to every factual question that will arise for us in the 21st century.

It is obvious and undeniable—and must always be remembered—that the Bible was the product of Divine Omniscience. As such, it provides true answers to every factual question that will arise for us in the 21st century.

According to Haidt’s methodology, these passages exhibit the same degree of dogmatism. I hope it won’t appear too expressive of certainty on my part to observe how terrifically stupid that conclusion is. If, as Haidt alleges, verbal reasoning is just a way for people to “guard their reputations,” one wonders why he doesn’t use it for that purpose.

Haidt is right to observe that anyone can be misled by his own biases. (He just has the unfortunate habit of writing as though no one else understands this.) I will also concede that I don’t tend to lack confidence in my published views (that is one of the reasons I publish them). After my announcement of the Moral Landscape Challenge, a few readers asked whether I’ve ever changed my mind about anything. I trust my wrangling with Dennett has only deepened this sense of my incorrigibility. Perhaps it is worth recalling more of Haidt’s adamantine wisdom on this point:

[T]he benefits of disconfirmation depend on social relationships. We engage with friends and colleagues, but we reject any critique from our enemies.

Well, then I must be a very hard case. I received a long and detailed criticism of my work from a friend, Dan Dennett, and found it totally unpersuasive. How closed must I be to the views of my enemies?

Enter Jeremy Scahill: I’ve never met Scahill, and I’m not aware of his having attacked me in print, so it might seem a little paranoid to categorize him as an “enemy.” But he recently partnered with Glenn Greenwald and Murtaza Hussain to launch The Intercept, a new website dedicated to “fearless, adversarial journalism.” Greenwald has worked very hard to make himself my enemy, and Hussain has worked harder still. Both men have shown themselves to be unprofessional and unscrupulous whenever their misrepresentations of my views have been pointed out. This is just to say that, while I don’t usually think of myself as having enemies, if I were going to pick someone to prove me wrong on an important topic, it probably wouldn’t be Jeremy Scahill. I am, in Haidt’s terms, highly motivated to reason in a “lawyerly” way so as not to give him the pleasure of changing my mind. But change it he has.

Generally, I have supported President Obama’s approach to waging our war against global jihadism, and I’ve always assumed that I would approve of his targets and methods were I privy to the same information he is.  I’ve also said publicly, on more than one occasion, that I thought our actions should be mostly covert. So the president’s campaign of targeted assassination has had my full support, and I lost no sleep over the killing of Anwar al-Awlaki. To me, the fact that he was an American citizen was immaterial.

I have also been very slow to worry about NSA eavesdropping. My ugly encounters with Greenwald may have colored my perception of this important story—but I just don’t know what I think about Edward Snowden. Is he a traitor or a hero? It still seems too soon to say. I don’t know enough about the secrets he has leaked or the consequences of his leaking them to have an opinion on that question.

However, last night I watched Scahill’s Oscar-nominated documentary Dirty Wars—twice. The film isn’t perfect. Despite the gravity of its subject matter, there is something slight about it, and its narrow focus on Scahill seems strangely self-regarding. At moments, I was left wondering whether important facts were being left out. But my primary experience in watching this film was of having my settled views about U.S. foreign policy suddenly and uncomfortably shifted. As a result, I no longer think about the prospects of our fighting an ongoing war on terror in quite the same way. In particular, I no longer believe that a mostly covert war makes strategic or moral sense. Among the costs of our current approach are a total lack of accountability, abuse of the press, collusion with tyrants and warlords, a failure to enlist allies, and an ongoing commitment to secrecy and deception that is corrosive to our politics and to our standing abroad.

Any response to terrorism seems likely to kill and injure innocent people, and such collateral damage will always produce some number of future enemies. But Dirty Wars made me think that the consequences of producing such casualties covertly are probably far worse. This may not sound like a Road to Damascus conversion, but it is actually quite significant. My view of specific questions has changed—for instance, I now believe that the assassination of al-Awlaki set a very dangerous precedent—and my general sense of our actions abroad has grown conflicted. I do not doubt that we need to spy, maintain state secrets, and sometimes engage in covert operations, but I now believe that the world is paying an unacceptable price for the degree to which we are doing these things. The details of how we have been waging our war on terror are appalling, and Scahill’s film paints a picture of callousness and ineptitude that shocked me. Having seen it, I am embarrassed to have been so trusting and complacent with respect to my government’s use of force.

Clearly, this won’t be the last time I’ll be obliged to change my mind. In fact, I’m sure of it. Some things one just knows because they are altogether obvious—and, well, undeniable. At least, one always denies them at one’s peril. So I remain committed to discovering my own biases. And whether they are blatant, or merely implicit, I will work extremely hard to correct them. I’m also confident that if I don’t do this, my readers will inevitably notice. It’s necessary that I proceed under an assurance of my own fallibility—never infallibility!—because it has proven itself to be entirely accurate, again and again. I’m certain this would remain true were I to live forever. Some things are just guaranteed. I think that self-doubt is wholly appropriate—essential, frankly—whenever one attempts to think precisely and factually about anything—or, indeed, about everything.  Being a renowned scientist, Jonathan Haidt must fundamentally agree. I urge him to complete his research on my dogmatism and cognitive closure at the soonest opportunity. The man has a gift—it is pure and distinct and positively beguiling. He mustn’t waste it.[1]


NOTES

  1. Haidt used the following keywords to conduct a searching, scientific analysis of “New Atheist” books: absolute, absolutely, accura*, all, altogether, always, apparent, assur*, blatant*, certain*, clear, clearly, commit, commitment*, commits, committ*, complete, completed, completely, completes, confidence, confident, confidently, correct*, defined, definite, definitely, definitive*, directly, distinct*, entire*, essential, ever, every, everybod*, everything*, evident*, exact*, explicit*, extremely, fact, facts, factual*, forever, frankly, fundamental, fundamentalis*, fundamentally, fundamentals, guarant*, implicit*, indeed, inevitab*, infallib*, invariab*, irrefu*, must, mustnt, must’nt, mustn’t, mustve, must’ve, necessar*, never, obvious*, perfect*, positiv*, precis*, proof, prove*, pure*, sure*, total, totally, true, truest, truly, truth*, unambigu*, undeniab*, undoubt*, unquestion*, wholly.
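A minimal sketch of the kind of counting this methodology implies, assuming only that a trailing * in the list above acts as a wildcard for any word ending. The snippet below is illustrative, not Haidt’s actual code: the function names and the sample passage are hypothetical, and only a handful of the keywords are included.

    import re

    # A handful of the "certainty" keywords from the note above; a trailing *
    # is treated as a wildcard for any word ending (so obvious* matches both
    # "obvious" and "obviously").
    KEYWORDS = ["always", "must", "true", "certain*", "obvious*", "undeniab*"]

    def compile_patterns(keywords):
        """Expand trailing * wildcards and match whole words only."""
        patterns = []
        for kw in keywords:
            body = re.escape(kw[:-1]) + r"\w*" if kw.endswith("*") else re.escape(kw)
            patterns.append(re.compile(r"\b" + body + r"\b", re.IGNORECASE))
        return patterns

    def certainty_rate(text, patterns):
        """Return keyword matches per 1,000 words, the sort of rate such studies report."""
        words = re.findall(r"\w+", text)
        hits = sum(len(p.findall(text)) for p in patterns)
        return 1000.0 * hits / max(len(words), 1)

    passage = ("It is obvious and undeniable--and must always be remembered--"
               "that such a count cannot distinguish assertion from irony.")
    print(round(certainty_rate(passage, compile_patterns(KEYWORDS)), 1))

By a count of this kind, the two Bible passages quoted above receive identical scores, which is precisely the objection being made: the tally registers the words, not what the writer is doing with them.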

 

Author: "--" Tags: "Atheism, Philosophy, Debates, Religion,"
Send by mail Print  Save  Delicious 
Date: Wednesday, 12 Feb 2014 22:27


(Photo via Max Boschini)

Dear Dan—

I’d like to thank you for taking the time to review Free Will at such length. Publicly engaging me on this topic is certainly preferable to grumbling in private. Your writing is admirably clear, as always, which worries me in this case, because we appear to disagree about a great many things, including the very nature of our disagreement.

I want to begin by reminding our readers—and myself—that exchanges like this aren’t necessarily pointless. Perhaps you need no encouragement on that front, but I’m afraid I do. In recent years, I have spent so much time debating scientists, philosophers, and other scholars that I’ve begun to doubt whether any smart person retains the ability to change his mind. This is one of the great scandals of intellectual life: The virtues of rational discourse are everywhere espoused, and yet witnessing someone relinquish a cherished opinion in real time is about as common as seeing a supernova explode overhead. The perpetual stalemate one encounters in public debates is annoying because it is so clearly the product of motivated reasoning, self-deception, and other failures of rationality—and yet we’ve grown to expect it on every topic, no matter how intelligent and well-intentioned the participants. I hope you and I don’t give our readers further cause for cynicism on this front.

Unfortunately, your review of my book doesn’t offer many reasons for optimism. It is a strange document—avuncular in places, but more generally sneering. I think it fair to say that one could watch an entire season of Downton Abbey on Ritalin and not detect a finer note of condescension than you manage for twenty pages running.

I am not being disingenuous when I say this museum of mistakes is valuable; I am grateful to Harris for saying, so boldly and clearly, what less outgoing scientists are thinking but keeping to themselves. I have always suspected that many who hold this hard determinist view are making these mistakes, but we mustn’t put words in people’s mouths, and now Harris has done us a great service by articulating the points explicitly, and the chorus of approval he has received from scientists goes a long way to confirming that they have been making these mistakes all along. Wolfgang Pauli’s famous dismissal of another physicist’s work as “not even wrong” reminds us of the value of crystallizing an ambient cloud of hunches into something that can be shown to be wrong. Correcting widespread misunderstanding is usually the work of many hands, and Harris has made a significant contribution.

I hope you will recognize that your beloved Rapoport’s rules have failed you here. If you have decided, according to the rule, to first mention something positive about the target of your criticism, it will not do to say that you admire him for the enormity of his errors and the recklessness with which he clings to them despite the sterling example you’ve set in your own work. Yes, you may assert, “I am not being disingenuous when I say this museum of mistakes is valuable,” but you are, in truth, being disingenuous. If that isn’t clear, permit me to spell it out just this once: You are asking the word “valuable” to pass as a token of praise, however faint. But according to you, my book is “valuable” for reasons that I should find embarrassing. If I valued it as you do, I should rue the day I wrote it (as you would, had you brought such “value” into the world). And it would be disingenuous of me not to notice how your prickliness and preening appear: You write as one protecting his academic turf. Behind and between almost every word of your essay—like some toxic background radiation—one detects an explosion of professorial vanity.

And yet many readers, along with several of our friends and colleagues, have praised us for airing our differences in so civil a fashion—the implication being that religious demagogues would have declared mutual fatwas and shed each other’s blood. Well, that is a pretty low bar, and I don’t think we should be congratulated for having cleared it. The truth is that you and I could have done a much better job—and produced something well worth reading—had we explored the topic of free will in a proper conversation. Whether we called it a “conversation” or a “debate” would have been immaterial. And, as you know, I urged you to engage me that way on multiple occasions and up to the eleventh hour. But you insisted upon writing your review. Perhaps you thought that I was hoping to spare myself a proper defenestration. Not so. I was hoping to spare our readers a feeling of boredom that surpasseth all understanding.

As I expected, our exchange will now be far less interesting or useful than a conversation/debate would have been. Trading 10,000-word essays is simply not the best way to get to the bottom of things. If I attempt to correct every faulty inference and misrepresentation in your review, the result will be deadly to read. Nor will you be able to correct my missteps, as you could have if we were exchanging 500-word volleys. I could heap misconception upon irrelevancy for pages—as you have done—and there would be no way to stop me. In the end, our readers will be left to reconcile a book-length catalogue of discrepancies.

Let me give you an example, just to illustrate how tedious it is to untie these knots. You quote me as saying:

If determinism is true, the future is set—and this includes all our future states of mind and our subsequent behavior. And to the extent that the law of cause and effect is subject to indeterminism—quantum or otherwise—we can take no credit for what happens.  There is no combination of these truths that seems compatible with the popular notion of free will.

You then announce that “the sentence about indeterminism is false”—a point you seek to prove by recourse to an old thought experiment involving a “space pirate” and a machine that amplifies quantum indeterminacy. After which, you lovingly inscribe the following epitaph onto my gravestone:

These are not new ideas. For instance I have defended them explicitly in 1978, 1984, and 2003. I wish Harris had noticed that he contradicts them here, and I’m curious to learn how he proposes to counter my arguments.

You see, dear reader, Harris hasn’t done his homework. What a pity…. But you have simply misread me, Dan—and that entire page in your review was a useless digression. I am not saying that the mere addition of indeterminism to the clockwork makes responsibility impossible. I am saying, as you have always conceded, that seeking to ground free will in indeterminism is hopeless, because truly random processes are precisely those for which we can take no responsibility. Yes, we might still express our beliefs and opinions while being gently buffeted by random events (as you show in your thought experiment), but if our beliefs and opinions were themselves randomly generated, this would offer no basis for human responsibility (much less free will). Bored yet?

You do this again and again in your review. And when you are not misreading me, you construct bad analogies—to sunsets, color vision, automobiles—none of which accomplish their intended purpose. Some are simply faulty (that is, they don’t run through); others make my point for me, demonstrating that you have missed my point (or, somehow, your own). Consider what you say about sunsets to show that free will should not be considered an illusion:

After all, most people used to believe the sun went around the earth. They were wrong, and it took some heavy lifting to convince them of this. Maybe this factoid is a reflection on how much work science and philosophy still have to do to give everyday laypeople a sound concept of free will…. When we found out that the sun does not revolve around the earth, we didn’t then insist that there is no such thing as the sun (because what the folk mean by “sun” is “that bright thing that goes around the earth”). Now that we understand what sunsets are, we don’t call them illusions. They are real phenomena that can mislead the naive.

Of course, the sun isn’t an illusion, but geocentrism is. Our native sense that the sun revolves around a stationary Earth is simply mistaken. And any “project of sympathetic reconstruction” (your compatibilism) with regard to this illusion would be just a failure to speak plainly about the facts. I have never disputed that mental phenomena such as thoughts, efforts, volition, reasoning, and so forth exist. These are the many “suns” of the mind that any scientific theory must conserve (modulo some clarifying surprises, as has happened for the concept of “memory”). But free will is like the geocentric illusion: It is the very thing that gets obliterated once we begin speaking in detail about the origins of our thoughts and actions. You’re not just begging the question here; you’re begging it with a sloppy analogy. The same holds for your reference to color vision (which I discussed in a previous essay).

And when you are not causing problems with your own analogies, you are distorting mine. For instance, you write that you were especially dismayed by the cover of my book, which depicts a puppet theater. This cover image is justified because I argue that each of us is moved by chance and necessity, just as a marionette is set dancing on its strings. But I never suggest that this is the same as being manipulated by a human puppeteer who overrides our actual beliefs and desires and obliges us to behave in ways we do not intend. You seem eager to draw this implication, however, and so you press on with an irrelevant discussion of game theory (another area in which you allege I haven’t done my homework). Again, I am left wishing we had had a conversation that would have prevented so many pedantic digressions.

In any case, I cannot bear to write a long essay that consists in my repeatedly taking your foot out of my mouth. Instead, I will do my best to drive to the core of our disagreement.


Let’s begin by noticing a few things we actually agree about: We agree that human thought and behavior are determined by prior states of the universe and its laws—and that any contributions of indeterminism are completely irrelevant to the question of free will. We also agree that our thoughts and actions in the present influence how we think and act in the future. We both acknowledge that people can change, acquire skills, and become better equipped to get what they want out of life. We know that there is a difference between a morally healthy person and a psychopath, as well as between one who is motivated and disciplined, and thus able to accomplish his aims, and one who suffers a terminal case of apathy or weakness of will. We both understand that planning and reasoning guide human behavior in innumerable ways and that an ability to follow plans and to be responsive to reasons is part of what makes us human. We agree about so many things, in fact, that at one point you brand me “a compatibilist in everything but name.” Of course, you can’t really mean this, because you go on to write as though I were oblivious to most of what human beings manage to accomplish. At some points you say that I’ve thrown the baby out with the bath; at others you merely complain that I won’t call this baby by the right name (“free will”). Which is it?

However, it seems to me that we do diverge at two points:

1. You think that compatibilists like yourself have purified the concept of free will by “deliberately using cleaned-up, demystified substitutes for the folk concepts.” I believe that you have changed the subject and are now ignoring the very phenomenon we should be talking about—the common, felt sense that I/he/she/you could have done otherwise (generally known as “libertarian” or “contra-causal” free will), with all its moral implications. The legitimacy of your attempting to make free will “presentable” by performing conceptual surgery on it is our main point of contention. Whether or not I can convince you of the speciousness of the compatibilist project, I hope we can agree in the abstract that there is a difference between thinking more clearly about a phenomenon and (wittingly or unwittingly) thinking about something else. I intend to show that you are doing the latter.

2. You believe that determinism at the microscopic level (as in the case of Austin’s missing his putt) is irrelevant to the question of human freedom and responsibility. I agree that it is irrelevant for many things we care about (it doesn’t obviate the distinction between voluntary and involuntary behavior, for instance), but it isn’t irrelevant in the way you suggest. And accepting incompatibilism has important intellectual and moral consequences that you ignore—the most important being, in my view, that it renders hatred patently irrational (while leaving love unscathed). If one is concerned about the consequences of maintaining a philosophical position, as I know you are, helping to close the door on human hatred seems far more beneficial than merely tinkering with a popular illusion.

Changing the Subject

We both know that the libertarian notion of free will makes no scientific or logical sense; you just doubt whether it is widespread among the folk—or you hope that it isn’t, or don’t much care whether it is (in truth, you are not very clear on this point). In defense of your insouciance, you cite a paper by Nahmias et al.[1]  It probably won’t surprise you that I wasn’t as impressed by this research as you were. Nahmias and his coauthors repeatedly worry that their experimental subjects didn’t really understand the implications of determinism—and on my reading, they had good reason to be concerned. In fact, this is one of those rare papers in which the perfunctory doubts the authors raise, simply to show that they have thought of everything, turn out to be far more compelling than their own interpretations of their data. More than anything, this research suggests that people find the idea of libertarian free will so intuitively compelling that it is very difficult to get them to think clearly about determinism. Of course, I agree that what people think and feel is an empirical question. But it seems to me that we know much more about the popular view of free will than you and Nahmias let on.

It is worth noting that the most common objection I’ve heard to my position on free will is some version of the following:

If there is no free will, why write books or try to convince anyone of anything? People will believe whatever they believe. They have no choice! Your position on free will is, therefore, self-refuting. The fact that you are trying to convince people of the truth of your argument proves that you think they have the very freedom that you deny them.

Granted, some confusion between determinism and fatalism (which you and I have both warned against) is probably operating here, but comments of this kind also suggest that people think they have control over what they believe, as if the experience of being convinced by evidence and argument were voluntary. Perhaps such people also believe that they have decided to obey the law of gravity rather than fly around at their pleasure—but I doubt it. An illusion about mental freedom seems to be very widespread. My argument is that such freedom is incompatible with any form of causation (deterministic or otherwise)—which, as you know, is not a novel view. But I also argue that it is incompatible with the actual character of our subjective experience. That is why I say that the illusion of free will is itself an illusion—which is another way of saying that if one really pays attention (and this is difficult), the illusion of free will disappears.

The popular, folk psychological sense of free will is a moment-to-moment experience, not a theory about the mind. It is primarily a first-person fact, not a third-person account of how human beings function. This distinction between first-person and third-person views was what I was trying to get at in the passage that seems to have mystified you (“I have thought long and hard about this passage, and I am still not sure I understand it…”). Everyone has a third-person picture of the human mind—some of us speak of neural circuits, some speak of souls—but the philosophical problem of free will arises from the fact that most people feel that they author their own thoughts and actions. It is very difficult to say what this feeling consists of or to untangle it from working memory, volition, motor planning, and the rest of what our minds are up to—but there can be little doubt that most people feel that they are the conscious source of their own thoughts and actions. Of course, you may wish to deny this very assertion, or believe it more parsimonious to say that we just don’t know how most people feel—and that might be a point worth discussing. But rather than deny the claim, you simply lose sight of it—shifting from first-person experiences to third-person accounts of phenomena that lie outside consciousness.

It is true, of course, that most educated people believe the whole brain is involved in making them who they are (indeed, you and I both believe this). But they experience only some of what goes on inside their brains. Contrary to what you suggest, I was not advancing a Cartesian theory of consciousness (a third-person view), or any “daft doctrine of [my] own devising.” I was simply drawing a line between what people experience and what they don’t (first-person). The moment you show that a person’s thoughts and actions were determined by events that he did not and could not see, feel, or anticipate, his third-person account of himself may remain unchanged (“Of course, I know that much of what goes on in my brain is unconscious, determined by genes, and so forth. So what?”), but his first-person sense of autonomy comes under immediate pressure—provided he is paying attention. As a matter of experience (first-person), there is a difference between being conscious of something and not being conscious of it. And if what a person is unconscious of are the antecedent causes of everything he thinks and does, this fact makes a mockery of the subjective freedom he feels he has. It is not enough at that point for him to simply declare theoretically (third-person) that these antecedent causes are “also me.”

Average Joe feels that he has free will (first-person) and doesn’t like to be told that it is an illusion. I say it is: Consider all the roots of your behavior that you cannot see or feel (first-person), cannot control (first-person), and did not summon into existence (first-person). You say: Nonsense! Average Joe contains all these causes. He is his genes and neurons too (third-person). This is where you put the rabbit in the hat.

Imagine that we live in a world where more or less everyone believes in the lost kingdom of Atlantis. You and your fellow compatibilists come along and offer comfort: Atlantis is real, you say. It is, in fact, the island of Sicily. You then go on to argue that Sicily answers to most of the claims people through the ages have made about Atlantis. Of course, not every popular notion survives this translation, because some beliefs about Atlantis are quite crazy, but those that really matter—or should matter, on your account—are easily mapped onto what is, in fact, the largest island in the Mediterranean. Your work is done, and now you insist that we spend the rest of our time and energy investigating the wonders of Sicily. 

The truth, however, is that much of what causes people to be so enamored of Atlantis—in particular, the idea that an advanced civilization disappeared underwater—can’t be squared with our understanding of Sicily or any other spot on earth. So people are confused, and I believe that their confusion has very real consequences. But you rarely acknowledge the ways in which Sicily isn’t like Atlantis, and you don’t appear interested when those differences become morally salient. This is what strikes me as wrongheaded about your approach to free will.

For instance, ordinary people want to feel philosophically justified in hating evildoers and viewing them as the ultimate cause of their evil. This moral attitude is always vulnerable to our getting more information about causation—and in situations where the underlying causes of a person’s behavior become too clear, our feelings about their responsibility begin to shift. This is why I wrote that fully understanding the brain of a normal person would be analogous to finding an exculpatory tumor in it. I am not claiming that there is no difference between a normal person and one with impaired self-control. The former will be responsive to certain incentives and punishments, and the latter won’t. (And that is all the justification we need to resort to carrots and sticks or to lock incorrigibly dangerous people away forever.) But something in our moral attitude does change when we catch sight of these antecedent causes—and it should change. We should admit that a person is unlucky to be given the genes and life experience that doom him to psychopathy. Again, that doesn’t mean we can’t lock him up. But hating him is not rational, given a complete understanding of how he came to be who he is. Natural, yes; rational, no. Feeling compassion for him could be rational, however—in the same way that we could feel compassion for the six-year-old boy who was destined to become Jeffrey Dahmer. And while you scoff at “medicalizing” human evil, a complete understanding of the brain would do just that. Punishment is an extraordinarily blunt instrument. We need it because we understand so little about the brain, and our ability to influence it is limited. However, imagine that two hundred years in the future we really know what makes a person tick; Procrustean punishments won’t make practical sense, and they won’t make moral sense either. But you seem committed to the idea that certain people might actually deserve to be punished—if not precisely for the reasons that common folk imagine, nevertheless for reasons that have little or nothing to do with the good consequences that such punishments might have, all things considered. In other words, your compatibilism seems an attempt to justify the conventional notion of blame, which my view denies. This is a difference worth focusing on.

Let’s examine Austin’s example of his missed putt:

Consider the case where I miss a very short putt and kick myself because I could have holed it. It is not that I should have holed it if I had tried: I did try, and missed. It is not that I should have holed it if conditions had been different: that might of course be so, but I am talking about conditions as they precisely were, and asserting that I could have holed it. There is the rub. Nor does ‘I can hole it this time’ mean that I shall hole it this time if I try or if anything else; for I may try and miss, and yet not be convinced that I could not have done it; indeed, further experiments may confirm my belief that I could have done it that time, although I did not. (J.L. Austin. 1961. “Ifs and Cans,” in Austin, Philosophical Papers, edited by J. Urmson and G. Warnock. Oxford, Clarendon Press.)

This is a good place to start, because you say the following in your review:

I consider Austin’s mistake to be the central core of the ongoing confusion about free will; if you look at the large and intricate philosophical literature about incompatibilism, you will see that just about everyone assumes, without argument, that it is not a mistake.

I am happy to take the bait. I see no problem with using Austin’s example to support incompatibilism. I should emphasize, however, that I am discussing only the implications of Austin’s point for an account of free will, not how it functions in his original essay (which, as you know, was an analysis of the relationship between the terms “if” and “can,” not a sustained argument against free will).[2]

Let’s make sure you and I are standing on the same green: We agree that a human being, whatever his talents, training, and aspirations, will think, intend, and behave exactly as he does given the totality of conditions in the moment. That is, whatever his ability as a golfer, Austin would miss that same putt a trillion times in a row—provided that every atom and charge in the universe was exactly as it had been the first time he missed it. You think this fact (we can call it determinism, as you do, but it includes the contributions of indeterminism as well, provided they remain the same[3]) says nothing about free will. You think the fact that Austin could make nearly identical putts in other moments—with his brain in a slightly different state, the green a touch slower, and so forth—is all that matters. I agree that it is what matters when assessing his abilities as a golfer: Here, we don’t care about the single NMDA receptor that screwed up his swing on one particular putt; we care about the statistics of his play, round after round. But to speak clearly and honestly (that is, scientifically) about the actual causes of what happens in the world in each moment, we must focus on the particular.

What are we really saying when we claim that Austin could have made that putt (the one he missed)? As you point out, we aren’t actually referencing that putt at all. We are saying that Austin has made similar putts in the past and we can count on him to do so in the future—provided that he tries, doesn’t suffer some neurological injury, and so forth. However, we are also saying that Austin would have made this putt had something not gotten in his way. He had the general ability, after all, so something went wrong.

Then why did Austin miss his putt? Because some condition necessary for his making it was absent. What if that condition was sufficient effort, of the sort that he was generally capable of making? Why didn’t he make that effort? The answer is the same: Because some condition necessary for his making it was absent. From a scientific perspective, his failure to try is just another missed putt. Austin tried precisely as hard as he did. Next time he might try harder. But this time—with the universe and his brain exactly as they were—he couldn’t have tried in any other way.

To say that Austin really should have made that putt or tried harder is just a way of admonishing him to put forth greater effort in the future. We are not offering an account of what actually happened (his failure to sink his putt or his failure to try). You and I agree that such admonishments have effects and that these effects are perfectly in harmony with the truth of determinism. There is, in fact, nothing about incompatibilism that prevents us from urging one another (and ourselves) to do things differently in the future, or from recognizing that such exhortations often work. The things we say to one another (and to ourselves) are simply part of the chain of causes that determine how we think and behave.

But can we blame Austin for missing his putt? No. Can we blame him for not trying hard enough? Again, the answer is no—unless blaming him were just a way of admonishing him to try harder in the future. For us to consider him truly responsible for missing the putt or for failing to try, we would need to know that he could have acted other than he did. Yes, there are two readings of “could”—and you find only one of them interesting. But they are both interesting, and the one you ignore is morally consequential. One reading refers to a person’s (or a car’s, in your example) general capacities. Could Austin have sunk his putt, as a general matter, in similar situations? Yes. Could my car go 80 miles per hour, though I happen to be driving at 40? Yes. The other reading is one you consider to be a red herring. Could Austin have sunk that very putt, the one he missed? No. Could he have tried harder? No. His failure on both counts was determined by the state of the universe (especially his nervous system). Of course, it isn’t irrational to treat him as someone who has the general ability to make putts of that sort, and to urge him to try harder in the future—and it would be irrational to admonish a person who lacked such ability. You are right to believe that this distinction has important moral implications: Do we demand that mosquitoes and sharks behave better than they do? No. We simply take steps to protect ourselves from them. The same futility prevails with certain people—psychopaths and others whom we might deem morally insane. It makes sense to treat people who have the general capacity to behave well but occasionally lapse differently from those who have no such capacity and on whom any admonishment would be wasted. You are right to think that these distinctions do not depend on “absolute free will.” But this doesn’t mean nothing changes once we realize that a person could never have made the putt he just missed, or tried harder than he did, or refrained from killing his neighbor with a hammer.

Holding people responsible for their past actions makes no sense apart from the effects that doing so will have on them and the rest of society in the future (e.g. deterrence, rehabilitation, keeping dangerous people off our streets). The notion of moral responsibility, therefore, is forward-looking. But it is also paradoxical. People who have the most ability (self-control, opportunity, knowledge, etc.) would seem to be the most blameworthy when they fail or misbehave. For instance, when Tiger Woods misses a three-foot putt, there is a much greater temptation to say that he really should have made it than there is in the case of an average golfer. But Woods’s failure is actually more anomalous. Something must have gone wrong if a person of his ability missed so easy a putt. And he wouldn’t stand to benefit (much) from being admonished to try harder in the future. So in some ways, holding a person responsible for his failures seems to make even less sense the more worthy of responsibility he becomes in the conventional sense.

We agree that given past states of the universe and its laws, we can only do what we in fact do, and not do otherwise. You don’t think this truth has many psychological or moral consequences. In fact, you describe the lawful propagation of certain types of events as a form of “freedom.” But consider the emergence of this freedom in any specific human being: It is fully determined by genes and the environment (add as much randomness as you like). Imagine the first moment it comes online—in, say, the brain of a four-year-old child. Consider this first, “free” increment of executive control to emerge from the clockwork. It will emerge precisely to the degree that it does, and when, according to causes that have nothing to do with this person’s freedom. And it will perpetuate its effects on future states of his nervous system in total conformity to natural laws. In each instant, Austin will make his putt or miss it; and he will try his best or not. Yes, he is “free” to do whatever it is he does based on past states of the universe. But the same could be said of a chimp or a computer—or, indeed, a line of dominoes. Perhaps such mechanical equivalences don’t bother you, but they might come as a shock to those who think that you have rescued their felt sense of autonomy from the gears of determinism.

In your review, you called my book a “political tract.” The irony is that your argument against incompatibilism seems almost entirely political. At times you write as though nothing is at stake apart from the future of the terms free will and compatibilism. More generally, however, you seem to think that the consequences of taking incompatibilism seriously will be pernicious:

If nobody is responsible, not really, then not only should the prisons be emptied, but no contract is valid, mortgages should be abolished, and we can never hold anybody to account for anything they do.  Preserving “law and order” without a concept of real responsibility is a daunting task.

These concerns, while not irrational, have nothing to do with the philosophical or scientific merits of the case. They also arise out of a failure to understand the practical consequences of my view. I am no more inclined to release dangerous criminals back onto our streets than you are.

In my book, I argue that an honest look at the causal underpinnings of human behavior, as well as at one’s own moment-to-moment experience, reveals free will to be an illusion. (I would say the same about the conventional sense of “self,” but that requires more discussion, and it is the topic of my next book.) I also claim that this fact has consequences—good ones, for the most part—and that is another reason it is worth exploring. But I have not argued for my position primarily out of concern for the consequences of accepting it. And I believe you have.

Of course, I can’t quite blame you for missing that putt, Dan. But I can admonish you to be more careful in the future.


NOTES

  1. E. Nahmias, S. Morris, T. Nadelhoffer & J. Turner. 2005. “Surveying Freedom: Folk Intuitions about Free Will and Moral Responsibility,” Philosophical Psychology, 18, pp. 561–584.
  2. Reading the rest of Austin’s “notorious” footnote, I’m not sure he made the mistake you attribute to him. The very next sentence following your partial quotation reads, “But if I tried my hardest, say, and missed, surely there must have been something that caused me to fail, that made me unable to succeed? So that I could not have holed it.” To my eye, this closes the door on his alleged confusion about what subsequent experiments would have shown.
  3. You consistently label me a “hard determinist” which is a little misleading. The truth is that I am agnostic as to whether determinism is strictly true (though it must be approximately true, as far as human beings are concerned). Insofar as it is, free will is impossible. But indeterminism offers no relief. My actual view is that free will is conceptually incoherent and both subjectively and objectively nonexistent. Causation, whether deterministic or random, offers no basis for free will.

 

Author: "--" Tags: "Free Will, Consciousness, Neuroscience, ..."
Send by mail Print  Save  Delicious 
Date: Wednesday, 29 Jan 2014 22:39


My next book, Waking Up: A Guide to Spirituality Without Religion, will be published by Simon & Schuster in September. This is the third cover that David Drummond has created for me (along with those for Lying and Free Will). Great job, David!

Author: "--" Tags: "Announcements, Book News, Publishing,"
Send by mail Print  Save  Delicious 
Date: Sunday, 26 Jan 2014 21:58

(Photo via Steven Kersting)

Daniel Dennett and I agree about many things, but we do not agree about free will. Dan has been threatening to set me straight on this topic for several years now, and I have always encouraged him to do so, preferably in public and in writing. He has finally produced a review of my book Free Will that is nearly as long as the book itself. I am grateful to Dan for taking the time to engage me this fully, and I will respond in the coming weeks.—SH

Daniel C. Dennett is the Austin B. Fletcher Professor of Philosophy and Co-Director of the Center for Cognitive Studies at Tufts University. He is the author of Breaking the Spell, Freedom Evolves, Darwin’s Dangerous Idea, Consciousness Explained, and many other books. He has received two Guggenheim Fellowships, a Fulbright Fellowship, and a Fellowship at the Center for Advanced Studies in Behavioral Science. He was elected to the American Academy of Arts and Sciences in 1987. His latest book, written with Linda LaScola, is Caught in the Pulpit: Leaving Belief Behind.

This essay was first published at Naturalism.org and has been crossposted here with permission.

*  *  *



Sam Harris’s Free Will (2012)  is a remarkable little book, engagingly written and jargon-free, appealing to reason, not authority, and written with passion and moral seriousness. This is not an ivory tower technical inquiry; it is in effect a political tract, designed to persuade us all to abandon what he considers to be a morally pernicious idea: the idea of free will.  If you are one of the many who have been brainwashed into believing that you have—or rather, are—an (immortal, immaterial) soul who makes all your decisions independently of the causes impinging on your material body and especially your brain, then this is the book for you. Or, if you have dismissed dualism but think that what you are is a conscious (but material) ego, a witness that inhabits a nook in your brain and chooses, independently of external causation, all your voluntary acts, again, this book is for you. It is a fine “antidote,” as Paul Bloom says, to this incoherent and socially malignant illusion. The incoherence of the illusion has been demonstrated time and again in rather technical work by philosophers (in spite of still finding supporters in the profession), but Harris does a fine job of making this apparently unpalatable fact accessible to lay people. Its malignance is due to its fostering the idea of Absolute Responsibility, with its attendant implications of what we might call Guilt-in-the-eyes-of-God for the unfortunate sinners amongst us and, for the fortunate, the arrogant and self-deluded idea of Ultimate Authorship of the good we do. We take too much blame, and too much credit, Harris argues. We, and the rest of the world, would be a lot better off if we took ourselves—our selves—less seriously. We don’t have the kind of free will that would ground such Absolute Responsibility for either the harm or the good we cause in our lives.

All this is laudable and right, and vividly presented, and Harris does a particularly good job getting readers to introspect on their own decision-making and notice that it just does not conform to the fantasies of this all too traditional understanding of how we think and act.  But some of us have long recognized these points and gone on to adopt more reasonable, more empirically sound, models of decision and thought, and we think we can articulate and defend a more sophisticated model of free will that is not only consistent with neuroscience and introspection but also grounds a (modified, toned-down, non-Absolute) variety of responsibility that justifies both praise and blame, reward and punishment. We don’t think this variety of free will is an illusion at all, but rather a robust feature of our psychology and a reliable part of the foundations of morality, law and society. Harris, we think, is throwing out the baby with the bathwater.

He is not alone among scientists in coming to the conclusion that the ancient idea of free will is not just confused but also a major obstacle to social reform. His brief essay is, however, the most sustained attempt to develop this theme, which can also be found in remarks and essays by such heavyweight scientists as the neuroscientists Wolf Singer and Chris Frith, the psychologists Steven Pinker and Paul Bloom, the physicists Stephen Hawking and Albert Einstein, and the evolutionary biologists Jerry Coyne and (when he’s not thinking carefully) Richard Dawkins.

The book is, thus, valuable as a compact and compelling expression of an opinion widely shared by eminent scientists these days. It is also valuable, as I will show, as a veritable museum of mistakes, none of them new and all of them seductive—alluring enough to lull the critical faculties of this host of brilliant thinkers who do not make a profession of thinking about free will.  And, to be sure, these mistakes have also been made, sometimes for centuries, by philosophers themselves.  But I think we have made some progress in philosophy of late, and Harris and others need to do their homework if they want to engage with the best thought on the topic.

I am not being disingenuous when I say this museum of mistakes is valuable; I am grateful to Harris for saying, so boldly and clearly, what less outgoing scientists are thinking but keeping to themselves. I have always suspected that many who hold this hard determinist view are making these mistakes, but we mustn’t put words in people’s mouths, and now Harris has done us a great service by articulating the points explicitly, and the chorus of approval he has received from scientists goes a long way to confirming that they have been making these mistakes all along.  Wolfgang Pauli’s famous dismissal of another physicist’s work as “not even wrong” reminds us of the value of crystallizing an ambient cloud of hunches into something that can be shown to be wrong.  Correcting widespread misunderstanding is usually the work of many hands, and Harris has made a significant contribution.

The first parting of opinion on free will is between compatibilists and incompatibilists. The latter say (with “common sense” and a tradition going back more than two millennia) that free will is incompatible with determinism, the scientific thesis that there are causes for everything that happens. Incompatibilists hold that unless there are “random swerves”[1] that disrupt the iron chains of physical causation, none of our decisions or choices can be truly free. Being caused means not being free—what could be more obvious?  The compatibilists deny this; they have argued, for centuries if not millennia, that once you understand what free will really is (and must be, to sustain our sense of moral responsibility), you will see that free will can live comfortably with determinism—if determinism is what science eventually settles on.

Incompatibilists thus tend to pin their hopes on indeterminism, and hence were much cheered by the emergence of quantum indeterminism in 20th century physics. Perhaps the brain can avail itself of undetermined quantum swerves at the sub-atomic level, and thus escape the shackles of physical law!  Or perhaps there is some other way our choices could be truly undetermined. Some have gone so far as to posit an otherwise unknown (and almost entirely unanalyzable) phenomenon called agent causation, in which free choices are caused somehow by an agent, but not by any event in the agent’s history. One exponent of this position, Roderick Chisholm, candidly acknowledged that on this view every free choice is “a little miracle”—which makes it clear enough why this is a school of thought endorsed primarily by deeply religious philosophers and shunned by almost everyone else.  Incompatibilists who think we have free will, and therefore determinism must be false, are known as libertarians (which has nothing to do with the political view of the same name). Incompatibilists who think that all human choices are determined by prior events in their brains (which were themselves no doubt determined by chains of events arising out of the distant past) conclude from this that we can’t have free will, and, hence, are not responsible for our actions.

This concern for varieties of indeterminism is misplaced, argue the compatibilists: free will is a phenomenon that requires neither determinism nor indeterminism; the solution to the problem of free will lies in realizing this, not banking on the quantum physicists to come through with the right physics—or a miracle.  Compatibilism may seem incredible on its face, or desperately contrived, some kind of a trick with words, but not to philosophers. Compatibilism is the reigning view among philosophers (just over 59%, according to the 2009 PhilPapers survey), with libertarians coming second with 13% and hard determinists only 12%.  It is striking, then, that all the scientists just cited have landed on the position rejected by almost nine out of ten philosophers, but not so surprising when one considers that these scientists hardly ever consider the compatibilist view or the reasons in its favor.

Harris has considered compatibilism, at least cursorily, and his opinion of it is breathtakingly dismissive: After acknowledging that it is the prevailing view among philosophers (including his friend Daniel Dennett), he asserts that “More than in any other area of academic philosophy, the result resembles theology.” (p. 18)  This is a low blow, and worse follows: “From both a moral and a scientific perspective, this seems deliberately obtuse.” (p. 18)  I would hope that Harris would pause at this point to wonder—just wonder—whether maybe his philosophical colleagues had seen some points that had somehow escaped him in his canvassing of compatibilism.  As I tell my undergraduate students, whenever they encounter in their required reading a claim or argument that seems just plain stupid, they should probably double check to make sure they are not misreading the “preposterous” passage in question. It is possible that they have uncovered a howling error that has somehow gone unnoticed by the profession for generations, but not very likely.  In this instance, the chances that Harris has underestimated and misinterpreted compatibilism seem particularly good, since the points he defends later in the book agree right down the line with compatibilism; he himself is a compatibilist in everything but name!

Seriously, his main objection to compatibilism, issued several times, is that what compatibilists mean by “free will” is not what everyday folk mean by “free will.” Everyday folk mean something demonstrably preposterous, but Harris sees the effort by compatibilists to make the folks’ hopeless concept of free will presentable as somehow disingenuous, unmotivated spin-doctoring, not the project of sympathetic reconstruction the compatibilists take themselves to be engaged in. So it all comes down to who gets to decide how to use the term “free will.” Harris is a compatibilist about moral responsibility and the importance of the distinction between voluntary and involuntary actions, but he is not a compatibilist about free will since he thinks “free will” has to be given the incoherent sense that emerges from uncritical reflection by everyday folk. He sees quite well that compatibilism is “the only philosophically respectable way to endorse free will” (p. 16) but adds:

However, the ‘free will’ that compatibilists defend is not the free will that most people feel they have. (p. 16)

First of all, he doesn’t know this. This is a guess, and suitably expressed questionnaires might well prove him wrong. That is an empirical question, and a thoughtful pioneering attempt to answer it suggests that Harris’s guess is simply mistaken.[2] The newly emerging field of experimental philosophy (or “X-phi”) has a rather unprepossessing track record to date, but these are early days, and some of the work has yielded interesting results that certainly defy complacent assumptions common among philosophers.  The study by Nahmias et al. 2005 found substantial majorities (between 60 and 80%) in agreement with propositions that are compatibilist in outlook, not incompatibilist.

Harris’s claim that the folk are mostly incompatibilists is thus dubious on its face, and even if it is true, maybe all this shows is that most people are suffering from a sort of illusion that could be replaced by wisdom. After all, most people used to believe the sun went around the earth. They were wrong, and it took some heavy lifting to convince them of this.  Maybe this factoid is a reflection on how much work science and philosophy still have to do to give everyday laypeople a sound concept of free will. We’ve not yet succeeded in getting them to see the difference between weight and mass, and Einsteinian relativity still eludes most people.  When we found out that the sun does not revolve around the earth, we didn’t then insist that there is no such thing as the sun (because what the folk mean by “sun” is “that bright thing that goes around the earth”).  Now that we understand what sunsets are, we don’t call them illusions. They are real phenomena that can mislead the naive.

To see the context in which Harris’s criticism plays out, consider a parallel. The folk concept of mind is a shambles, for sure: dualistic,  scientifically misinformed and replete with miraculous features—even before we get to ESP and psychokinesis and poltergeists.  So when social scientists talk about beliefs or desires and cognitive neuroscientists talk about attention and memory they are deliberately using cleaned-up, demystified substitutes for the folk concepts. Is this theology, is this deliberately obtuse, countenancing the use of concepts with such disreputable ancestors?  I think not, but the case can be made (there are mad dog reductionist neuroscientists and philosophers who insist that minds are illusions, pains are illusions, dreams are illusions, ideas are illusions—all there is is just neurons and glia and the like). The same could be said about color, for example. What everyday folk think colors are—if you pushed them beyond their everyday contexts in the paint store and picking out their clothes—is hugely deluded; that doesn’t mean that colors are an illusion. They are real in spite of the fact that, for instance, atoms aren’t colored.

Here are some more instances of Harris’s move:

We do not have the freedom we think we have.  (p. 5)

Who’s we?  Maybe many people, maybe most, think that they have a kind of freedom that they don’t and can’t have. But that settles nothing.  There may be other, better kinds of freedom that people also think they have, and that are worth wanting (Dennett, 1984).

We do not know what we intend to do until the intention itself arises.  [True, but so what?]  To understand this is to realize that we are not the authors of our thoughts and actions in the way that people generally suppose [my italics]. (p. 13)

Again, so what? Maybe we are authors of our thoughts and actions in a slightly different way.  Harris doesn’t even consider that possibility (since that would require taking compatibilist “theology” seriously).

If determinism is true, the future is set—and this includes all our future states of mind and our subsequent behavior.  And to the extent that the law of cause and effect is subject to indeterminism—quantum or otherwise—we can take no credit for what happens.  There is no combination of these truths that seem compatible with the popular notion of free will [my italics].  (p. 30)

Again, the popular notion of free will is a mess; we knew that long before Harris sat down to write his book.  He needs to go after the attempted improvements,  and it cannot be part of his criticism that they are not the popular notion.

There is also another problem with this paragraph: the sentence about indeterminism is false:

And to the extent that the law of cause and effect is subject to indeterminism—quantum or otherwise—we can take no credit for what happens.

Here is a counterexample, contrived, but highlighting the way indeterminism could infect our actions and still leave us responsible (a variant of an old—1978—counterexample of mine):

You must correctly answer three questions to save the world from a space pirate, who provides you with a special answering gadget. It has two buttons marked YES and NO and two foot pedals marked YES and NO.  A sign on the gadget lights up after every question: “Use the buttons” or “Use the pedals.” You are asked “Is Chicago the capital of Illinois?”, the sign says “Use the buttons” and you press the No button with your finger. Then you are asked “Are dugongs mammals?”, the sign says “Use the buttons” and you press the Yes button with your finger. Finally you are asked “Are proteins made of amino acids?” and the sign says “Use the pedals” so you reach out with your foot and press the Yes pedal. A roar of gratitude goes up from the crowd. You’ve saved the world, thanks to your knowledge and responsible action! But all three actions were unpredictable by Laplace’s demon because whether the sign said “Use the buttons” or “Use the pedals” was caused by a quantum random event.  In a less obvious way, random perturbations could infect (without negating) your every deed. The tone of your voice when you give your evidence could be tweaked up or down, the pressure of your trigger finger as you pull the trigger could be tweaked greater or lesser, and so forth, without robbing you of responsibility. Brains are, in all likelihood, designed by natural selection to absorb random fluctuations without being seriously diverted by them—just as computers are. But that means that randomness need not destroy the rationality, the well-governedness, the sense-making integrity of your control system. Your brain may even exploit randomness in a variety of ways to enhance its heuristic search for good solutions to problems.
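(To make the structure of the gadget concrete, here is a toy simulation in Python—a sketch of my own, not anything from Dennett’s essay; the question list and function names are invented for the illustration. The quantum-random event selects only the output channel; the content of each answer is fixed by the agent’s knowledge, so the world is saved on every run.)

    import random

    # The three questions and their correct answers (Springfield, not
    # Chicago, is the capital of Illinois).
    QUESTIONS = [
        ("Is Chicago the capital of Illinois?", False),
        ("Are dugongs mammals?", True),
        ("Are proteins made of amino acids?", True),
    ]

    def agents_answer(question):
        """The agent's deterministic knowledge of the facts."""
        return dict(QUESTIONS)[question]

    def run_trial():
        """One encounter with the pirate; True means the world is saved."""
        for question, correct in QUESTIONS:
            channel = random.choice(["buttons", "pedals"])  # the quantum event
            reply = agents_answer(question)  # unaffected by the channel
            if reply != correct:
                return False
        return True

    # However the indeterministic coin falls, the outcome never varies:
    assert all(run_trial() for _ in range(1000))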

These are not new ideas. For instance I have defended them explicitly in 1978, 1984, and 2003.  I wish Harris had noticed that he contradicts them here, and I’m curious to learn how he proposes to counter my arguments.

Another mistake he falls for—in very good company—is the mistake the great J. L. Austin makes in his notorious footnote about his missed putt. First Austin’s version, and my analysis of the error, and then Harris’s version.

Consider the case where I miss a very short putt and kick myself because I could have holed it.  It is not that I should have holed it if I had tried: I did try, and missed. It is not that I should have holed it if conditions had been different: that might of course be so, but I am talking about conditions as they precisely were [my italics], and asserting that I could have holed it. There is the rub. Nor does ‘I can hole it this time’ mean that I shall hole it this time if I try or if anything else; for I may try and miss, and yet not be convinced that I could not have done it; indeed, further experiments may confirm my belief that I could have done it that time [my italics], although I did not. (Austin 1961: 166. [“Ifs and Cans,” in Austin, Philosophical Papers, edited by J. Urmson and G. Warnock, Oxford, Clarendon Press.])

Austin claims to be talking about conditions as they precisely were, but if so, then further experiments could not confirm his belief. Presumably he has in mind something like this:  he could line up ten “identical” putts on the same green and, say, sink nine out of ten. This would show, would it not, that he could have made that putt?  Yes, to the satisfaction of almost everybody, but No, if he means under conditions “as they precisely were,” for conditions were subtly different in every subsequent putt—the sun a little lower in the sky, the green a little drier or moister, the temperature or wind direction ever so slightly different, Austin himself older and maybe wiser, or maybe more tired, or maybe more relaxed. This variation is not a bug to be eliminated from such experiments, but a feature without which experiments could not show that Austin “could have done otherwise,” and this is precisely the elbow room we need to see that “could have done otherwise” is perfectly compatible with determinism, because it never means, in real life, what philosophers have imagined it means: replay exactly the same “tape” and get a different result.  Not only can such an experiment never be done; if it could, it wouldn’t show what needed showing: something about Austin’s ability as a golfer, which, like all abilities, needs to be demonstrated to be robust under variation.
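(The point lends itself to a toy model in Python—my own sketch, with arbitrary assumptions about skill and noise; nothing here comes from the original exchange. Replaying the exact same state reproduces the exact same result every time, whereas the golfer’s ability shows up only statistically, across slightly varied conditions.)

    import random

    def putt(skill, rng):
        """The putt drops when skill exceeds this moment's perturbation."""
        return skill > rng.random()

    # 'Conditions as they precisely were': re-seeding the generator replays
    # the identical moment -- and the identical result -- every single time.
    replays = [putt(0.9, random.Random(42)) for _ in range(1000)]
    assert len(set(replays)) == 1  # the tape never plays differently

    # An *ability* shows up only under variation: sample many similar but
    # not identical putts (sun lower, green drier...) and count the makes.
    made = sum(putt(0.9, random.Random(seed)) for seed in range(1000))
    print("Sank", made, "of 1000 similar putts")  # roughly 900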

Here is Harris’s version of the same mistake:

To say that they were free not to rape and murder is to say that they could have resisted the impulse to do so (or could have avoided feeling such an impulse altogether)—with the universe, including their brains, in precisely the same state it was in at the moment they committed their crimes.  (p. 17)


Just not true. If we are interested in whether somebody has free will, it is some kind of ability that we want to assess, and you can’t assess any ability by “replaying the tape.” (See my extended argument to this effect in Freedom Evolves, 2003.)  The point was made long ago by A. M. Honoré in his classic paper “Can and Can’t,” in Mind, 1964, and more recently deeply grounded in Judea Pearl’s Causality: Models, Reasoning and Inference (Cambridge University Press, 2000).  This is as true of the abilities of automobiles as of people. Suppose I am driving along at 60 MPH and am asked if my car can also go 80 MPH. Yes, I reply, but not in precisely the same conditions; I have to press harder on the accelerator.  In fact, I add, it can also go 40 MPH, but not with conditions precisely as they are.  Replay the tape till eternity, and it will never go 40 MPH in just these conditions.  So if you want to know whether some rapist/murderer was “free not to rape and murder,” don’t distract yourself with fantasies about determinism and rewinding the tape; rely on the sorts of observations and tests that everyday folk use to confirm and disconfirm their verdicts about who could have done otherwise and who couldn’t.[3]

One of the effects of Harris’s misconstrual of compatibilism is that when he turns to the task of avoiding the dire conclusions of the hard determinists, he underestimates his task.[4] At the end of the book, he gets briefly concessive, throwing a few scraps to the opposition:

And it is wise to hold people responsible for their actions when doing so influences their behavior and brings benefit to society.  But this does not mean that we must be taken in by the illusion of free will. We need only acknowledge that efforts matter and that people can change.  We do not change ourselves, precisely—because we have only ourselves with which to do the changing—but we continually influence, and are influenced by, the world around us and the world within us.  It may seem paradoxical to hold people responsible for what happens in their corner of the universe, but once we break the spell of free will, we can do this precisely to the degree that it is useful. Where people can change, we can demand that they do so. Where change is impossible, or unresponsive to demands, we can chart some other course. (p. 63)

Harris should take more seriously the various tensions he sets up in this passage.  It is wise to hold people responsible, he says, even though they are not responsible, not really. But we don’t hold everybody responsible; as he notes, we excuse those who are unresponsive to demands, or in whom change is impossible. That’s an important difference, and it is based on the different abilities or competences that people have.  Some people (are determined to) have the abilities that justify our holding them responsible, and some people (are determined to) lack those abilities. But determinism doesn’t do any work here; in particular it doesn’t disqualify those we hold responsible from occupying that role.  In other words, real responsibility, the kind the everyday folk think they have (if Harris is right), is strictly impossible; but when those same folk wisely and justifiably hold somebody responsible, that isn’t real responsibility![5]

And what is Harris saying about whether we can change ourselves?  He says we can’t change ourselves “precisely” but we can influence (and hence change) others, and they can change us. But then why can’t we change ourselves by getting help from others to change us? Why, for that matter, can’t we do to ourselves what we do to those others, reminding ourselves, admonishing ourselves, reasoning with ourselves?  It does work, not always but often enough to make it worth trying.  And notice: if we do things to influence and change others, and thereby turn them into something bad—encouraging their racist or violent tendencies, for instance, or inciting them to commit embezzlement—we may be held responsible for this socially malign action. (Think of the drunk driving laws that now hold the bartender or the party host partly responsible for the damage done.) But then by the same reasoning we can justifiably be held responsible for influencing ourselves, for good or ill. We can take some credit for any improvements we achieve in others—or ourselves—and we can share the blame for any damage we do to others or ourselves.

There are complications with all this, but Harris doesn’t even scratch the surface of these issues. For instance, our capacities to influence ourselves are themselves only partly the result of earlier efforts at self-improvement in which we ourselves played a major role. It takes a village to raise a child, as Hillary Clinton has observed. In the end, if we trace back far enough to our infancy or beyond, we arrive at conditions that we were just lucky (or unlucky) to be born with. This undeniable fact is not the disqualifier of responsibility that Harris and others assume. It disqualifies us for “Ultimate” responsibility, which would require us to be—like God!—causa sui, the original cause of ourselves, as Galen Strawson has observed, but this is nonsense. Our lack of Ultimate responsibility is not a moral blemish; if the discovery of this lack motivates some to reform our policies of reward and punishment, that is a good result, but it is hardly compelled by reason.

This emerging idea, that we can justifiably be held to be the authors (if not the Authors) of not only our deeds but the character from which our deeds flow, undercuts much of the rhetoric in Harris’s book. Harris is the author of his book; he is responsible for both its virtues, for which he deserves thanks, and its vices, for which he may justifiably be criticized. But then why can we not generalize this point to Harris himself, and rightly hold him at least partly responsible for his character, since it too is a product—with help from others, of course—of his earlier efforts? Suppose he replied that he is not really the author of Free Will.  At what point do we get to use Harris’s criticism against his own claims? Harris might claim that he is not really the author of his own book, not really responsible, but that isn’t what the folk would say. The folk believe in a kind of responsibility that is exemplified by Harris’s authorship.  Harris would have distorted the folk notion of responsibility as much as, if not more than, compatibilists have distorted the folk notion of free will.

Harris opens his book with an example of murderous psychopaths, Hayes and Komisarjevsky, who commit unspeakable atrocities. One has shown remorse, the other reports having been abused as a child.

Whatever their conscious motives, these men cannot know why they are as they are.  Nor can we account for why we are not like them.

Really?  I think we can. The sentence is ambiguous, in fact. Harris knows full well that we can provide detailed and empirically supported accounts of why normal, law-abiding people who would never commit those atrocities emerge by the millions from all sorts of backgrounds, and why these psychopaths are different. But he has a different question in mind: why we—you and I—are in the fortunate, normal class instead of having been doomed to psychopathy. A different issue, but also an irrelevant, merely metaphysical issue.  (Cf. “Why was I born in the 20th century, and not during the Renaissance?  We’ll never know!”)

The rhetorical move here is well-known, but indefensible. If you’re going to raise these horrific cases, it behooves you to consider that they might be cases of pathology, as measured against (moral) health.  Lumping the morally competent with the morally incompetent and then saying “there really is no difference between them, is there?” is a move that needs support, not something that can be done by assumption or innuendo.

I cannot take credit for the fact that I don’t have the soul of a psychopath. (p. 4)

True—and false. Harris can’t take credit for the luck of his birth, his having had a normal moral education—that’s just luck—but those born thus lucky are informed that they have a duty or obligation to preserve their competence, and grow it, and educate themselves, and Harris has responded admirably to those incentives. He can take credit, not Ultimate credit, whatever that might be, but partial credit, for husbanding the resources he was endowed with.  As he says, he is just lucky not to have been born with Komisarjevsky’s genes and life experiences. If he had been, he’d have been Komisarjevsky!

A similar difficulty infects his claim that there is no difference between an act caused by a brain tumor and an act caused by a belief (which is just another brain state, after all).

But a neurological disorder appears to be just a special case of physical events giving rise to thoughts and actions.  Understanding the neurophysiology of the brain, therefore, would seem to be as exculpatory as finding a tumor in it. (p. 5)

Notice the use of “appears” and “seem” here.  Replace them both with “is” and ask if he’s made the case.  (In addition to the “surely”-alarm I recommend all readers install in their brains (2013), a “seems”-alarm will pick up lots of these slippery places where philosophers defer argument where argument is called for.)

Even the simplest and most straightforward of Harris’s examples wilt under careful scrutiny:

Did I consciously choose coffee over tea?  No. The choice was made for me by events in my brain that I, as the conscious witness of my thoughts and actions, could not inspect or influence.  (p. 7)

Not so.  He can influence those internal, unconscious actions—by reminding himself, etc. He just can’t influence them at the moment they are having their effect on his choice.  (He also can’t influence the unconscious machinery that determines whether he returns a tennis serve with a lob or a hard backhand once the serve is on its way, but that doesn’t mean his tennis strokes are involuntary or outside his—indirect—control.  At one point he says “If you don’t know what your soul is going to do, you are not in control.” (p. 12) Really?  When you drive a car, are you not in control? You know “your soul” is going to do the right thing, whatever in the instant it turns out to be, and that suffices to demonstrate to you, and the rest of us, that you are in control. Control doesn’t get any more real than that.)

Harris ignores the reflexive, repetitive nature of thinking. My choice at time t can influence my choice at time t’, which can influence my choice at time t”.  How?  My choice at t can have among its effects the biasing of settings in my brain (which I cannot directly inspect) that determine (I use the term deliberately) my choice at t’. So I can influence my choice at t’: I influenced it at time t (without “inspecting” it).  Like many before him, Harris shrinks the me to a dimensionless point, “the witness” who is stuck in the Cartesian Theater awaiting the decisions made elsewhere. That is simply a bad theory of consciousness.

I, as the conscious witness of my experience, no more initiate events in my prefrontal cortex than I cause my heart to beat. (p. 9)

If this isn’t pure Cartesianism, I don’t know what it is.  His prefrontal cortex is part of the I in question.  Notice that if we replace the “conscious witness” with “my brain” we turn an apparent truth into an obvious falsehood: “My brain can no more initiate events in my prefrontal cortex than it can cause my heart to beat.”

There are more passages that exhibit this curious tactic of heaping scorn on daft doctrines of his own devising while ignoring reasonable compatibilist versions of the same ideas, but I’ve given enough illustrations, and the rest are readily identifiable once you see the pattern. Harris clearly thinks compatibilism is not worth his attention (so “deliberately obtuse” is it), but after such an indictment, he had better come up with some impressive criticisms. His main case against compatibilism—aside from the points above that I have already criticized—consists of three rhetorical questions lined up in a row (pp. 18-19).  Each one collapses on closer inspection. As I point out in Intuition Pumps and Other Tools for Thinking, rhetorical questions, which are stand-ins for reductio ad absurdum arguments so obvious that they need not be spelled out, should always be scrutinized as likely weak spots in arguments. I offer Harris’s trio as exhibits A, B, and C:

(A) You want to finish your work, but you are also inclined to stop working so that you can play with your kids.  You aspire to quit smoking, but you also crave another cigarette. You are struggling to save money, but you are also tempted to buy a new computer.  Where is the freedom when one of these opposing desires inexplicably [my italics] triumphs over its rival?

But no compatibilist has claimed (so far as I know) that our free will is absolute and trouble-free. On the contrary, there is a sizable and fascinating literature on the importance of the various well-known ways in which we respond to such looming cases of “weakness of will,” from which we all suffer. When one desire triumphs, this is not usually utterly inexplicable, but rather the confirmable result of efforts of self-manipulation and self-education, based on empirical self-exploration.  We learn something about what makes us tick—not usually in neuroscientific terms, but rather in terms of folk psychology—and design a strategy to correct the blind spots we find, the biases we identify. That practice undeniably occurs, and undeniably works to a certain extent. We can improve our self-control, and this is a morally significant fact about the competence of normal adults—the only people whom we hold fully (but not “absolutely” or “deeply”) responsible.  Remove the word “inexplicably” from exhibit A and the rhetorical question has a perfectly good answer: in many cases our freedom is an achievement, for which we are partly responsible. (Yes, luck plays a role, but so does skill; we are not just lucky. See Dennett, 1984.)

(B) The problem for compatibilism runs deeper, however—for where is the freedom in wanting what one wants without any internal conflict whatsoever?

To answer a rhetorical question with another: so long as one can get what one wants so wholeheartedly, what could be better?  What could be more free than that?  Any realistic, reasonable account of free will acknowledges that we are stuck with some of our desires: for food and comfort and love and absence of pain—and the freedom to do what we want.  We can’t not want these, or if we somehow succeed in getting ourselves into such a sorry state, we are pathological. These are the healthy, normal, sound, wise desires on which all others must rest.  So banish the fantasy of any account of free will that is screwed so tight it demands that we aren’t free unless all our desires and meta-desires and meta-meta-desires are optional, choosable.  Such “perfect” freedom is, of course, an incoherent idea, and if Harris is arguing against it, he is not finding a “deep” problem with compatibilism but a shallow problem with his incompatibilist vision of free will; he has taken on a straw man, and the straw man is beating him.

(C) Where is the freedom in being perfectly satisfied with your thoughts, intentions, and subsequent actions when they are the product of prior events that you had absolutely no hand in creating?

Not only has he not shown that you had absolutely no hand in creating those prior events; the claim is false, as just noted.  Once you stop thinking of free will as a magical metaphysical endowment and start thinking of it as an explicable achievement that individual human beings normally accomplish (very much aided by the societies in which they live), much as they learn to speak and read and write, this rhetorical question falls flat.  Infants don’t have free will; normal adults do. Yes, those of us who have free will are lucky to have free will (we’re lucky to be human beings, we’re lucky to be alive), but our free will is not just a given; it is something we are obliged to protect and nurture, with help from our families and friends and the societies in which we live.

Harris allows himself one more rhetorical question on page 19, and this one he emphatically answers:

(D) Am I free to do that which does not occur to me to do?  Of course not.

Again, really? You’re playing bridge and trying to decide whether or not to win the trick in front of you. You decide to play your ace, winning the trick. Were you free to play a low card instead? It didn’t occur to you (it should have, but you acted rather thoughtlessly, as your partner soon informs you).  Were you free to play your six instead? In some sense.  We wouldn’t play games if there weren’t opportunities in them to make one choice or another.  But, comes the familiar rejoinder, if determinism is true and we rewound the tape of time and put you in exactly the same physical state, you’d ignore the six of clubs again. True, but so what? It does not show that you are not the agent you think you are.  Contrast your competence at this moment with the “competence” of a robotic bridge-playing doll that always plays its highest card in the suit, no matter what the circumstances.  It wasn’t free to choose the six, because it would play the ace whatever the circumstances were, whereas if it occurred to you to play the six, you could do it, depending on the circumstances. Freedom involves the ability to have one’s choices influenced by changes in the world that matter under the circumstances.  Not a perfect ability, but a reliable ability. If you are such a terrible bridge player that you can never see the virtue in ducking a trick, playing less than the highest card in your hand, then your free will at the bridge table is seriously abridged: you are missing the opportunities that make bridge an interesting game. If determinism is true, are these real opportunities?  Yes, as real as an opportunity could be: thanks to your perceptual apparatus, your memory, and the well-lit environment, you are caused/determined to evaluate the situation as one that calls for playing the six, and you play the six.
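The contrast between the robotic doll and the responsive player is, at bottom, a contrast between two policies, and a minimal sketch can capture it (illustrative only; the numeric card encoding and the “should duck” flag are invented for the example):

```python
# Rigid policy: plays the highest card no matter what, like the doll.
def rigid_policy(hand, circumstances):
    return max(hand)

# Responsive policy: lets features of the situation make a difference.
def responsive_policy(hand, circumstances):
    if circumstances.get("should_duck"):
        return min(hand)    # duck the trick with the low card
    return max(hand)        # otherwise win it

hand = [6, 14]              # a six and an ace (encoded as 14)

for situation in ({"should_duck": False}, {"should_duck": True}):
    print(situation,
          "rigid:", rigid_policy(hand, situation),
          "responsive:", responsive_policy(hand, situation))
```

The rigid policy gives the same output under every variation; the responsive one is reliably influenced by changes that matter, which is the kind of sensitivity the passage above says freedom involves.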

Turn to page 20 and get one more rhetorical question:

(E) And there is no way I can influence my desires—for what tools of influence would I use?  Other desires?

Yes, for starters.  Once again, Harris is ignoring a large and distinguished literature that defends this claim.  We use the same tools to influence our own desires as we use to influence other people’s desires. I doubt that he is denying that we ever influence other people’s desires.  His book is apparently an attempt to influence the beliefs and desires of his readers, and it seems to have worked rather better than I would like. His book also seems to have influenced his own beliefs and desires: writing it has blinded him to alternatives that he really ought to have considered.  So his obliviousness is something for which he himself is partly responsible, having labored to create a mindset that sees compatibilism as deliberately obtuse.

When Harris turns to a consideration of my brand of compatibilism, he quotes at length from a nice summary of it by Tom Clark, notes that I have approved of that summary, and then says that it perfectly articulates the difference between my view and his own. And this is his rebuttal:

As I have said, I think compatibilists like Dennett change the subject:  They trade a psychological fact—the subjective experience of being a conscious agent—for a conceptual understanding of ourselves as persons. This is a bait and switch. The psychological truth is that people feel identical to a certain channel of information in their conscious minds.  Dennett is simply asserting that we are more than this—we are coterminous with everything that goes on inside our bodies, whether we are conscious of it or not.  This is like saying we are made of stardust—which we are. But we don’t feel like stardust. And the knowledge that we are stardust is not driving our moral intuitions or our system of criminal justice. (p. 23)

I have thought long and hard about this passage, and I am still not sure I understand it, since it seems to be at war with itself. Harris apparently thinks you see yourself as a conscious witness, perhaps immaterial—an immortal soul, perhaps—that is distinct from (the rest of?) your brain. He seems to be saying that this folk understanding people have of what they are identical to must be taken as a “psychological fact” that anchors any discussion of free will.  And then he notes that I claim that this folk understanding is just plain wrong and try to replace it with a more scientifically sound version of what a conscious person is.  Why is it “bait and switch” if I claim to improve on the folk version of personhood before showing how it allows for free will?  He can’t have it both ways. He is certainly claiming in his book that the dualism that is uncritically endorsed by many, maybe most, people is incoherent, and he is right—I’ve argued the same for decades.  But then how can he object that I want to replace the folk conception of free will based on that nonsense with a better one?  The fact that the folk don’t feel as if they are larger than their imagined Cartesian souls doesn’t count against my account, since I am proposing to correct the mistake manifest in that “psychological fact” (if it is one). And if Harris thinks that it is this folk notion of free will that “drives our moral intuitions and our legal system,” he should tackle the large literature that says otherwise (starting with, e.g., Stephen Morse[6]).

One more rhetorical question:

(G) How can we be ‘free’ as conscious agents if everything that we consciously intend is caused by events in our brain that we do not intend and of which we are entirely unaware? We can’t. (pp. 25-26)

Let’s take this apart, separating its elements. First let’s try dropping the last clause:  “of which we are entirely unaware”.

How can we be ‘free’ as conscious agents if everything that we consciously intend is caused by events in our brain that we do not intend?

Well, if the events that cause your intentions are thoughts about what the best course of action probably is, and why it is the right thing to do, then that causation strikes me as the very epitome of freedom: you have the ability to intend exactly what you think to be the best course of action.  When folks lack that ability, when they find they are unable to act intentionally on the courses of action they deem best, all things considered, we say they suffer from weakness of will. An intention that was an apparently causeless orphan, arising for no discernible reason, would hardly be seen as free; it would be viewed as a horrible interloper, as in alien hand syndrome, imposed on the agent from who knows where.

Now let’s examine the other half of Harris’s question:

How can we be “free” as conscious agents if everything that we consciously intend is caused by events in our brain of which we are entirely unaware?

I don’t always have to reflect, consciously, on my reasons for my intentions for them to be both mine and free. When I say “thank you” to somebody who gives me something, it is “force of habit,” and I am entirely unaware of the events in my brain that cause me to say it, but it is nonetheless a good example of a free action. Had I had a reason to override the habit, I would have overridden it. My not doing so tacitly endorses it as an action of mine.  Most of the intentions we frame are like this, to one degree or another: we “instinctively” reach out and pull the pedestrian to safety without time for thinking; we rashly adopt a sarcastic tone when replying to the police officer; we hear the doorbell and jump up to see who’s there.  These are all voluntary actions for which we are normally held responsible if anything hinges on them.  Harris notes that the voluntary/involuntary distinction is a valuable one, but doesn’t consider that it might be part of the foundation of our moral and legal understanding of free will. Why not? Because he is so intent on bashing a caricature doctrine.

He ends his chapter on compatibilism with this:

People feel that they are the authors of their thoughts and actions, and this is the only reason why there seems to be a problem of free will worth talking about. (p. 26)

I can agree with this, if I am allowed to make a small insertion:

People feel that they are the authors of their thoughts and actions, and interpreted uncharitably, their view can be made to appear absurd; taken the best way, however, they can be right; and this is the only reason why there seems to be a problem of free will worth talking about.

One more puzzling assertion:

Thoughts like “What should I get my daughter for her birthday? I know—I’ll take her to a pet store and have her pick out some tropical fish” convey the apparent reality of choices, freely made.  But from a deeper perspective (speaking both objectively and subjectively) thoughts simply arise unauthored and yet author our actions. (p. 53)

What would an authored thought look like, pray tell? And how can unauthored thoughts author our actions? Does Harris mean cause, shape and control our actions? But if an unauthored thought can cause, shape and control something, why can’t a whole person cause, shape and control something?  Probably this was misspeaking on Harris’s part. He should have said that unauthored thoughts are the causes, shapers and controllers—but not the authors—of our actions. Nothing could be an author, not really.  But here again Harris is taking an everyday, folk notion of authorship and inflating it into metaphysical nonsense. If he can be the author of his book, then he can be the author of his thoughts.  If he is not the author of Free Will, he should take his name off the cover, shouldn’t he? But he goes on immediately to say he is the cause of his book, and “If I had not decided to write this book, it wouldn’t have written itself.”

Decisions, intentions, efforts, goals, willpower, etc., are causal states of the brain, leading to specific behaviors, and behaviors lead to outcomes in the world. Human choice, therefore, is as important as fanciers of free will believe. But the next choice you make will come out of the darkness of prior causes that you, the conscious witness of your experience, did not bring into being. (p. 34)

We’ve already seen that the last sentence is false. But notice that if it were true, then it would be hard to see why “human choice is important”—except in the way lightning bolts are important (they can do a lot of damage).  If your choices “come out of the darkness” and you did not bring them into being, then they are like the involuntary effusions of sufferers from Tourette’s Syndrome, who blurt out obscenities and make gestures that are as baffling to them as to others.  In fact we know very well that I can influence your choices, and you can influence my choices, and even your own choices, and that this “bringing into being” of different choices is what makes them morally important. That’s why we exhort and chastise and instruct and praise and encourage and inform others and ourselves.

Harris draws our attention to how hard it can be to change our bad habits, in spite of reading self-help books and many self-admonitions. These experiences, he notes, “are not even slightly suggestive of freedom of the will” (p. 35). True, but then other experiences we have are often very suggestive of free will. I make a promise, I solemnly resolve to keep it, and happily, I do!  I hate grading essays, but recognizing that my grades are due tomorrow, I reluctantly sit down and grind through them.  I decide to drive to Boston and lo and behold, the next thing I know I’m behind the wheel of my car driving to Boston!  If I could almost never do such things I would indeed doubt my own free will, and toy with the sad conclusion that somewhere along the way I had become a helpless victim of my lazy habits and no longer had free will.  Entirely missing from Harris’s account—and it is not a lacuna that can be repaired—is any acknowledgment of the morally important difference between normal people (like you and me and Harris, in all likelihood) and people with serious deficiencies in self-control.  The reason he can’t include this missing element is that his whole case depends in the end on insisting that there really is no morally relevant difference between the raving psychopath and us. We have no more free will than he does. Well, we have more of something than he does, and it is morally important. And it looks very much like what everyday folks often call free will.

Of course you can create a framework in which certain decisions are more likely than others—you can, for instance, purge your house of all sweets, making it very unlikely that you will eat dessert later in the evening—but you cannot know why you were able to submit to such a framework today when you weren’t yesterday. (p. 38)

Here he seems at first to be acknowledging the very thing I said was missing in his account above—the fact that you can take steps to bring about an alteration in your circumstances that makes a difference to your subsequent choices. But notice that his concession is short-lived, because he insists that you are just as in the dark about how your decision to purge your house of all sweets came about. But that is, or may well be, false. You may know exactly what train of thought led you to that policy. “But then you can’t know why that train of thought occurred to you, and moved you then.” No, you can, and often do. Maybe your candy-banishing is the nth-level result of your deciding to decide to decide to decide to decide . . . to do something about your health.  “But since the regress is infinite, you can’t be responsible!”  Nonsense. You can’t be “ultimately responsible” (as Galen Strawson has argued), but so what? You can be partially, largely responsible.

I cannot resist ending this catalogue of mistakes with the one that I find most glaring: the cover of Harris’s little book, which shows marionette strings hanging down.  The point, which he reiterates several times in the book, is that the prior causes (going back to the Big Bang, if you like) that determine your choices are like the puppeteer who determines the puppet’s every action, every “decision.”  This analogy enables him to get off a zinger:

Compatibilism amounts to nothing more than an assertion of the following creed: A puppet is free as long as he loves his strings. (p. 20)

This is in no way supported by anything in his discussion of compatibilism. Somehow Harris has missed one of the deepest points made by John von Neumann and Oskar Morgenstern in the introduction to their ground-breaking 1953 book, Theory of Games and Economic Behavior (Princeton UP).  Whereas Robinson Crusoe alone on his desert island can get by with probabilities and expected utility theory, as soon as there is a second agent to deal with, he needs to worry about feedback, secrecy and the intentions of the other agent or agents (what I have called intentional systems). For this he needs game theory. There is a fundamental difference between an environment with no competing agents and an environment populated with would-be manipulators.[7] The manifold of causes that determine our choices only intermittently includes other agents, and when they are around they do indeed represent a challenge to our free will, since they may well try to read our minds and covertly influence our beliefs, but the environment in general is not such an agent, and hence is no puppeteer.  When sunlight bouncing off a ripe apple causes me to decide to reach up and pick it off the tree, I am not being controlled by that master puppeteer, Captain Worldaroundme.  I am controlling myself, thanks to the information I garner from the world around me.  Please, Sam, don’t feed the bugbears. (Dennett, 1984)
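The point from Von Neumann and Morgenstern can be sketched in a few lines (an illustration of the general idea only; the actions and payoff numbers are invented). A lone maximizer faces an ordinary maximum problem; add a second agent and there is no longer a single column of numbers to maximize over, because the best reply shifts with the other agent's choice:

```python
# Crusoe alone: an ordinary maximum problem over his own options.
utility = {"fish": 3, "gather": 2, "rest": 1}
print("solo optimum:", max(utility, key=utility.get))

# A second agent (a "stag hunt"): my payoff depends on BOTH choices,
# keyed as (my_action, their_action).
payoff = {
    ("stag", "stag"): 4, ("stag", "hare"): 0,
    ("hare", "stag"): 3, ("hare", "hare"): 3,
}

# My best reply now varies with what the other agent does, so I must
# model that agent -- this is the step that calls for game theory.
for theirs in ("stag", "hare"):
    mine = max(("stag", "hare"), key=lambda a: payoff[(a, theirs)])
    print(f"if the other hunts {theirs}, my best reply is {mine}")
```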

Harris half recognizes this when later in the book he raises puppets one more time:

It is one thing to bicker with your wife because you are in a bad mood; it is another to realize that your mood and behavior have been caused by low blood sugar.  This understanding reveals you to be a biochemical puppet, of course, but it also allows you to grab hold of one of your strings.  A bite of food may be all that your personality requires.  Getting behind our conscious thoughts and feelings can allow us to steer a more intelligent course through our lives (while knowing, of course, that we are ultimately being steered). (p. 47)

So unlike the grumpy child (or moody bear), we intelligent human adults can “grab hold of one of our strings”.  But then if our bodies are the puppets and we are the puppeteers, we can control our bodies, and thereby our choices, and hence can be held responsible—really but not Ultimately responsible—for our actions and our characters. We are not immaterial souls but embodied rational agents, determined (in two senses) to do what is right, most of the time, and ready to be held responsible for our deeds.

Harris, like the other scientists who have recently mounted a campaign to convince the world that free will is an illusion, has a laudable motive: to launder the ancient stain of Sin and Guilt out of our culture, and abolish the cruel and all too usual punishments that we zestfully mete out to the Guilty. As they point out, our zealous search for “justice” is often little more than our instinctual yearning for retaliation dressed up to look respectable.  The result, especially in the United States, is a barbaric system of imprisonment—to say nothing of capital punishment—that should make all citizens ashamed.  By all means, let’s join hands and reform the legal system, reduce its excesses and restore a measure of dignity—and freedom!—to those whom the state must punish. But the idea that all punishment is, in the end, unjustifiable and should be abolished because nobody is ever really responsible, because nobody has “real” free will, is not only not supported by science or philosophical argument; it is blind to the chilling lessons of the not so distant past. Do we want to medicalize all violators of the laws, giving them indefinitely large amounts of involuntary “therapy” in “asylums” (the poor dears, they aren’t responsible, but for the good of the society we have to institutionalize them)?  I hope not.  But then we need to recognize the powerful (consequentialist)[8] arguments for maintaining a system of punishment (and reward).  Punishment can be fair, punishment can be justified, and in fact, our societies could not manage without it.

This discussion of punishment versus medicalization may seem irrelevant to Harris’s book, and an unfair criticism, since he himself barely alludes to it, and offers no analysis of its possible justification, but that is a problem for him. He blandly concedes we will—and should—go on holding some people responsible but then neglects to say what that involves. Punishment and reward?  If not, what does he mean? If so, how does he propose to regulate and justify it? I submit that if he had attempted to address these questions he would have ended up with something like this:

Those eligible for punishment and reward are those with the general abilities to respond to reasons (warnings, threats, promises) rationally.  Real differences in these abilities are empirically discernible, explicable, and morally relevant.  Such abilities can arise and persist in a deterministic world, and they are the basis for a justifiable policy of reward and punishment, which brings society many benefits—indeed makes society possible.  (Those who lack one or another of the abilities that constitute this moral competence are often said, by everyday folk, to lack free will, and this fact is at the heart of compatibilism.)

If you think that the fact that incompatibilist free will is an illusion demonstrates that no punishment can ever be truly deserved, think again. It may help to consider all these issues in the context of a simpler phenomenon: sports. In basketball there is the distinction between ordinary fouls and flagrant fouls, and in soccer there is the distinction between yellow cards and red cards, to list just two examples. Are these distinctions fair?  Justified?  Should Harris be encouraged to argue that there is no real difference between the dirty player and the rest (and besides, the dirty player isn’t responsible for being a dirty player; just look at his upbringing!)?  Everybody who plays games must recognize that games without strictly enforced rules are not worth playing, and the rules that work best do not make allowances for differences in heritage, training, or innate skill.  So it is in society generally: we are all considered equal under the law, presumed to be responsible until and unless we prove to have some definite defect or infirmity that robs us of our free will, as ordinarily understood.


NOTES

  1. The random swerve or clinamen is an idea going back to Lucretius more than two thousand years ago, and has been seductive ever since.
  2. Eddy Nahmias, Stephen Morris, Thomas Nadelhoffer, and Jason Turner, 2005, “Surveying Freedom: Folk Intuitions about Free Will and Moral Responsibility,” Philosophical Psychology, 18, pp. 561-584.
  3. Given the ocean of evidence that people assess human abilities, including their abilities to do or choose otherwise, by methods that make no attempt to clamp conditions “precisely as they were,” overlooking this prospect has required nearly superhuman self-blinkering by incompatibilists. I consider Austin’s mistake to be the central core of the ongoing confusion about free will; if you look at the large and intricate philosophical literature about incompatibilism, you will see that just about everyone assumes, without argument, that it is not a mistake.  Without that assumption the interminable discussions of van Inwagen’s “Consequence Argument” could not be formulated, for instance. The excellent article on “Arguments for Incompatibilism” in the online Stanford Encyclopedia of Philosophy cites Austin’s essay but does not discuss this question.
  4. Here more than anywhere else we can be grateful to Harris for his forthrightness, since the distinguished scientists who declare that free will is an illusion almost never have much if anything to say about how they think people should treat each other in the wake of their discovery. If they did, they would land in the difficulties Harris encounters. If nobody is responsible, not really, then not only should the prisons be emptied, but no contract is valid, mortgages should be abolished, and we can never hold anybody to account for anything they do.  Preserving “law and order” without a concept of real responsibility is a daunting task. Harris at least recognizes his—dare I say?—responsibility to deal with this challenge.
  5. “I’m writing a book on magic,” I explain, and I’m asked, “Real magic?” By real magic people mean miracles, thaumaturgical acts, and supernatural powers.  “No,” I answer: “Conjuring tricks, not real magic.” Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic.  (p. 425) – Lee Siegel, Net of Magic
  6. Morse, “The Non-Problem of Free Will in Forensic Psychiatry and Psychology,” Behavioral Sciences and the Law, Vol. 25 (2007), pp. 203-220; Morse, “Determinism and the Death of Folk Psychology: Two Challenges to Responsibility from Neuroscience,” Minnesota Journal of Law, Science, and Technology, Vol. 9 (2008), pp. 1-36, at pp. 3-13.
  7. 2.2.2. Crusoe is given certain physical data (wants and commodities) and his task is to combine and apply them in such a fashion as to obtain a maximum resulting satisfaction.  There can be no doubt that he controls exclusively all the variables upon which this result depends—say the allotting of resources, the determination of the uses of the same commodity for different wants, etc.  Thus Crusoe faces an ordinary maximum problem, the difficulties of which are of a purely technical—and not conceptual—nature, as pointed out. 2.2.3. Consider now a participant in a social exchange economy. His problem has, of course, many elements in common with a maximum problem.  But it also contains some, very essential, elements of an entirely different nature.  He too tries to obtain an optimum result.  But in order to achieve this, he must enter into relations of exchange with others.  If two or more persons exchange goods with each other, then the result for each one will depend in general not merely upon his own actions but on those of the others as well.  Thus each participant attempts to maximize a function (his above-mentioned “result”) of which he does not control all variables. This is certainly no maximum problem, but a peculiar and disconcerting mixture of several different maximum problems.  Every participant is guided by another principle and neither determines all variables which affect his interest. This kind of problem is nowhere dealt with in classical mathematics. (Von Neumann and Morgenstern, pp. 10-11)
  8. Apparently some thinkers have the idea that any justification of punishment is (by definition?) retributive.  But this is a mistake; there are consequentialist justifications of the “retributive” ideas of just deserts and the mens rea requirement for guilt, for instance.  Consider how one can defend the existence of the red card/yellow card distinction in soccer on purely consequentialist grounds.
Author: "--" Tags: "Free Will, Publishing, Neuroscience, Eth..."
Send by mail Print  Save  Delicious 
Date: Tuesday, 14 Jan 2014 15:28


(Photo via Katinka Matson)


From Edge.org:

Science advances by discovering new things and developing new ideas. Few truly new ideas are developed without abandoning old ones first. As theoretical physicist Max Planck (1858-1947) noted, “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.” In other words, science advances by a series of funerals. Why wait that long?

WHAT SCIENTIFIC IDEA IS READY FOR RETIREMENT?


Ideas change, and the times we live in change. Perhaps the biggest change today is the rate of change. What established scientific idea is ready to be moved aside so that science can advance?


*  *  *



Our Narrow Definition of “Science”

Search your mind, or pay attention to the conversations you have with other people, and you will discover that there are no real boundaries between science and philosophy—or between those disciplines and any other that attempts to make valid claims about the world on the basis of evidence and logic. When such claims and their methods of verification admit of experiment and/or mathematical description, we tend to say that our concerns are “scientific”; when they relate to matters more abstract, or to the consistency of our thinking itself, we often say that we are being “philosophical”; when we merely want to know how people behaved in the past, we dub our interests “historical” or “journalistic”; and when a person’s commitment to evidence and logic grows dangerously thin or simply snaps under the burden of fear, wishful thinking, tribalism, or ecstasy, we recognize that he is being “religious.”

The boundaries between true intellectual disciplines are currently enforced by little more than university budgets and architecture. Is the Shroud of Turin a medieval forgery? This is a question of history, of course, and of archaeology, but the techniques of radiocarbon dating make it a question of chemistry and physics as well. The real distinction we should care about—the observation of which is the sine qua non of the scientific attitude—is between demanding good reasons for what one believes and being satisfied with bad ones.

The scientific attitude can handle whatever happens to be the case. Indeed, if the evidence for the inerrancy of the Bible and the resurrection of Jesus Christ were good, one could embrace the doctrine of fundamentalist Christianity scientifically. The problem, of course, is that the evidence is either terrible or nonexistent—hence the partition we have erected (in practice, never in principle) between science and religion.

Confusion on this point has spawned many strange ideas about the nature of human knowledge and the limits of “science.” People who fear the encroachment of the scientific attitude—especially those who insist upon the dignity of believing in one or another Iron Age god—will often make derogatory use of words such as materialism, neo-Darwinism, and reductionism, as if those doctrines had some necessary connection to science itself.

There are, of course, good reasons for scientists to be materialist, neo-Darwinian, and reductionist. However, science entails none of those commitments, nor do they entail one another. If there were evidence for dualism (immaterial souls, reincarnation), one could be a scientist without being a materialist. As it happens, the evidence here is extraordinarily thin, so virtually all scientists are materialists of some sort. If there were evidence against evolution by natural selection, one could be a scientific materialist without being a neo-Darwinist. But as it happens, the general framework put forward by Darwin is as well established as any other in science. If there were evidence that complex systems produced phenomena that cannot be understood in terms of their constituent parts, it would be possible to be a neo-Darwinist without being a reductionist. For all practical purposes, that is where most scientists find themselves, because every branch of science beyond physics must resort to concepts that cannot be understood merely in terms of particles and fields. Many of us have had “philosophical” debates about what to make of this explanatory impasse. Does the fact that we cannot predict the behavior of chickens or fledgling democracies on the basis of quantum mechanics mean that those higher-level phenomena are something other than their underlying physics? I would vote “no” here, but that doesn’t mean I envision a time when we will use only the nouns and verbs of physics to describe the world. 

But even if one thinks that the human mind is entirely the product of physics, the reality of consciousness becomes no less wondrous, and the difference between happiness and suffering no less important. Nor does such a view suggest that we will ever find the emergence of mind from matter fully intelligible; consciousness may always seem like a miracle. In philosophical circles, this is known as “the hard problem of consciousness”—some of us agree that this problem exists, some of us don’t. Should consciousness prove conceptually irreducible, remaining the mysterious ground for all we can conceivably experience or value, the rest of the scientific worldview would remain perfectly intact.

The remedy for all this confusion is simple: We must abandon the idea that science is distinct from the rest of human rationality. When you are adhering to the highest standards of logic and evidence, you are thinking scientifically. And when you’re not, you’re not. 

*  *  *

Read 170 other responses on Edge.org.

Author: "--" Tags: "Consciousness, Neuroscience, Philosophy,..."
Send by mail Print  Save  Delicious 
Date: Saturday, 14 Dec 2013 18:16

In 2010, John Brockman and the Edge Foundation held a conference entitled “The New Science of Morality.” I attended along with Roy Baumeister, Paul Bloom, Joshua D. Greene, Jonathan Haidt, Marc Hauser, Joshua Knobe, Elizabeth Phelps, and David Pizarro. Some of our conversations have now been published in a book (along with many interesting essays) entitled Thinking: The New Science of Decision-Making, Problem-Solving, and Prediction.

John Brockman and HarperCollins have given me permission to reprint my edited remarks here.

*  *  *

What I intended to say today has been pushed around a little bit by what has already been said and by a couple of sidebar conversations. That is as it should be, no doubt. But if my remarks are less linear than you would hope, blame that—and the jet lag.

I think we should differentiate three projects that seem to me to be easily conflated, but which are distinct and independently worthy endeavors:

The first project is to understand what people do in the name of “morality.” We can look at the world, witnessing all of the diverse behaviors, rules, cultural artifacts, and morally salient emotions like empathy and disgust, and we can study how these things play out in human communities, both in our time and throughout history. We can examine all these phenomena in as nonjudgmental a way as possible and seek to understand them. We can understand them in evolutionary terms, and we can understand them in psychological and neurobiological terms, as they arise in the present. And we can call the resulting data and the entire effort a “science of morality.” This would be a purely descriptive science of the sort that I hear Jonathan Haidt advocating.

For most scientists, this project seems to exhaust all the legitimate points of contact between science and morality—that is, between science and judgments of good and evil and right and wrong. But I think there are two other projects that we could concern ourselves with, which are arguably more important.

The second project would be to actually get clearer about what we mean, and should mean, by the term “morality,” understanding how it relates to human well-being altogether, and to use this new discipline to think more intelligently about how to maximize human well-being. Of course, philosophers may think that this begs some of the important questions, and I’ll get back to that. But I think this is a distinct project, and it’s not purely descriptive. It’s a normative project. The question is, how can we think about moral truth in the context of science?

The third project is a project of persuasion: How can we persuade all of the people who are committed to silly and harmful things in the name of “morality” to change their commitments and to lead better lives? I think that this third project is actually the most important project facing humanity at this point in time. It subsumes everything else we could care about—from arresting climate change, to stopping nuclear proliferation, to curing cancer, to saving the whales. Any effort that requires that we collectively get our priorities straight and marshal our time and resources would fall within the scope of this project. To build a viable global civilization we must begin to converge on the same economic, political, and environmental goals.

Obviously the project of moral persuasion is very difficult—but it strikes me as especially difficult if you can’t figure out in what sense anyone could ever be right and wrong about questions of morality or about questions of human values. Understanding right and wrong in universal terms is Project Two, and that’s what I’m focused on.

There are impediments to thinking about Project Two: the main one being that most right-thinking, well-educated, and well-intentioned people—certainly most scientists and public intellectuals, and I would guess, most journalists—have been convinced that something in the last 200 years of intellectual progress has made it impossible to actually speak about “moral truth.” Not because human experience is so difficult to study or the brain too complex, but because there is thought to be no intellectual basis from which to say that anyone is ever right or wrong about questions of good and evil.

My aim is to undermine this assumption, which is now the received opinion in science and philosophy. I think it is based on several fallacies and double standards and, frankly, on some bad philosophy. The first thing I should point out is that, apart from being untrue, this view has consequences.

In 1947, when the United Nations was attempting to formulate a universal declaration of human rights, the American Anthropological Association stepped forward and said that it couldn’t be done—for this would be to merely foist one provincial notion of human rights on the rest of humanity. Any notion of human rights is the product of culture, and declaring a universal conception of human rights is an intellectually illegitimate thing to do. This was the best our social sciences could do with the crematoria of Auschwitz still smoking.

But, of course, it has long been obvious that we need to converge, as a global civilization, in our beliefs about how we should treat one another. For this, we need some universal conception of right and wrong. So in addition to just not being true, I think skepticism about moral truth actually has consequences that we really should worry about.

Definitions matter. And in science we are always in the business of framing conversations and making definitions. There is nothing about this process that condemns us to epistemological relativism or that nullifies truth claims. We define “physics” as, loosely speaking, our best effort to understand the behavior of matter and energy in the universe. The discipline is defined with respect to the goal of understanding how matter behaves.

Of course, anyone is free to define “physics” in some other way. A Creationist physicist could come into this room and say, “Well, that’s not my definition of physics. My physics is designed to match the Book of Genesis.” But we are free to respond to such a person by saying, “You know, you really don’t belong at this conference. That’s not ‘physics’ as we are interested in it. You’re using the word differently. You’re not playing our language game.” Such a gesture of exclusion is both legitimate and necessary. The fact that the discourse of physics is not sufficient to silence such a person, the fact that he cannot be brought into our conversation and subdued on our terms, does not undermine physics as a domain of objective truth.

And yet, on the subject of morality, we seem to think that the possibility of differing opinions is a deal breaker. The fact that someone can come forward and say that his morality has nothing to do with human flourishing—that it depends upon following shariah law, for instance—the very fact that such a position can be articulated proves that there’s no such thing as moral truth. Morality, therefore, must be a human invention. But this is a fallacy.

We have an intuitive physics, but much of our intuitive physics is wrong with respect to the goal of understanding how matter and energy behave in this universe. I am saying that we also have an intuitive morality, and much of our intuitive morality may be wrong with respect to the goal of maximizing human flourishing—and with reference to the facts that govern the well-being of conscious creatures, generally.

So I will argue, briefly, that the only sphere of legitimate moral concern is the well-being of conscious creatures. I’ll say a few words in defense of this assertion, but I think the idea that it has to be defended is the product of several fallacies and double standards that we’re not noticing. I don’t know that I will have time to expose all of them, but I’ll mention a few.

Thus far, I’ve introduced two things: the concept of consciousness and the concept of well-being. I am claiming that consciousness is the only context in which we can talk about morality and human values. Why is consciousness not an arbitrary starting point? Well, what’s the alternative? Just imagine someone coming forward claiming to have some other source of value that has nothing to do with the actual or potential experience of conscious beings. Whatever this is, it must be something that cannot affect the experience of anything in the universe, in this life or in any other.

If you put this imagined source of value in a box, I think what you would have in that box would be—by definition—the least interesting thing in the universe. It would be—again, by definition—something that cannot be cared about. Any other source of value will have some relationship to the experience of conscious beings. So I don’t think consciousness is an arbitrary starting point. When we’re talking about right and wrong, and good and evil, and about outcomes that matter, we are necessarily talking about actual or potential changes in conscious experience.

I would further add that the concept of “well-being” captures everything we can care about in the moral sphere. The challenge is to have a definition of well-being that is truly open-ended and can absorb everything we care about. This is why I tend not to call myself a “consequentialist” or a “utilitarian,” because traditionally, these positions have bounded the notion of consequences in such a way as to make them seem very brittle and exclusive of other concerns—producing a kind of body count calculus that only someone with Asperger’s could adopt.

Consider the Trolley Problem: If there just is, in fact, a difference between pushing a person onto the tracks and flipping a switch—perhaps in terms of the emotional consequences of performing these actions—well, then this difference has to be taken into account. Or consider Peter Singer’s Shallow Pond problem: We all know that it would take a very different kind of person to walk past a child drowning in a shallow pond, out of concern for getting his suit wet, than it takes to ignore an appeal from UNICEF. It says much more about you if you can walk past that pond. If we were all this sort of person, there would be terrible ramifications as far as the eye can see. It seems to me, therefore, that the challenge is to get clear about what the actual consequences of an action are, about what changes in human experience are possible, and about which changes matter.

In thinking about a universal framework for morality, I now think in terms of what I call a “moral landscape.” Perhaps there is a place in hell for anyone who would repurpose a cliché in this way, but the phrase “the moral landscape” actually captures what I’m after: I’m envisioning a space of peaks and valleys, where the peaks correspond to the heights of flourishing possible for any conscious system, and the valleys correspond to the deepest depths of misery.

To speak specifically of human beings for the moment: any change that can effect a change in human consciousness would lead to a translation across the moral landscape. So changes to our genome, and changes to our economic systems—and changes occurring on any level in between that can affect human well-being for good or for ill—would translate into movements within this space of possible human experience.

A few interesting things drop out of this model: Clearly, it is possible, or even likely, that there are many peaks on the moral landscape. To speak specifically of human communities: perhaps there is a way to maximize human flourishing in which we follow Peter Singer as far as we can go—somehow training ourselves to be truly dispassionate toward friends and family, giving our children’s welfare no more weight than the welfare of other children. And perhaps there is another peak where we remain biased toward our own children, within certain limits, while correcting for this bias by creating a social system that is, in fact, fair. Perhaps there are a thousand different ways to tune the variable of selfishness versus altruism so as to land us on a peak of the moral landscape.

However, there will be many more ways to not be on a peak. And it is clearly possible to be wrong about how to move from our present position to the nearest available peak. This follows directly from the observation that whatever conscious experiences are possible for us are a product of the way the universe is. Our conscious experience arises out of the laws of nature, the states of our brain, and our entanglement with the world. Therefore, there are right and wrong answers to the question of how to maximize human flourishing in any moment.
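
For readers who find the metaphor easier to grasp as a model, here is a minimal, purely illustrative sketch. Nothing in the argument above proposes code; the one-dimensional “landscape” and every number in it are invented for the example. The point is only this: on a surface with multiple unequal peaks, simple uphill movement from different starting points ends at different local maxima, there are vastly more non-peak states than peaks, and whether any single step is an improvement remains a matter of fact throughout.

```python
import math

# A toy "well-being" function over a one-dimensional space of possible
# states. The shape is invented purely for illustration: it has two
# separate, unequal peaks and a vast number of non-peak states.
def well_being(x: float) -> float:
    return math.exp(-(x - 1.0) ** 2) + 0.8 * math.exp(-(x + 2.0) ** 2)

def climb(x: float, step: float = 0.01, iters: int = 10_000) -> float:
    """Greedy hill-climbing: accept a small move only if it increases
    well-being. Different starting points reach different local peaks."""
    for _ in range(iters):
        for candidate in (x + step, x - step):
            if well_being(candidate) > well_being(x):
                x = candidate
                break
    return x

if __name__ == "__main__":
    for start in (-4.0, 0.0, 3.0):
        peak = climb(start)
        print(f"start={start:+.1f} -> peak near x={peak:+.2f}, "
              f"well-being={well_being(peak):.3f}")
    # Two facts drop out: (1) there can be multiple, unequal peaks;
    # (2) whether any single move is an improvement is still an
    # objective question about the function's values.
```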

This becomes incredibly easy to see when we imagine there being only two people on earth: we can call them Adam and Eve. Ask yourself, are there right and wrong answers to the question of how Adam and Eve might maximize their well-being? Clearly there are. Wrong answer number one: they can smash each other in the face with a large rock. This will not be the best strategy to maximize their well-being.

Of course, there are zero-sum games they could play. And yes, they could be psychopaths who might utterly fail to collaborate. But, clearly, the best responses to their circumstance will not be zero-sum. The prospects of their flourishing and finding deeper and more durable sources of satisfaction will only be exposed by some form of cooperation. And all the worries that people normally bring to these discussions—like deontological principles or a Rawlsian concern about fairness—can be considered in the context of our asking how Adam and Eve can navigate the space of possible experiences so as to find a genuine peak of human flourishing, regardless of whether it is the only peak. Once again, multiple equivalent but incompatible peaks would still allow for a space in which there are right and wrong answers to moral questions.

One thing we must not get confused about is the difference between answers in practice and answers in principle. Needless to say, fully understanding the possible range of experiences available to Adam and Eve represents a fantastically complicated problem. And it gets more complicated when we add another 7 billion people to the experiment. But I would argue that it’s not a different problem; it just gets more complicated.

By analogy, consider economics: Is economics a science yet? Apparently not, judging from the last few years. Maybe economics will never get better than it is now. Perhaps we’ll be surprised every decade or so by something terrible, and we’ll be forced to concede that we’re blinded by the complexity of our situation. But to say that it is difficult or impossible to answer certain problems in practice does not even slightly suggest that there are no right and wrong answers to these problems in principle.

The complexity of economics would never tempt us to say that there are no right and wrong ways to design economic systems, or to respond to financial crises. Nobody will ever say that it’s a form of bigotry to criticize another country’s response to a banking failure. Just imagine how terrifying it would be if the smartest people around all more or less agreed that we had to be nonjudgmental about everyone’s view of economics and about every possible response to a global economic crisis.

And yet that is exactly where we stand as an intellectual community on the most important questions in human life. I don’t think you have enjoyed the life of the mind until you have witnessed a philosopher or scientist talking about the “contextual legitimacy” of the burka, or of female genital excision, or any of these other barbaric practices that we know cause needless human misery. We have convinced ourselves that somehow science is by definition a value-free space and that we can’t make value judgments about beliefs and practices that needlessly derail our attempts to build happy and sane societies.

The truth is, science is not value-free. Good science is the product of our valuing evidence, logical consistency, parsimony, and other intellectual virtues. And if you don’t value those things, you can’t participate in the scientific conversation. I’m saying we need not worry about the people who don’t value human flourishing, or who say they don’t. We need not listen to people who come to the table saying, “You know, we want to cut the heads off adulterers at half-time at our soccer games because we have a book dictated by the Creator of the universe which says we should.” In response, we are free to say, “Well, you appear to be confused about everything. Your ‘physics’ isn’t physics, and your ‘morality’ isn’t morality.” These are equivalent moves, intellectually speaking. They are born of the same entanglement with real facts about the way the universe is. In terms of morality, our conversation can proceed with reference to facts about the changing experiences of conscious creatures. It seems to me just as legitimate, scientifically, to define “morality” in this way as it is to define “physics” in terms of the behavior of matter and energy. But most people engaged in the scientific study of morality don’t seem to realize this.

From the publisher:
• Daniel Kahneman on the power (and pitfalls) of human intuition and “unconscious” thinking
• Daniel Gilbert on desire, prediction, and why getting what we want doesn’t always make us happy
• Nassim Nicholas Taleb on the limitations of statistics in guiding decision-making
• Vilayanur Ramachandran on the scientific underpinnings of human nature
• Simon Baron-Cohen on the startling effects of testosterone on the brain
• Daniel C. Dennett on decoding the architecture of the “normal” human mind
• Sarah-Jayne Blakemore on mental disorders and the crucial developmental phase of adolescence
• Jonathan Haidt, Sam Harris, and Roy Baumeister on the science of morality, ethics, and the emerging synthesis of evolutionary and biological thinking
• Gerd Gigerenzer on rationality and what informs our choices

Author: "--" Tags: "Neuroscience, Ethics, Philosophy,"
Date: Saturday, 30 Nov 2013 03:20



Peter Boghossian is a full-time faculty member in the philosophy department at Portland State University. He is also a national speaker for the Center for Inquiry, the Secular Student Alliance, and the Richard Dawkins Foundation for Reason and Science.

Peter was kind enough to answer a few questions about his new book, A Manual for Creating Atheists.


*  *  *


1. What was your goal in writing A Manual for Creating Atheists?

My primary goal was to give readers the tools to talk people out of faith and into reason.

2. How do you help readers accomplish this?

Almost everyone can relate to having had conversations with friends, family, or coworkers that leave you shaking your head, wondering how in the world they can believe what they believe—conversations in which they fully and uniformly dismiss every fact and piece of evidence presented to them. So the core piece of advice I give may at first sound counterintuitive, but it is simple: When speaking with people who hold beliefs based on faith, don’t get into a debate about facts or evidence or even their specific beliefs. Rather, get them to question the manner in which they’ve reached their beliefs—that is, get them to question the value of faith in appraising the world. Once they question the value of faith, all the unevidenced and unreasoned beliefs will inevitably collapse on their own. In that sense, the book is really about getting people to think critically—the atheism part is just a by-product. So my hope is that people won’t just read A Manual for Creating Atheists—they’ll act on it and put it to use. It’s a tool, and like any tool, it does no good unless it’s used.

The book draws from multiple domains of study—philosophy, psychology, cognitive neuroscience, psychotherapy, history, apologetics, even criminal justice and addiction medicine—and focuses principally on research designed to change the behavior of people who don’t think they have a problem and don’t want their behavior changed. This vast body of peer-reviewed literature forms the basis of the book, but the book also stems in large part from my own decades-long work using and teaching these techniques in prisons, colleges, seminaries, and hospitals, and even on the streets, where I’ve honed and revised them, improved upon what’s worked, and discarded what hasn’t. The result is a book that will get the reader quickly up to speed—through step-by-step guides and conversational templates—on all the academically grounded, street-tested techniques and tools required for talking people out of faith and superstition and into reason.

3. What is the most common logical error religious people make in their arguments for the existence of God?

Confirmation bias—although I think it’s less that they’re using a logical fallacy and more that the entire way they’ve conceptualized the problem is fallacious. In other words, they’ve started with their conclusion and reasoned backward from that conclusion. They’ve started with the idea not only that God exists but that a very specific God exists—and they’ve asked themselves how they know this is true. They’ve put their metaphysics before their epistemology.

4. Perhaps you should spell out what you mean by “epistemology” and why you think it’s important.

“Epistemology” basically means how one knows what one knows. In the context of a faith-based intervention, one can also look at epistemology as a belief-forming mechanism.

A key principle in helping people abandon faith and embrace reason is to focus on how one acquires knowledge. As I said, interventions should not target conclusions that someone holds, or specific beliefs, but the processes used to form beliefs. God, for example, is a conclusion arrived at as the result of a faulty epistemology.

For too long we’ve misidentified the problem. We’ve conceptualized it in terms of conclusions people hold, not mechanisms of belief formation they use. I’m advocating that we reconceptualize the problem of faith, God, and religion (and virtually every other instance of insufficiently evidenced belief) in terms of epistemology—that is, in terms of how people come to know what they think they know. 

5. What do you consider to be the core commitments of a healthy epistemology?

1) An understanding that the way to improve the human condition is through reason, rationality, and science. Consequently, the words “reason” and “hope” would be forever wedded, as would the words “faith” and “despair.”
2) The willingness to revise one’s beliefs.
3) Saying “I don’t know” when one doesn’t know.

6. Do you think that the forces of reason are winning?

Yes. I think they are winning, and I think they will prevail.

Up to now, most atheists have simply criticized religion in various ways, but the point is to dispel it. In A Manual for Creating Atheists, Peter Boghossian fills that gap, telling the reader how to become a ‘street epistemologist’ with the skills to attack religion at its weakest point: its reliance on faith rather than evidence. This book is essential for nonbelievers who want to do more than just carp about religion, but want to weaken its odious grasp on the world.
—Jerry Coyne, author of Why Evolution Is True

Dr. Peter Boghossian’s A Manual for Creating Atheists is a precise, passionate, compassionate and brilliantly reasoned work that will illuminate any and all minds capable of openness and curiosity. This is not a bedtime story to help you fall asleep, but a wakeup call that has the best chance of bringing your rational mind back to life.
—Stefan Molyneux, host of Freedomain Radio, the largest and most popular philosophy show on the web

A book so great you can skip it and just read the footnotes. Pure genius.
—Christopher Johnson, co-founder, The Onion


Author: "--" Tags: "Atheism, Book News, Publishing, Religion..."
Date: Monday, 18 Nov 2013 20:15


(Photo via Shutterstock)


Last Christmas, my friends Mark and Jessica spent the morning opening presents with their daughter, Rachel, who had just turned four. After a few hours of excitement, feelings of holiday lethargy and boredom descended on the family—until Mark suddenly had a brilliant idea for how they could have a lot more fun.

Jessica was reading on the couch while Rachel played with her new dolls on the living room carpet.

“Rachel,” Mark said, “I need to tell you something very important… You can’t keep any of these toys. Mommy and I have decided to give them away to the other kids at your school.”

A look of confusion came over his daughter’s face. Mark caught Jessica’s eye. She recognized his intentions at once and was now struggling to contain her glee. She reached for their new video camera.

“You’ve had these toys long enough, don’t you think, Sweetie?”

“No, Daddy! These are my Christmas presents.”

“Not anymore. It’s time to say good-bye…”

Mark began gathering her new toys and putting them in a trash bag.

“No, Daddy!”

“They’re only toys, Rachel. Time to grow up!”

“Not my Polly Pockets! Not my Polly Pockets!”

The look of terror on his daughter’s face was too funny for words. Mark could barely speak. He heard Jessica struggling to stifle a laugh as she stepped around the couch with the camera so that she could capture all the action from the front. Mark knew that if he made eye contact with his wife, he would be lost.

“These Polly Pockets belong to another little girl now… She’s going to love them!”

That did the trick. His daughter couldn’t have produced a louder howl of pain had he smashed her knee with a hammer. Luckily, Jessica caught the moment close-up—her daughter’s hot tears of rage and panic nearly wet the lens.

Mark and Jessica immediately posted the footage of Rachel’s agony to YouTube, where 24 million people have now seen it. This has won them some small measure of fame, which makes them very happy.


No doubt, you will be relieved to learn that Mark, Jessica, and Rachel do not exist. In fact, I am confident that no one I know would treat their child this way. But this leaves me at a loss to explain the popularity of a morally identical stunt engineered for three years running by Jimmy Kimmel:

(Video: Jimmy Kimmel’s annual Halloween candy prank, as embedded in the original post.)

As you watch the above video and listen to the laughter of Kimmel’s studio audience, do your best to see the world from the perspective of these unhappy children. Admittedly, this can be difficult. Despite my feelings of horror over the whole project, a few of these kids made me laugh as well—some of them are just so adorably resilient in the face of parental injustice.  However, I am convinced that anyone who takes pleasure in all this exploited cuteness is morally confused. Yes, we know that these kids will get their candy back in the end. But the kids themselves don’t know it, and the betrayal they feel is heartbreakingly genuine. This is no way to treat children.

It is true that the tears of a child must often be taken less seriously than those of an adult—because they come so freely. To judge from my daughter’s reaction in the moment, getting vaccinated against tetanus is every bit as bad as getting the disease. All parents correct for this distortion of reality—and should—so that they can raise their kids without worrying at every turn that they are heaping further torments upon the damned.  Nevertheless, I am astonished at the percentage of people who find the Kimmel videos morally unproblematic. When I expressed my concern on Twitter, I received the following defenses of Kimmel and these misguided parents:

• People have to learn how to take a joke.
• It’s only candy. Kids need to realize that it doesn’t matter.
• Kids must be prepared for the real world, and pranks like this help prepare them. Now they know to take what authority figures say with a grain of salt.
• They won’t remember any of this when they are older—so there can’t be any lasting harm.

These responses are callous and crazy. A four-year-old cannot possibly learn that candy “doesn’t matter”—in fact, many adults can’t seem to learn this. But he can learn that his parents will lie to him for the purpose of making him miserable. He can also learn that they will find his suffering hilarious and that, at any moment, he might be shamed by those closest to him. True, he may not remember learning these lessons explicitly—unless he happens to watch the footage on YouTube as it surpasses a billion views—but he will, nevertheless, be a person who was raised by parents who played recklessly with his trust. It amazes me that people think the stakes in these videos are low.

My daughter is nearly five, and I can recall lying to her only once. We were looking for nursery rhymes on the Internet and landed on a page that showed a 16th-century woodcut of a person being decapitated. As I was hurriedly scrolling elsewhere, she demanded to know what we had just seen. I said something silly like “That was an old and very impractical form of surgery.” This left her suitably perplexed, and she remains unaware of man’s inhumanity to man to this day. However, I doubt that even this lie was necessary. I just wasn’t thinking very fast on my feet.

As parents, we must maintain our children’s trust—and the easiest way to lose it is by lying to them. Of course, we should communicate the truth in ways they can handle—and this often demands that we suppress details that would be confusing or needlessly disturbing. An important difference between children and (normal) adults is that children are not fully capable of conceiving of (much less looking out for) their real interests. Consequently, it might be necessary in some situations to pacify or motivate them with a lie. In my experience, however, such circumstances almost never arise.

Many people imagine that it is necessary to lie to children to make them feel good about themselves. But this makes little moral or intellectual sense. Especially with young children, the purpose of praise is to encourage them to try new things and enjoy themselves in the process. It isn’t a matter of evaluating their performance by reference to some external standard. The truth communicated by saying “That’s amazing” or “I love it” in response to a child’s drawing is never difficult to find or feel. Of course, things change when one is talking to an adult who wants to know how his work compares with the work of others. Here, we do our friends no favors by lying to them.

Strangely, the most common question I’ve received from readers on the topic of deception has been some version of the following:

What should we tell our children about Santa? My daughter asked if Santa was real the other day, and I couldn’t bear to disappoint her.

In fact, I’ve heard from several readers who seemed to anticipate this question, and who wrote to tell me how disturbed they had been when they learned that their parents had lied to them every Christmas. I’ve also heard from readers whose parents told the truth about Santa simply because they didn’t want the inevitable unraveling of the Christmas myth to cast any doubt on the divinity of Jesus Christ. I suppose some ironies are harder to detect than others.

I don’t remember whether I ever believed in Santa, but I was never tempted to tell my daughter that he was real. Christmas must be marginally more exciting for children who are duped about Santa—but something similar could be said of many phenomena about which no one is tempted to lie. Why not insist that dragons, mermaids, fairies, and Superman actually exist? Why not present the work of Tolkien and Rowling as history?

The real truth—which everyone knows 364 days of the year—is that fiction can be both meaningful and fun. Children have fantasy lives so rich and combustible that rigging them with lies is like putting a propeller on a rocket. And is the last child in class who still believes in Santa really grateful to have his first lesson in epistemology meted out by his fellow six-year-olds? If you deceive your children about Santa, you may give them a more thrilling experience of Christmas. What you probably won’t give them, however, is the sense that you would not and could not lie to them about anything else.

We live in a culture where the corrosive effect of lying is generally overlooked, and where people remain confused about the difference between truly harmless deceptions—such as the poetic license I took at the beginning of this article—and seemingly tiny lies that damage trust. I’ve written a short book about this. Its purpose is to convey, in less than an hour, one of the most significant ethical lessons I’ve ever learned: If you want to improve yourself and the people around you, you need only stop lying.


Author: "--" Tags: "Book News, Ethics, Philosophy,"
Date: Tuesday, 12 Nov 2013 18:13

Paul Bloom is the Brooks and Suzanne Ragen Professor of Psychology at Yale University. His research explores how children and adults understand the physical and social world, with special focus on morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is a past president of the Society for Philosophy and Psychology and a co-editor of Behavioral and Brain Sciences, one of the major journals in the field. Dr. Bloom has written for scientific journals such as Nature and Science and for popular outlets such as The New York Times, The Guardian, The New Yorker, and The Atlantic. He is the author or editor of six books, including Just Babies: The Origins of Good and Evil.

Paul was kind enough to answer a few questions about his new book.

*  *  *



Harris: What are the greatest misconceptions people have about the origins of morality?

Bloom: The most common misconception is that morality is a human invention. It’s like agriculture and writing, something that humans invented at some point in history. From this perspective, babies start off as entirely self-interested beings—little psychopaths—and only gradually come to appreciate, through exposure to parents and schools and church and television, moral notions such as the wrongness of harming another person.

Now, this perspective is not entirely wrong. Certainly some morality is learned; this has to be the case because moral ideals differ across societies. Nobody is born with the belief that sexism is wrong (a moral belief that you and I share) or that blasphemy should be punished by death (a moral belief that you and I reject). Such views are the product of culture and society. They aren’t in the genes.

But the argument I make in Just Babies is that there also exist hardwired moral universals—moral principles that we all possess. And even those aspects of morality—such as the evils of sexism—that vary across cultures are ultimately grounded in these moral foundations.

A very different misconception sometimes arises, often stemming from a religious or spiritual outlook. It’s that we start off as Noble Savages, as fundamentally good and moral beings. From this perspective, society and government and culture are corrupting influences, blotting out and overriding our natural and innate kindness.

This, too, is mistaken. We do have a moral core, but it is limited—Hobbes was closer to the truth than Rousseau. Relative to an adult, your typical toddler is selfish, parochial, and bigoted. I like the way Kingsley Amis once put it: “It was no wonder that people were so horrible when they started life as children.” Morality begins with the genes, but it doesn’t end there.

Harris: How do you distinguish between the contributions of biology and those of culture?

Bloom: There is a lot you can learn about the mind from studying the fruit flies of psychological research—college undergraduates. But if you want to disentangle biology and culture, you need to look at other populations. One obvious direction is to study individuals from diverse cultures. If it turns out that some behavior or inclination shows up only in so-called WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies, it’s unlikely to be a biological adaptation. For instance, a few years ago researchers were captivated by the fact that subjects in the United States and Switzerland are highly altruistic and highly moral when playing economic games. They assumed that this reflects the workings of some sort of evolved module—only to discover that people in the rest of the world behave quite differently, and that their initial findings are better explained as a quirk of certain modern societies.

One can do comparative research—if a human capacity is shared with other apes, then its origin is best explained in terms of biology, not culture. And there’s a lot of fascinating research with apes and monkeys that’s designed to address questions about the origin of pro-social behavior.

Then there’s baby research. We can learn a lot about human nature by looking at individuals before they are exposed to school, television, religious institutions, and the like. The powerful capacities that we and other researchers find in babies are strong evidence for the contribution of biology. Now, even babies have some life history, and it’s possible that very early experience, perhaps even in the womb, plays some role in the origin of these capacities. I’m comfortable with this—my claim in Just Babies isn’t that the moral capacities of babies emerge without any interaction with the environment. That would be nuts. Rather, my claim is the standard nativist one: These moral capacities are not acquired through learning.

We should also keep in mind that failure to find some capacity in a baby does not show that it is the product of culture. For one thing, the capacity might be present in the baby’s mind but psychologists might not be clever enough to detect it. In the immortal words of Donald Rumsfeld, “Absence of evidence is not evidence of absence.” Furthermore, some psychological systems that are pretty plainly biological adaptations might emerge late in development—think about the onset of disgust at roughly the age of four, or the powerful sexual desires that emerge around the time of puberty. Developmental research is a useful tool for pulling apart biology and culture, but it’s not a magic bullet.

Harris: What are the implications of our discovering that many moral norms emerge very early in life?

Bloom: Some people think that once we know what the innate moral system is, we’ll know how to live our lives. For them it’s as if the baby’s mind contains a holy text of moral wisdom, written by Darwin instead of Yahweh, and once we can read it, all ethical problems will be solved.

This seems unlikely. Mature moral decision-making involves complex reasoning, and often the right thing to do involves overriding our gut feelings, including those that are hardwired. And some moral insights, such as the wrongness of slavery, are surely not in our genes.

But I do think that this developmental work has some interesting implications. For one thing, the argument in Just Babies is that, to a great extent, all people have the same morality. The differences that we see—however important they are to our everyday lives—are variations on a theme. This universality provides some reason for optimism. It suggests that if we look hard enough, we can find common ground with any other neurologically normal human, and that has to be good news.

Just Babies is optimistic in another way. The zeitgeist in modern psychology is pro-emotion and anti-reason. Prominent writers and intellectuals such as David Brooks, Malcolm Gladwell, and Jonathan Haidt have championed the view that, as David Hume famously put it, we are slaves of the passions. From this perspective, moral judgments and moral actions are driven mostly by gut feelings—rational thought has little to do with it.

That’s a grim view of human nature. If it were true, we should buck up and learn to live with it. But I argue in Just Babies that it’s not true. It is refuted by everyday experience, by history, and by the science of developmental psychology. Rational deliberation is part of our everyday lives, and, as many have argued—including Steven Pinker, Peter Singer, Joshua Greene, you, and me, in the final chapter of Just Babies—it is a powerful force in driving moral progress.

Harris: When you talk about moral progress, it implies that some moralities are better than others. Do you think, then, that it is legitimate to say that certain individuals or cultures have the wrong morality?

Bloom: If humans were infinitely plastic, with no universal desires, goals, or moral principles, the answer would have to be no. But it turns out that we have deep commonalities, and so, yes, we can talk meaningfully about some moralities’ being better than others.

Consider a culture in which some minority is kept as slaves—tortured, raped, abused, bought and sold, and so on—and this practice is thought of by the majority as a moral arrangement. Perhaps it’s justified by reference to divine command, or the demands of respected authorities, or long-standing tradition. I think we’re entirely justified in arguing that they are wrong, and when we do this, we’re not merely saying “We like our way better.” Rather, we can argue that it’s wrong by pointing out that it’s wrong even for them—the majority who benefit from the practice.

Obstetricians used to deliver babies without washing their hands, and many mothers and babies died as a result. They were doing it wrong—wrong by their own standards, because obstetricians wanted to deliver babies, not kill them. Similarly, given that the humans in the slave society possess certain values and intuitions and priorities, they are acting immorally by their own lights, and they would appreciate this if they were exposed to certain arguments and certain facts.

Now, this is an empirical claim, drawing on assumptions about human psychology, but it’s supported by history. Good moral ideas can spread through the world in much the same way that good scientific ideas can, and once they are established, people marvel that they could ever have thought differently. Americans are no more likely to reinstate slavery than we are to give up on hand-washing for doctors.

You’ve written extensively on these issues in The Moral Landscape and elsewhere, and since we agree on so much, I can’t resist sounding a note of gentle conflict. Your argument is that morality is about maximizing the well-being of conscious minds. This means that determining the best moral system reduces to the empirical/scientific question of what system best succeeds at this goal. From this standpoint, we can reject a slave society for precisely the same reason we can reject a dirty-handed-obstetrician society—it involves needless human pain.

My view is slightly different. You’re certainly right that maximizing well-being is something we value, and needless suffering is plainly a bad thing. But there remain a lot of hard questions—the sort that show up in Ethics 101 and never go away. Are we aspiring for the maximum total amount of individual well-being or the highest average? Are principles of fairness and equality relevant? What if the slave society has very few unhappy slaves and very many happy slaveholders, so its citizens are, in total and on average, more fulfilled than ours? Is that society more moral? If my child needs an operation to save his sight, am I a better person if I let him go blind and send the money to a charity where it will save another child’s life? These are hard questions, and they don’t go away if we have a complete understanding of the empirical facts.

The source of these difficulties, I think, is that as reflective moral beings, we sometimes have conflicting intuitions as to what counts as morally good. If we were natural-born utilitarians of the Benthamite sort, then determining the best possible moral world really would be a straightforward empirical problem. But we aren’t, and so it isn’t.

Harris: Well, it won’t surprise you to learn that I agree with everything you’ve said up until this last bit. In fact, these last points illustrate why I choose not to follow the traditional lines laid down by academic philosophers. If you declare that you are a “utilitarian,” everyone who has taken Ethics 101, as you say, imagines that he understands the limits of your view. Unfortunately, those limits have been introduced by philosophers themselves and are enshrined in the way that we have been encouraged to talk about moral philosophy.

For instance, you suggest that a concern for well-being might be opposed to a concern for fairness and equality—but fairness and equality are immensely important precisely because they are so good at safeguarding the well-being of people who have competing interests. If someone says that fairness and equality are important for reasons that have nothing to do with the well-being of people, I have no idea what he is talking about.

Similarly, you suggest that the hard questions of ethics wouldn’t go away if we had a complete understanding of empirical facts. But we really must pause to appreciate just how unimaginably different things would be IF we had such an understanding. This kind of omniscience is probably impossible—but nothing in my account depends on its being possible in practice. All we need to establish a strong, scientific conception of moral truth in principle is to admit that there is a landscape of experiences that conscious beings like ourselves can have, both individually and collectively—and that some are better than others (in any and every sense of “better”). Must we really defend the proposition that an experience of effortless good humor, serenity, love, creativity, and awe spread over all possible minds would be better than everyone’s being flayed alive in a dungeon by unhappy devils? I don’t think so.

I agree that how we think about collective well-being presents certain difficulties (average vs. maximum, for instance)—but a strong conception of moral truth requires only that we acknowledge the extremes. It seems to me that the paradoxes that Derek Parfit has engineered here, while ingenious, need no more impede our progress toward increased well-being than the paradoxes of Zeno prevent us from getting to the coffee pot each morning. I admit that it can be difficult to say whether a society of unhappy egalitarians would be better or worse than one composed of happy slaveholders and none-too-miserable slaves. And if we tuned things just right, I would be forced to say that these societies are morally equivalent. However, one thing is not debatable (and it is all that my thesis as presented in The Moral Landscape requires): If you took either of these societies and increased the well-being of everyone, you would be making a change for the good. If, for instance, the slaveholders invented machines that could replace the drudgery of slaves, and the slaves themselves became happy machine owners—and these changes introduced no negative consequences that canceled the moral gains—this would be an improvement in moral terms. And any person who later attempted to destroy the machines and begin enslaving his neighbors would be acting immorally.
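
A minimal numerical sketch of this last point, using populations and numbers invented purely for illustration: ranking societies by total well-being and ranking them by average well-being can disagree, yet both rankings agree that raising everyone’s well-being is a change for the good, which is all the argument from the extremes requires.

```python
# Toy numbers of my own invention. Two points: (1) "maximize the total"
# and "maximize the average" can rank populations oppositely;
# (2) both agree that making everyone better off is an improvement.
small_flourishing = [8.0] * 10    # few people, each doing very well
large_modest = [2.0] * 100        # many people, each doing poorly

def total(w): return sum(w)
def average(w): return sum(w) / len(w)

# The two aggregation rules disagree about which population is better:
assert total(large_modest) > total(small_flourishing)      # 200 > 80
assert average(small_flourishing) > average(large_modest)  # 8.0 > 2.0

# But raising everyone's well-being (a Pareto improvement) counts as
# an improvement on either rule.
improved = [w + 1.0 for w in large_modest]
assert total(improved) > total(large_modest)
assert average(improved) > average(large_modest)
print("Both rules agree that improving everyone's lot is a move up.")
```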

Again, the changes in well-being that are possible for creatures like ourselves are possible whether or not anyone knows about them, and their possibility depends in some way on the laws that govern the states of conscious minds in this universe (or any other).

Whatever its roots in our biology, I think we should now view morality as a navigation problem: How can we (or any other conscious system) reduce suffering and increase happiness? There might be an uncountable number of morally equivalent peaks and valleys on the landscape—but that wouldn’t undermine the claim that basking on some peak is better than being tortured in one of the valleys. Nor would it suggest that movement up or down depends on something other than the laws of nature.

Bloom: I agree with almost all of this. Sure—needless suffering is a bad thing, and increased well-being is a good thing, and that’s why I’m comfortable saying that some societies (and some individuals) have better moralities than others. I agree as well that determining the right moral system will rest in part on knowing the facts. This is true for the extremes, and it’s also true for real-world cases. The morality of drug laws in the United States, for instance, surely has a lot to do with whether those laws cause an increase or a decrease in human suffering.

My point was that there are certain moral problems that don’t seem to be solvable by science. You accept this but think that these are like paradoxes of metaphysics—philosophical puzzles with little practical relevance.

This is where we clash, because some of these moral problems keep me up at night. Take the problem of how much I should favor my own children. I spend money to improve my sons’ well-being—buying them books, taking them on vacations, paying dentists to fix their teeth, etc.—that could instead be used to save the lives of children in poor countries. I don’t need a neuroscientist to tell me that I’m not acting to increase the total well-being of conscious individuals. Am I doing wrong? Maybe so. But would you recommend the alternative, where (to use my earlier example) I let my son go blind so that I can send the money I would have paid for the operation to Oxfam so that another child can live? This seems grotesque. So what’s the right balance? How should we weigh the bonds of family, friendship, and community?

This is a serious problem of everyday life, and it’s not going to be solved by science.

Harris: Actually, I don’t think our views differ much. This just happens to be a place where we need to distinguish between answers in practice and answers in principle. I completely agree that there are important ethical problems that we might never solve. I also agree that there are circumstances in which we tend to act selfishly to a degree that beggars any conceivable philosophical justification. We are, therefore, not as moral as we might be. Is this really a surprise? As you know, the forces that rule us here are largely situational: It is one thing for you to toss an appeal from the Red Cross in the trash on your way to the ice cream store. It would be another for you to step over the prostrate bodies of starving children. You know such children exist, of course, and yet they are out of sight and (generally) out of mind. Few people would counsel you to let your own children go blind, but I can well imagine Peter Singer’s saying that you should deprive them of every luxury as long as other children are deprived of food. To understand the consequences of doing this, we would really need to take all the consequences into account.

I briefly discuss this problem in The Moral Landscape. I suspect that some degree of bias toward one’s own offspring could be normative in that it will tend to lead to better outcomes for everyone. Communism, many have noticed, appears to run so counter to human nature as to be more or less unworkable. But the crucial point is that we could be wrong about this—and we would be wrong with reference to empirical facts that we may never fully discover. To say that these answers will not be found through science is merely to say that they won’t be established with any degree of certainty or precision. But that is not to say that such answers do not exist. It is also possible to know exactly what we should do but to not be sufficiently motivated to do it. We often find ourselves in this situation in life. For example, a person desperately wants to lose weight and knows that he would be happier if he did. He also knows how to do it—by eating less junk and exercising more. And yet he may spend his whole life not doing what he knows would be good for him. In many respects, I think our morality suffers from this kind of lassitude.

But we can achieve something approaching moral certainty for the easy cases. As you know, many academics and intellectuals deny this. You and I are surrounded by highly educated and otherwise intelligent people who believe that opposition to the burqa is merely a symptom of Western provincialism. I think we agree that this kind of moral relativism rests on some very dubious (and unacknowledged) assumptions about the nature of morality and the limits of science. Let us go out on a scientific limb together: Forcing half the population to live inside cloth bags isn’t the best way to maximize individual or collective well-being. On the surface, this is a rather modest ethical claim. When we look at the details, however, we find that it is really a patchwork of claims about psychology, sociology, economics, and probably several other scientific disciplines. In fact, the moment we admit that we know anything at all about human well-being, we find that we cannot talk about moral truth outside the context of science. Granted, the scientific details may be merely implicit, or may remain perpetually out of reach. But we are talking about the nature of human minds all the same.

Bloom: We still have more to talk about regarding the hard cases, but I agree with you that there are moral truths and that we can learn about them, at least in part, through science. Part of that program is understanding human nature, and especially our universal moral sense—and that is what my research and my new book are all about.


Author: "--" Tags: "Announcements, Publishing, Ethics, Philo..."