Last Christmas, my friends Mark and Jessica spent the morning opening presents with their daughter, Rachel, who had just turned four. After a few hours of excitement, feelings of holiday lethargy and boredom descended on the family—until Mark suddenly had a brilliant idea for how they could have a lot more fun.
Jessica was reading on the couch while Rachel played with her new dolls on the living room carpet.
“Rachel,” Mark said, “I need to tell you something very important… You can’t keep any of these toys. Mommy and I have decided to give them away to the other kids at your school.”
A look of confusion came over his daughter’s face. Mark caught Jessica’s eye. She recognized his intentions at once and was now struggling to contain her glee. She reached for their new video camera.
“You’ve had these toys long enough, don’t you think, Sweetie?”
“No, Daddy! These are my Christmas presents.”
“Not anymore. It’s time to say good-bye…”
Mark began gathering her new toys and putting them in a trash bag.
“They’re only toys, Rachel. Time to grow up!”
“Not my Polly Pockets! Not my Polly Pockets!”
The look of terror on his daughter’s face was too funny for words. Mark could barely speak. He heard Jessica struggling to stifle a laugh as she stepped around the couch with the camera so that she could capture all the action from the front. Mark knew that if he made eye contact with his wife, he would be lost.
“These Polly Pockets belong to another little girl now… She’s going to love them!”
That did the trick. His daughter couldn’t have produced a louder howl of pain had he smashed her knee with a hammer. Luckily, Jessica caught the moment close-up—her daughter’s hot tears of rage and panic nearly wet the lens.
Mark and Jessica immediately posted the footage of Rachel’s agony to YouTube, where 24 million people have now seen it. This has won them some small measure of fame, which makes them very happy.
No doubt, you will be relieved to learn that Mark, Jessica, and Rachel do not exist. In fact, I am confident that no one I know would treat their child this way. But this leaves me at a loss to explain the popularity of a morally identical stunt engineered for three years running by Jimmy Kimmel:
As you watch the above video and listen to the laughter of Kimmel’s studio audience, do your best to see the world from the perspective of these unhappy children. Admittedly, this can be difficult. Despite my feelings of horror over the whole project, a few of these kids made me laugh as well—some of them are just so adorably resilient in the face of parental injustice. However, I am convinced that anyone who takes pleasure in all this exploited cuteness is morally confused. Yes, we know that these kids will get their candy back in the end. But the kids themselves don’t know it, and the betrayal they feel is heartbreakingly genuine. This is no way to treat children.
It is true that the tears of a child must often be taken less seriously than those of an adult—because they come so freely. To judge from my daughter’s reaction in the moment, getting vaccinated against tetanus is every bit as bad as getting the disease. All parents correct for this distortion of reality—and should—so that they can raise their kids without worrying at every turn that they are heaping further torments upon the damned. Nevertheless, I am astonished at the percentage of people who find the Kimmel videos morally unproblematic. When I expressed my concern on Twitter, I received the following defenses of Kimmel and these misguided parents:
• People have to learn how to take a joke.
• It’s only candy. Kids need to realize that it doesn’t matter.
• Kids must be prepared for the real world, and pranks like this help prepare them. Now they know to take what authority figures say with a grain of salt.
• They won’t remember any of this when they are older—so there can’t be any lasting harm.
These responses are callous and crazy. A four-year-old cannot possibly learn that candy “doesn’t matter”—in fact, many adults can’t seem to learn this. But he can learn that his parents will lie to him for the purpose of making him miserable. He can also learn that they will find his suffering hilarious and that, at any moment, he might be shamed by those closest to him. True, he may not remember learning these lessons explicitly—unless he happens to watch the footage on YouTube as it surpasses a billion views—but he will, nevertheless, be a person who was raised by parents who played recklessly with his trust. It amazes me that people think the stakes in these videos are low.
My daughter is nearly five, and I can recall lying to her only once. We were looking for nursery rhymes on the Internet and landed on a page that showed a 16th-century woodcut of a person being decapitated. As I was hurriedly scrolling elsewhere, she demanded to know what we had just seen. I said something silly like “That was an old and very impractical form of surgery.” This left her suitably perplexed, and she remains unaware of man’s inhumanity to man to this day. However, I doubt that even this lie was necessary. I just wasn’t thinking very fast on my feet.
As parents, we must maintain our children’s trust—and the easiest way to lose it is by lying to them. Of course, we should communicate the truth in ways they can handle—and this often demands that we suppress details that would be confusing or needlessly disturbing. An important difference between children and (normal) adults is that children are not fully capable of conceiving of (much less looking out for) their real interests. Consequently, it might be necessary in some situations to pacify or motivate them with a lie. In my experience, however, such circumstances almost never arise.
Many people imagine that it is necessary to lie to children to make them feel good about themselves. But this makes little moral or intellectual sense. Especially with young children, the purpose of praise is to encourage them to try new things and enjoy themselves in the process. It isn’t a matter of evaluating their performance by reference to some external standard. The truth communicated by saying “That’s amazing” or “I love it” in response to a child’s drawing is never difficult to find or feel. Of course, things change when one is talking to an adult who wants to know how his work compares with the work of others. Here, we do our friends no favors by lying to them.
Strangely, the most common question I’ve received from readers on the topic of deception has been some version of the following:
What should we tell our children about Santa? My daughter asked if Santa was real the other day, and I couldn’t bear to disappoint her.
In fact, I’ve heard from several readers who seemed to anticipate this question, and who wrote to tell me how disturbed they had been when they learned that their parents had lied to them every Christmas. I’ve also heard from readers whose parents told the truth about Santa simply because they didn’t want the inevitable unraveling of the Christmas myth to cast any doubt on the divinity of Jesus Christ. I suppose some ironies are harder to detect than others.
I don’t remember whether I ever believed in Santa, but I was never tempted to tell my daughter that he was real. Christmas must be marginally more exciting for children who are duped about Santa—but something similar could be said of many phenomena about which no one is tempted to lie. Why not insist that dragons, mermaids, fairies, and Superman actually exist? Why not present the work of Tolkien and Rowling as history?
The real truth—which everyone knows 364 days of the year—is that fiction can be both meaningful and fun. Children have fantasy lives so rich and combustible that rigging them with lies is like putting a propeller on a rocket. And is the last child in class who still believes in Santa really grateful to have his first lesson in epistemology meted out by his fellow six-year-olds? If you deceive your children about Santa, you may give them a more thrilling experience of Christmas. What you probably won’t give them, however, is the sense that you would not and could not lie to them about anything else.
We live in a culture where the corrosive effect of lying is generally overlooked, and where people remain confused about the difference between truly harmless deceptions—such as the poetic license I took at the beginning of this article—and seemingly tiny lies that damage trust. I’ve written a short book about this. Its purpose is to convey, in less than an hour, one of the most significant ethical lessons I’ve ever learned: If you want to improve yourself and the people around you, you need only stop lying.
Paul Bloom is the Brooks and Suzanne Ragen Professor of Psychology at Yale University. His research explores how children and adults understand the physical and social world, with special focus on morality, religion, fiction, and art. He has won numerous awards for his research and teaching. He is a past president of the Society for Philosophy and Psychology and a co-editor of Behavioral and Brain Sciences, one of the major journals in the field. Dr. Bloom has written for scientific journals such as Nature and Science and for popular outlets such as The New York Times, The Guardian, The New Yorker, and The Atlantic. He is the author or editor of six books, including Just Babies: The Origins of Good and Evil.
Paul was kind enough to answer a few questions about his new book.
Harris: What are the greatest misconceptions people have about the origins of morality?
Bloom: The most common misconception is that morality is a human invention. It’s like agriculture and writing, something that humans invented at some point in history. From this perspective, babies start off as entirely self-interested beings—little psychopaths—and only gradually come to appreciate, through exposure to parents and schools and church and television, moral notions such as the wrongness of harming another person.
Now, this perspective is not entirely wrong. Certainly some morality is learned; this has to be the case because moral ideals differ across societies. Nobody is born with the belief that sexism is wrong (a moral belief that you and I share) or that blasphemy should be punished by death (a moral belief that you and I reject). Such views are the product of culture and society. They aren’t in the genes.
But the argument I make in Just Babies is that there also exist hardwired moral universals—moral principles that we all possess. And even those aspects of morality—such as the evils of sexism—that vary across cultures are ultimately grounded in these moral foundations.
A very different misconception sometimes arises, often stemming from a religious or spiritual outlook. It’s that we start off as Noble Savages, as fundamentally good and moral beings. From this perspective, society and government and culture are corrupting influences, blotting out and overriding our natural and innate kindness.
This, too, is mistaken. We do have a moral core, but it is limited—Hobbes was closer to the truth than Rousseau. Relative to an adult, your typical toddler is selfish, parochial, and bigoted. I like the way Kingsley Amis once put it: “It was no wonder that people were so horrible when they started life as children.” Morality begins with the genes, but it doesn’t end there.
Harris: How do you distinguish between the contributions of biology and those of culture?
Bloom: There is a lot you can learn about the mind from studying the fruit flies of psychological research—college undergraduates. But if you want to disentangle biology and culture, you need to look at other populations. One obvious direction is to study individuals from diverse cultures. If it turns out that some behavior or inclination shows up only in so-called WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies, it’s unlikely to be a biological adaptation. For instance, a few years ago researchers were captivated by the fact that subjects in the United States and Switzerland are highly altruistic and highly moral when playing economic games. They assumed that this reflects the workings of some sort of evolved module—only to discover that people in the rest of the world behave quite differently, and that their initial findings are better explained as a quirk of certain modern societies.
One can do comparative research—if a human capacity is shared with other apes, then its origin is best explained in terms of biology, not culture. And there’s a lot of fascinating research with apes and monkeys that’s designed to address questions about the origin of pro-social behavior.
Then there’s baby research. We can learn a lot about human nature by looking at individuals before they are exposed to school, television, religious institutions, and the like. The powerful capacities that we and other researchers find in babies are strong evidence for the contribution of biology. Now, even babies have some life history, and it’s possible that very early experience, perhaps even in the womb, plays some role in the origin of these capacities. I’m comfortable with this—my claim in Just Babies isn’t that the moral capacities of babies emerge without any interaction with the environment. That would be nuts. Rather, my claim is the standard nativist one: These moral capacities are not acquired through learning.
We should also keep in mind that failure to find some capacity in a baby does not show that it is the product of culture. For one thing, the capacity might be present in the baby’s mind but psychologists might not be clever enough to detect it. In the immortal words of Donald Rumsfeld, “Absence of evidence is not evidence of absence.” Furthermore, some psychological systems that are pretty plainly biological adaptations might emerge late in development—think about the onset of disgust at roughly the age of four, or the powerful sexual desires that emerge around the time of puberty. Developmental research is a useful tool for pulling apart biology and culture, but it’s not a magic bullet.
Harris: What are the implications of our discovering that many moral norms emerge very early in life?
Bloom: Some people think that once we know what the innate moral system is, we’ll know how to live our lives. For them it’s as if the baby’s mind contains a holy text of moral wisdom, written by Darwin instead of Yahweh, and once we can read it, all ethical problems will be solved.
This seems unlikely. Mature moral decision-making involves complex reasoning, and often the right thing to do involves overriding our gut feelings, including those that are hardwired. And some moral insights, such as the wrongness of slavery, are surely not in our genes.
But I do think that this developmental work has some interesting implications. For one thing, the argument in Just Babies is that, to a great extent, all people have the same morality. The differences that we see—however important they are to our everyday lives—are variations on a theme. This universality provides some reason for optimism. It suggests that if we look hard enough, we can find common ground with any other neurologically normal human, and that has to be good news.
Just Babies is optimistic in another way. The zeitgeist in modern psychology is pro-emotion and anti-reason. Prominent writers and intellectuals such as David Brooks, Malcolm Gladwell, and Jonathan Haidt have championed the view that, as David Hume famously put it, we are slaves of the passions. From this perspective, moral judgments and moral actions are driven mostly by gut feelings—rational thought has little to do with it.
That’s a grim view of human nature. If it were true, we should buck up and learn to live with it. But I argue in Just Babies that it’s not true. It is refuted by everyday experience, by history, and by the science of developmental psychology. Rational deliberation is part of our everyday lives, and, as many have argued—including Steven Pinker, Peter Singer, Joshua Greene, you, and me, in the final chapter of Just Babies—it is a powerful force in driving moral progress.
Harris: When you talk about moral progress, it implies that some moralities are better than others. Do you think, then, that it is legitimate to say that certain individuals or cultures have the wrong morality?
Bloom: If humans were infinitely plastic, with no universal desires, goals, or moral principles, the answer would have to be no. But it turns out that we have deep commonalities, and so, yes, we can talk meaningfully about some moralities’ being better than others.
Consider a culture in which some minority is kept as slaves—tortured, raped, abused, bought and sold, and so on—and this practice is thought of by the majority as a moral arrangement. Perhaps it’s justified by reference to divine command, or the demands of respected authorities, or long-standing tradition. I think we’re entirely justified in arguing that they are wrong, and when we do this, we’re not merely saying “We like our way better.” Rather, we can argue that it’s wrong by pointing out that it’s wrong even for them—the majority who benefit from the practice.
Obstetricians used to deliver babies without washing their hands, and many mothers and babies died as a result. They were doing it wrong—wrong by their own standards, because obstetricians wanted to deliver babies, not kill them. Similarly, given that the humans in the slave society possess certain values and intuitions and priorities, they are acting immorally by their own lights, and they would appreciate this if they were exposed to certain arguments and certain facts.
Now, this is an empirical claim, drawing on assumptions about human psychology, but it’s supported by history. Good moral ideas can spread through the world in much the same way that good scientific ideas can, and once they are established, people marvel that they could ever have thought differently. Americans are no more likely to reinstate slavery than they are to give up on hand-washing for doctors.
You’ve written extensively on these issues in The Moral Landscape and elsewhere, and since we agree on so much, I can’t resist sounding a note of gentle conflict. Your argument is that morality is about maximizing the well-being of conscious minds. This means that determining the best moral system reduces to the empirical/scientific question of what system best succeeds at this goal. From this standpoint, we can reject a slave society for precisely the same reason we can reject a dirty-handed-obstetrician society—it involves needless human pain.
My view is slightly different. You’re certainly right that maximizing well-being is something we value, and needless suffering is plainly a bad thing. But there remain a lot of hard questions—the sort that show up in Ethics 101 and never go away. Are we aspiring for the maximum total amount of individual well-being or the highest average? Are principles of fairness and equality relevant? What if the slave society has very few unhappy slaves and very many happy slaveholders, so its citizens are, in total and on average, more fulfilled than ours? Is that society more moral? If my child needs an operation to save his sight, am I a better person if I let him go blind and send the money to a charity where it will save another child’s life? These are hard questions, and they don’t go away if we have a complete understanding of the empirical facts.
The source of these difficulties, I think, is that as reflective moral beings, we sometimes have conflicting intuitions as to what counts as morally good. If we were natural-born utilitarians of the Benthamite sort, then determining the best possible moral world really would be a straightforward empirical problem. But we aren’t, and so it isn’t.
Harris: Well, it won’t surprise you to learn that I agree with everything you’ve said up until this last bit. In fact, these last points illustrate why I choose not to follow the traditional lines laid down by academic philosophers. If you declare that you are a “utilitarian,” everyone who has taken Ethics 101, as you say, imagines that he understands the limits of your view. Unfortunately, those limits have been introduced by philosophers themselves and are enshrined in the way that we have been encouraged to talk about moral philosophy.
For instance, you suggest that a concern for well-being might be opposed to a concern for fairness and equality—but fairness and equality are immensely important precisely because they are so good at safeguarding the well-being of people who have competing interests. If someone says that fairness and equality are important for reasons that have nothing to do with the well-being of people, I have no idea what he is talking about.
Similarly, you suggest that the hard questions of ethics wouldn’t go away if we had a complete understanding of empirical facts. But we really must pause to appreciate just how unimaginably different things would be IF we had such an understanding. This kind of omniscience is probably impossible—but nothing in my account depends on its being possible in practice. All we need to establish a strong, scientific conception of moral truth in principle is to admit that there is a landscape of experiences that conscious beings like ourselves can have, both individually and collectively—and that some are better than others (in any and every sense of “better”). Must we really defend the proposition that an experience of effortless good humor, serenity, love, creativity, and awe spread over all possible minds would be better than everyone’s being flayed alive in a dungeon by unhappy devils? I don’t think so.
I agree that how we think about collective well-being presents certain difficulties (average vs. maximum, for instance)—but a strong conception of moral truth requires only that we acknowledge the extremes. It seems to me that the paradoxes that Derek Parfit has engineered here, while ingenious, need no more impede our progress toward increased well-being than the paradoxes of Zeno prevent us from getting to the coffee pot each morning. I admit that it can be difficult to say whether a society of unhappy egalitarians would be better or worse than one composed of happy slaveholders and none-too-miserable slaves. And if we tuned things just right, I would be forced to say that these societies are morally equivalent. However, one thing is not debatable (and it is all that my thesis as presented in The Moral Landscape requires): If you took either of these societies and increased the well-being of everyone, you would be making a change for the good. If, for instance, the slaveholders invented machines that could replace the drudgery of slaves, and the slaves themselves became happy machine owners—and these changes introduced no negative consequences that canceled the moral gains—this would be an improvement in moral terms. And any person who later attempted to destroy the machines and begin enslaving his neighbors would be acting immorally.
Again, the changes in well-being that are possible for creatures like ourselves are possible whether or not anyone knows about them, and their possibility depends in some way on the laws that govern the states of conscious minds in this universe (or any other).
Whatever its roots in our biology, I think we should now view morality as a navigation problem: How can we (or any other conscious system) reduce suffering and increase happiness? There might be an uncountable number of morally equivalent peaks and valleys on the landscape—but that wouldn’t undermine the claim that basking on some peak is better than being tortured in one of the valleys. Nor would it suggest that movement up or down depends on something other than the laws of nature.
Bloom: I agree with almost all of this. Sure—needless suffering is a bad thing, and increased well-being is a good thing, and that’s why I’m comfortable saying that some societies (and some individuals) have better moralities than others. I agree as well that determining the right moral system will rest in part on knowing the facts. This is true for the extremes, and it’s also true for real-world cases. The morality of drug laws in the United States, for instance, surely has a lot to do with whether those laws cause an increase or a decrease in human suffering.
My point was that there are certain moral problems that don’t seem to be solvable by science. You accept this but think that these are like paradoxes of metaphysics—philosophical puzzles with little practical relevance.
This is where we clash, because some of these moral problems keep me up at night. Take the problem of how much I should favor my own children. I spend money to improve my sons’ well-being—buying them books, taking them on vacations, paying dentists to fix their teeth, etc.—that could instead be used to save the lives of children in poor countries. I don’t need a neuroscientist to tell me that I’m not acting to increase the total well-being of conscious individuals. Am I doing wrong? Maybe so. But would you recommend the alternative, where (to use my earlier example) I let my son go blind so that I can send the money I would have paid for the operation to Oxfam so that another child can live? This seems grotesque. So what’s the right balance? How should we weigh the bonds of family, friendship, and community?
This is a serious problem of everyday life, and it’s not going to be solved by science.
Harris: Actually, I don’t think our views differ much. This just happens to be a place where we need to distinguish between answers in practice and answers in principle. I completely agree that there are important ethical problems that we might never solve. I also agree that there are circumstances in which we tend to act selfishly to a degree that beggars any conceivable philosophical justification. We are, therefore, not as moral as we might be. Is this really a surprise? As you know, the forces that rule us here are largely situational: It is one thing for you to toss an appeal from the Red Cross in the trash on your way to the ice cream store. It would be another for you to step over the prostrate bodies of starving children. You know such children exist, of course, and yet they are out of sight and (generally) out of mind. Few people would counsel you to let your own children go blind, but I can well imagine Peter Singer’s saying that you should deprive them of every luxury as long as other children are deprived of food. To judge whether following this advice would be an improvement, however, we would really need to take all of its consequences into account.
I briefly discuss this problem in The Moral Landscape. I suspect that some degree of bias toward one’s own offspring could be normative in that it will tend to lead to better outcomes for everyone. Communism, many have noticed, appears to run so counter to human nature as to be more or less unworkable. But the crucial point is that we could be wrong about this—and we would be wrong with reference to empirical facts that we may never fully discover. To say that these answers will not be found through science is merely to say that they won’t be established with any degree of certainty or precision. But that is not to say that such answers do not exist. It is also possible to know exactly what we should do but to not be sufficiently motivated to do it. We often find ourselves in this situation in life. For example, a person desperately wants to lose weight and knows that he would be happier if he did. He also knows how to do it—by eating less junk and exercising more. And yet he may spend his whole life not doing what he knows would be good for him. In many respects, I think our morality suffers from this kind of lassitude.
But we can achieve something approaching moral certainty for the easy cases. As you know, many academics and intellectuals deny this. You and I are surrounded by highly educated and otherwise intelligent people who believe that opposition to the burqa is merely a symptom of Western provincialism. I think we agree that this kind of moral relativism rests on some very dubious (and unacknowledged) assumptions about the nature of morality and the limits of science. Let us go out on a scientific limb together: Forcing half the population to live inside cloth bags isn’t the best way to maximize individual or collective well-being. On the surface, this is a rather modest ethical claim. When we look at the details, however, we find that it is really a patchwork of claims about psychology, sociology, economics, and probably several other scientific disciplines. In fact, the moment we admit that we know anything at all about human well-being, we find that we cannot talk about moral truth outside the context of science. Granted, the scientific details may be merely implicit, or may remain perpetually out of reach. But we are talking about the nature of human minds all the same.
Bloom: We still have more to talk about regarding the hard cases, but I agree with you that there are moral truths and that we can learn about them, at least in part, through science. Part of the program of doing so is understanding human nature, and especially our universal moral sense, and this is what my research, and my new book, is all about.
I’ve noticed a happy trend in online video: People have begun to produce animations and mashups of public lectures that add considerable value to the spoken words. If you are unfamiliar with these visual essays, watch any of the RSA Animate videos, like the one below:
People have also taken excerpts from my own lectures and combined them with stock footage. For example:
I would like to encourage this behavior. To that end, I am offering the following voice-over tracks, adapted from one of my debates.
The first is an edited excerpt from the debate itself:
The second is a reading of a similar text:
These audio files are yours to use any way you see fit. And if you produce something especially creative, I will do my best to bring attention to your work.
Watch the above video. (Then watch it again.) And then read the (unedited and uncorrected) description of this footage written by the organizers of this Muslim “peace conference”:
When Muslim organizations invite Shaykhs who speak openly about the values of Islam, the Islamophobic western media starts murdering the character of that organization and the invited speaker. The question these Islamophobic journalists need to reflect upon is; are these so called ‘‘radical’’ views that they criticize endorsed only by these few individuals being invited around the globe, or does the common Muslims believe in them. If the common Muslims believe in these values that means that more or less all Muslims are radical and that Islam is a radical religion. Since this is not the case, as Islam is a peaceful religion and so are the masses of common Muslims, these Shaykhs cannot be radical. Rather it is Islamophobia from the ignorant western media who is more concerned about making money by alienating Islam by presenting Muslims in this way. Islam Net, an organization in Norway, invited 9 speakers to Peace Conference Scandinavia 2013. These speakers would most likely be labelled as ‘‘extremists’’ if the media were to write about the conference. But how come this conference was the largest Islamic Scandinavian International event that has taken place in Norway with about 4000 people attending? Were the majority of those who attended in opposition to what the speakers were preaching? If so, how come they paid to enter? Let’s forget about that for a moment, let’s imagine that we don’t really knew what all these people thought about for example segregation of men and women, or stoning to death of those who commit adultery. The Chairman of Islam Net, Fahad Ullah Qureshi asked the audience, and the answer was clear. The attendees were common Sunni Muslims. They did not consider themselves as radicals or extremists. They believed that segregation was the right thing to do, both men and women agreed upon this. They even supported stoning or whatever punishment Islam or prophet Muhammad (peace be upon him) commanded for adultery or any other crime. 
They even believed that these practises should be implemented around the world. Now what does that tell us? Either all Muslims and Islam is radical, or the media is Islamophobic and racist in their presentation of Islam. Islam is not radical, nor is Muslims in general radical. That means that the media is the reason for the hatred against Muslims, which is spreading among the non-Muslims in western countries.
This is a remarkable document. Read it closely, and you will pass through the looking glass. The organizers of this conference believe (with good reason) that “extremist” views are not rare among Muslims, even in the West. And they consider the media’s denial of this fact to be a symptom of… Islamophobia. The serpent of obscurantism has finally begun to devour its own tail. Apparently, it is a sign of racism to imagine that only a tiny minority of Muslims could actually condone the subjugation of women and the murder of apostates. How dare you call us “extremists” when we represent so many? We are not extreme. This is Islam. They have a point. And it is time for secular liberals and (truly) moderate Muslims to stop denying it.
A young man enters a public place—a school, a shopping mall, an airport—carrying a small arsenal. He begins killing people at random. He has no demands, and no one is spared. Eventually, the police arrive, and after an excruciating delay as they marshal their forces, the young man is brought down.
This has happened many times, and it will happen again. After each of these crimes, we lose our innocence—but then innocence magically returns. In the aftermath of horror, grief, and disbelief, we seem to learn nothing of value. Indeed, many of us remain committed to denying the one thing of value that is there to be learned.
After the Boston Marathon bombing, a journalist asked me, “Why is it always angry young men who do these terrible things?” She then sought to connect the behavior of the Tsarnaev brothers with that of Jared Loughner, James Holmes, and Adam Lanza. Like many people, she believed that similar actions must have similar causes.
But there are many sources of human evil. And if we want to protect ourselves and our societies, we must understand this. To that end, we should distinguish at least four types of violent actors.
1. Those who are suffering from some form of mental illness that causes them to think and act irrationally. Given access to guns or explosives, these people may harm others for reasons that wouldn’t make a bit of sense even if they could be articulated. We may never hear Jared Loughner and James Holmes give accounts of their crimes, and we do not know what drove Adam Lanza to shoot his mother in the face and then slaughter dozens of children. But these mass murderers appear to be perfect examples of this first type. Aaron Alexis, the Navy Yard shooter, is yet another. What provoked him? He repeatedly complained that he was being bombarded with “ultra low frequency” electromagnetic waves. Apparently, he thought that killing people at random would offer some relief. It seems there is little to understand about the experiences of these men or about their beliefs, except as symptoms of underlying mental illness.
2. Prototypically evil psychopaths who feel no empathy for others and may even derive sadistic pleasure from making the innocent suffer. These people are not delusional. They are malignantly selfish, ruthless, and prone to violence. Our maximum-security prisons are full of such men. Given half a chance and half a reason, psychopaths will harm others—because that is what psychopaths do.
It is worth observing that these first two types trouble us for reasons that have nothing to do with culture, ideology, or any other social variable. Of course, it matters if a psychotic or a psychopath happens to be the head of a nation, or otherwise has power and influence. That is what is so abhorrent about North Korea: The child king is mad, or simply evil, and he’s building a nuclear arsenal while millions starve. But even here, very little is to be learned about what we—the billions of relatively normal human beings struggling to maintain open societies—are doing wrong. We didn’t create Jared Loughner (apart from making it too easy for him to get a gun), and we didn’t create Kim Jong-un (apart from making it too easy for him to get nuclear bombs). Given access to powerful weapons, such people will pose a threat no matter how rational, tolerant, or circumspect we become.
3. Normal men and women who cause immense harm while believing that they are doing the right thing—or while neglecting to notice the consequences of their actions. These people are not insane, and they’re not necessarily bad; they are just part of a system in which the negative consequences of ordinary selfishness and fear can become horribly magnified. Think of a soldier fighting in a war that may be ill conceived, or even unjust, but who has no rational alternative but to defend himself and his friends. Think of a boy growing up in the inner city who joins a gang for protection, only to perpetuate the very cycle of violence that makes gang membership a necessity. Or think of a CEO whose short-term interests motivate him to put innocent lives, the environment, or the economy itself in peril. Most of these people aren’t monsters. However, they can easily create suffering for others that only a monster would bring about by design. This is the true “banality of evil”—whatever Hannah Arendt actually meant by that phrase—but it is worth remembering that not all evil is banal.
4. Those who are moved by ideology to waste their lives in extraordinary ways while doing intolerable harm to others in the process. Some of these belief systems are merely political, or otherwise secular, in that their aim is to bring about specific changes in this world. But the worst of these doctrines are religious—whether or not they are attached to a mainstream religion—in that they are informed by ideas about otherworldly rewards and punishments, prophecies, magic, and so forth, which are especially conducive to fanaticism and self-sacrifice.
Of course, a person can inhabit more than one of the above categories at once—and thus have his antisocial behavior overdetermined. There must be someone somewhere who is simultaneously psychotic and psychopathic, part of a corrupt system, and devoted to a dangerous, transcendent cause. But many examples of each of these types exist in their pure forms.
For instance, in recent weeks, a spate of especially appalling jihadist attacks occurred—one on a shopping mall in Nairobi, where non-Muslims appear to have been systematically tortured before being murdered; one on a church in Peshawar; and one on a school playground in Baghdad, targeting children. Whenever I point out the role that religious ideology plays in atrocities of this kind—specifically the Islamic doctrines related to jihad, martyrdom, apostasy, and so forth—I am met with some version of the following: “Bad people will always do these things. Religion is nothing more than a pretext.” This is an increasingly dangerous misconception to have about the human mind.
Here is my pick for the most terrifying and depressing phenomenon on earth: A smart, capable, compassionate, and honorable person grows infected with ludicrous ideas about a holy book and a waiting paradise, and then becomes capable of murdering innocent people—even children—while in a state of religious ecstasy. Needless to say, this problem is rendered all the more terrifying and depressing because so many of us deny that it even exists.
To imagine that one is a holy warrior bound for Paradise might seem delusional, but we live in a world where perfectly sane people are led to believe such floridly crazy things in the name of religion. This is primarily a social and cultural issue, not a psychological one. There is no clear line between what members of the Taliban, al Qaeda, and al Shabab believe about Islam and the “true” Islam. In fact, these groups have as good a claim as any to being impeccable Muslims. This presents an enormous threat to civil society, which apologists for Islam and secular liberals can now be counted upon to obfuscate. A tsunami of stupidity and violence is breaking simultaneously on a hundred shores, and people like Karen Armstrong, Reza Aslan, Juan Cole, John Esposito, and Glenn Greenwald insist that it’s a beautiful day at the beach. Their determination that “moderate” Islam not be blamed for the acts of “extremists” causes them to deny that genuine (and theologically justifiable) religious beliefs can inspire psychologically normal people to commit horrific acts of violence.
For weeks after the Boston Marathon bombing, we seemed determined to remain confused about the motives of the perpetrators. Had they been “radicalized” by some nefarious person, or did they manage it themselves? Did Tamerlan, the older brother, have brain damage from boxing? Were his dreams dashed by our immigration laws? Experts on terrorism took to the airwaves and gave their analysis: These young men behaved as they did, not on account of Islam, but because they were “jerks” and “losers.”
Or was it just politics, with religion as a pretext? The New York Times reported that the Tsarnaev brothers were “motivated to strike against the United States partly because of its military actions in Iraq and Afghanistan.” Many people seized on this as proof that U.S. foreign policy was to blame. And yet the only plausible way that Chechens coming of age in America could want to murder innocent people in protest over the wars in Iraq and Afghanistan would be for them to accept the Islamic doctrine of jihad. Islam is under attack and it must be defended; infidels have invaded Muslim lands—these grievances are not political. They are religious.
The same obscurantism arose in response to the Woolwich murder—when two jihadists butchered a man on a London sidewalk while shouting “Allahu akbar!” Their actions were repeatedly described as “political”—and the role of Islam in their thinking was reflexively discounted. Why political? Because one of the murderers spoke of British troops in Afghanistan and Iraq invading “our lands” and abusing “our women.” Few seemed to wonder how a Londoner of Nigerian descent could feel possessive about Afghan and Iraqi lands and women. There is only one path through the wilderness of bad ideas that reaches such “political” concerns: Islam.
Take a moment to consider the actions of the Taliban gunman who shot Malala Yousafzai in the head. How is it that this man came to board a school bus with the intention of murdering a 15-year-old girl? Absent ideology, this could have only been the work of a psychotic or a psychopath. Given the requisite beliefs, however, an entire culture will support such evil. Malala is the best thing to come out of the Muslim world in a thousand years. She is an extraordinarily brave and eloquent girl who is doing what millions of Muslim men and women are too terrified to do—stand up to the misogyny of traditional Islam. No doubt the assassin who tried to kill her believed that he was doing God’s work. He was probably a perfectly normal man—perhaps even a father himself—and that is what is so disturbing. In response to Malala’s nomination for the Nobel Peace Prize, a Taliban spokesman had this to say:
Malala Yousafzai targeted and criticized Islam. She was against Islam and we tried to kill her, and if we get a chance again we will definitely try to kill her, and we will feel proud killing her.
The fact that otherwise normal people can be infected by destructive religious beliefs is crucial to understand—because beliefs spread. Until moderate Muslims and secular liberals stop misplacing the blame for this evil, they will remain part of the problem. Yes, our drone strikes in Pakistan kill innocent people—and this undoubtedly creates new enemies for the West. But we wouldn’t need to drop a single bomb on Pakistan, or anywhere else, if a death cult of devout Muslims weren’t making life miserable for millions of innocent people and posing an unacceptable threat of violence to open societies.
Malala did not win a Nobel prize this week, and it is probably good for her that she didn’t. She absolutely deserved it—far more than several recent recipients have—but this recognition would have made her security concerns even more excruciating than they probably are already. Her nomination is said to have noticeably increased anti-Western sentiment in Pakistan—a fact that deserves some honest reflection on the part of Islam’s apologists. If for nothing else, we can be grateful to the Taliban for reminding us of what so many civilized people seem eager to forget: This is both a war of ideas and a very bloody war—and we must win it.
I wrote an article on meditation two years ago, and since then many readers have asked for further guidance on how to practice. As I said in my original post, I generally recommend a method called vipassana, in which one cultivates a form of attention widely known as “mindfulness.” There is nothing spooky or irrational about mindfulness, and the literature on its psychological benefits is now substantial. Mindfulness is simply a state of clear, nonjudgmental, and nondiscursive attention to the contents of consciousness, whether pleasant or unpleasant. Developing this quality of mind has been shown to reduce pain, anxiety, and depression; improve cognitive function; and even produce changes in gray matter density in regions of the brain related to learning and memory, emotional regulation, and self-awareness. I will cover the relevant philosophy and science in my next book, Waking Up: A Guide to Spirituality Without Religion, but in the meantime, I have produced two guided meditations (9 minutes and 26 minutes) for those of you who would like to get started with the practice. Please feel free to share them.
Sean B. Carroll is the author of Remarkable Creatures, a finalist for the National Book Award; The Making of the Fittest, winner of the Phi Beta Kappa Science Book Award; and Endless Forms Most Beautiful. Carroll also writes a monthly feature, “Remarkable Creatures,” for the New York Times’ Science Times. An internationally known scientist and leading educator, Dr. Carroll currently heads the Department of Science Education of the Howard Hughes Medical Institute and is Professor of Molecular Biology and Genetics at the University of Wisconsin. His new book is Brave Genius: A Scientist, a Philosopher, and Their Daring Adventures from the French Resistance to the Nobel Prize.
From the Publisher:
“I have known only one true genius: Jacques Monod,” claimed Albert Camus. Known to biologists for his Nobel Prize–winning, pioneering genetic research, Monod is credited with some of the most creative and influential ideas in modern biology. But while a few texts mention in passing that Monod was “in the Resistance” and “friends with Albert Camus,” none have examined the impact of the chaos of war on his work, nor the camaraderie between these two extraordinary men—until now. In Brave Genius: A Scientist, a Philosopher, and Their Daring Adventures from the French Resistance to the Nobel Prize, leading evolutionary biologist and National Book Award finalist Sean B. Carroll draws on a wealth of previously unknown and unpublished material to tell the dramatic and inspiring story of the lives of two men who triumphed over overwhelming adversity to pursue the meaning of existence on every level from the molecular to the philosophical.
Sean was kind enough to answer a few questions about his new book.
How did you discover the story of Camus and Monod?
A couple of previous authors mentioned briefly that they were friends, but offered nothing more. It seemed obvious to me from Monod’s writing that Camus had influenced him a great deal, so I wanted to know how well they knew each other and for how long. The first sign that I might make some headway was a letter from Camus to Monod that a family member shared with me. The letter was very warm, and encouraged me to keep looking. That led to another letter, and then to some anecdotes from people who had been together with Camus and Monod. I was also able to unearth some interviews in which Monod discussed Camus. My favorite jackpot came when I asked Monod’s son Olivier whether his father’s Camus books were inscribed. He took a look. They were all personalized with revealing phrases, and out of one dropped another letter… in which Camus was asking for medical advice concerning his mistress’s father!
Eventually, I was able to determine that their friendship spanned more than a decade. More importantly, I was able to place their relationship in the context of other pivotal events in each man’s life. I was even able to determine from some unpublished notes that Camus lifted some of his arguments in his anti-Soviet essay The Rebel directly from Monod. That was very satisfying detective work.
Did the two men view politics in the same way?
Definitely. After the experience of the Occupation of France (during which they both had significant roles in the Resistance), they each felt an obligation to speak out against injustice and oppression. For example, they both publicly condemned the Soviet regime and its French supporters. They both strongly supported the Hungarian revolution – Camus in print, Monod by smuggling scientists out of the country. They also both opposed the death penalty, and that movement eventually succeeded in having it abolished throughout the EU.
Importantly, Camus’s friendship with Monod was blossoming at the same time that Camus’s friendship with Sartre was imploding, and for the same reason – Camus’s unequivocal condemnation of the Soviet Union as a totalitarian state ruled by a delusional dictator.
What was Camus’s position on the French presence in Algeria?
Camus was neither on the side of the Algerian militants seeking independence from France, nor on the side of the French government that fought the militants and often imposed very harsh punishments, including the death penalty. Camus thought that there should be some middle ground of greater autonomy for Algeria, but also that the country, so long a part of France and settled by many French, should maintain a connection to France. Criticized by both sides, Camus decided to fall silent on the matter publicly and to work behind the scenes.
What did Monod and Camus have in common intellectually?
They shared the drive to push logic as far as one could. Camus did so in The Myth of Sisyphus, Monod in his scientific life and in his book Chance and Necessity. Both men were convinced that this life is the only one we get. Each was keen to pursue the implications of that conclusion.
Monod was influenced a great deal by Camus. The epigraph of Chance and Necessity was the closing paragraphs of The Myth of Sisyphus. Monod thought that Camus had provided the key to living with the knowledge of our finite lifetimes: “The struggle itself toward the heights is enough to fill a man’s heart.”
How did Camus and Monod view the relationship, or conflict, between religion and science?
Camus looked at religion from a philosophical perspective rather than a scientific one. He considered any appeal to religious faith “philosophical suicide,” an abandonment of reason. He thought that the principal issue was how to live this life fully, the only one we are sure to have. Camus’s recipe for living life to the fullest was to do nothing in the hope of an afterlife, but to rely on courage and reason.
In his book Chance and Necessity, Monod expounded at length on the conflict between science and religion. He saw religion as a collection of primitive myths that had been blown to shreds by science. After his experiences in the war and with Soviet ideology, Monod declared that his life’s goal was “a crusade against antiscientific, religious metaphysics, whether it be from Church or State.”
At every turn, Monod emphasized the role of chance in human existence, an idea that is antithetical to essentially every religious doctrine that places humans as some inevitable intention of a Creator. “Man was the product of an incalculable number of fortuitous events,” Monod argued, “the result of a huge Monte-Carlo game, where our number eventually did come out, when it might well not have appeared.”
Bolstered by the new evidence from molecular biology of a universal genetic code and the random nature of the mutation process, Monod delivered the bad news for every religion that asserts some form of Design:
Chance alone is at the source of every innovation, of all creation in the biosphere…this central concept of modern biology is no longer one among other possible hypotheses…it is the only one that squares with observed and tested fact. And nothing warrants the… hope that on this score our position is likely ever to be revised. There is no scientific concept, in any of the sciences, more destructive of anthropocentrism than this one.
I am very happy to announce that my wife and editor, Annaka Harris, has published her first book. The purpose of I Wonder is to teach very young children (and their parents) to cherish the feeling of “not knowing” as the basis of all discovery. In a world riven by false certainties, I can think of no more important lesson to impart to the next generation.
Advance Praise for I Wonder:
“I Wonder offers crucial lessons in emotional intelligence, starting with being secure in the face of uncertainty. Annaka Harris has woven a beautiful tapestry of art, storytelling, and profound wisdom. Any young child—and parent—will benefit from sharing this wondrous book together.”
—Daniel Goleman, author of the #1 bestseller Emotional Intelligence
“What an enchanting children’s book – beautiful to look at, charming to read, and with a theme that wonderers of all ages should appreciate.”
—Steven Pinker, Professor of Psychology, Harvard University, and author of How the Mind Works
“I Wonder captures the beauty of life and the mystery of our world, sweeping child and adult into a powerful journey of discovery. This is a book for children of all ages that will nurture a lifelong love of learning. Magnificent!”
—Daniel Siegel, author of Mindsight and The Whole-Brain Child
“I Wonder is a delightful book that explores and encourages the playful beginnings of wonder and a joyful appreciation of natural mystery.”
—Eric Litwin, author of the #1 New York Times bestselling children’s book Pete the Cat: I Love My White Shoes
“This marvelous book will successfully sustain and stimulate your child’s natural sense of curiosity and wonder about this mysterious world we live in.”
—V.S. Ramachandran, author of The Tell-Tale Brain: A Neuroscientist’s Quest for What Makes Us Human
“I Wonder is a reminder to parents and their children that mysteries are a gift and that curiosity and wonderment are the treasures of a childlike mind.”
—Janna Levin, Professor of Physics and Astronomy, Columbia University, and author of How the Universe Got Its Spots
“I Wonder teaches the very young that we should marvel at the mysteries of the universe and not be afraid of them. Our world would be a lot better if every human understood this. Start with your own children and this book.”
—Jeff Hawkins, founder of Palm, Handspring, and the Redwood Neuroscience Institute, and author of On Intelligence
It has been nearly three years since The Moral Landscape was first published in English, and in that time it has been attacked by readers and nonreaders alike. Many seem to have judged from the resulting cacophony that the book’s central thesis was easily refuted. However, I have yet to encounter a substantial criticism that I feel was not adequately answered in the book itself (and in subsequent talks).
So I would like to issue a public challenge. Anyone who believes that my case for a scientific understanding of morality is mistaken is invited to prove it in under 1,000 words. (You must address the central argument of the book—not peripheral issues.) The best response will be published on this website, and its author will receive $2,000. If any essay actually persuades me, however, its author will receive $20,000,* and I will publicly recant my view.
Submissions will be accepted here the week of February 2-9, 2014.
*Note 9/1/13: The original prize was $1,000 for the winning essay and $10,000 for changing my view, but a generous reader has made a matching pledge.
1. You have said that these essays must attack the “central argument” of your book. What do you consider that to be?
Here it is: Morality and values depend on the existence of conscious minds—and specifically on the fact that such minds can experience various forms of well-being and suffering in this universe. Conscious minds and their states are natural phenomena, fully constrained by the laws of the universe (whatever these turn out to be in the end). Therefore, questions of morality and values must have right and wrong answers that fall within the purview of science (in principle, if not in practice). Consequently, some people and cultures will be right (to a greater or lesser degree), and some will be wrong, with respect to what they deem important in life.
You might want to read what I’ve already written in response to a few critics. (A version of that article became the Afterword to the paperback edition of The Moral Landscape.) I also recommend that you watch the talk I linked to above.
2. Can you give some guidance as to what you would consider a proper demolition of your thesis?
If you show (1) that my “worst possible misery for everyone” argument fails, or (2) that other branches of science are self-justifying in a way that a science of morality could never be, or (3) that my analogy to a landscape of multiple peaks and valleys is fatally flawed, or (4) that the fact/value distinction holds in a way that I haven’t yet understood—you stand a very good chance of torpedoing my argument and changing my mind.
3. What sort of criticism is likely to be ineffective?
You will not win this prize if you attack views I don’t actually hold—which you will probably do if you fail to notice the distinction I make between answers in practice and answers in principle, or if you narrowly define science to mean finding the former while wearing a white lab coat, or if you imagine me to be saying that scientists are more moral than farmers and bricklayers, or if, like the philosopher Patricia Churchland, you manage to do all those things with an air of scornful pomposity appropriate to a Monty Python routine.
4. How can a person be expected to refute a book in 1,000 words or less?
If my core arguments are as misguided as many people apparently believe, it should be easy. However, I have imposed this word limit merely to make the job of vetting the entries manageable. Assuming that the winning essay is a good one, it will most likely serve as an opening statement in a longer exchange. I will give the winning author every reasonable opportunity to persuade me and claim the larger prize.
5. Perhaps I’m being too cynical, but I don’t think anyone will win any money here.
Well, assuming that I receive a single publishable essay, someone is bound to win $2,000. So, yes, you are being too cynical.
6. What do you hope will come from this contest?
I hope to receive at least one essay that presents a very serious challenge to my view. And then I hope to answer that challenge successfully, in a responding essay, or in a written exchange with the author. The second-best case (from my perspective) would be to confront a criticism that I can’t answer, but which I recognize as fatal to my thesis. I would then concede defeat and pay the author the tenfold prize. More generally, I hope to inspire readers—especially students—to debate these ideas.
7. Shouldn’t you provide “official rules” for this contest so that some lunatic who writes an essay channeling Aleister Crowley can’t sue you for not recognizing his brilliance?
Good point. Here are the official rules.
8. Why won’t you accept any submissions before February 2, 2014?
A few reasons: I want to encourage people to do quality work; I don’t want to receive multiple drafts; and I won’t have time to read any submissions until then.
9. With you as the judge, how can we trust that the best attack on your thesis will see the light of day?
Having fielded several accusations that this contest will be rigged—if not by design, then by my own ignorance and bias—I reached out to the philosopher Russell Blackford for help. Russell is among the most energetic critics of The Moral Landscape, and I am very happy to say that he has agreed to judge the submissions, introduce the winning essay, and evaluate my response.
Of course, only I can say whether I find the winning essay persuasive enough to trigger a change in my position (and the larger prize). But if I’m not persuaded, I’ll have to explain why, and Russell will be there to see that I do so without dodging any important points.
10. Why are you issuing this challenge? Science isn’t advanced by contests. Isn’t there something tawdry about offering money for someone to critique your work? This looks like nothing more than a publicity stunt designed to sell books. If you were serious about engaging with your philosophical critics, you would submit your work to an academic journal of philosophy.
I disagree on all counts. Contests are fun and motivating. And, the truth is, I don’t expect to sell many books by issuing this challenge (certainly not enough to cover the grand prize, should the winning essay change my view). I have also given essayists all the resources they need (a lecture and an article) to attack my position. This isn’t about selling books; it’s about engaging with readers—especially those who remain critical of my position on moral truth.
As for the claim that philosophical debates are best pursued in academic journals, I think you are mistaken. More people will read the winning essay on my blog than are likely to read any paper published in an academic journal next year—or any year thereafter. This is not a boast about how much traffic my blog gets, merely a statement about how few people read academic journals. Ideas matter—and philosophy is the art of thinking about them rigorously. In my view, that should be done in as public a forum as possible.