You might think that it would be a huge moral triumph to create a society of millions of actually conscious, happy beings inside one's computer, who think they are living, peacefully and comfortably, in the base level of reality -- Eden, but better! Divinity done right!
On the other hand, there might be something creepy and problematic about playing God in that way. Arguably, such creatures should be given self-knowledge, autonomy, and control over their own world -- but then we might end up, again, with evil, or even with an entity both intellectually superior to us and hostile.
[For Scott's and my first go-round on these issues, see here.]
As I write near the end of that paper:
The tomato is stable. My visual experience as I look at the tomato shifts with each saccade, each blink, each observation of a blemish, each alteration of attention, with the adaptation of my eyes to lighting and color. My thoughts, my images, my itches, my pains – all bound away as I think about them, or remain only as interrupted, theatrical versions of themselves. Nor can I hold them still even as artificial specimens – as I reflect on one aspect of the experience, it alters and grows, or it crumbles. The unattended aspects undergo their own changes too. If outward things were so evasive, they’d also mystify and mislead.

Last Saturday, I defended this view for three hours before commentator Carlotta Pavese and a number of other New York philosophers (including Ned Block, Paul Boghossian, David Chalmers, Paul Horwich, Chris Peacocke, Jim Pryor).
One question -- raised first, I think, by Paul B. then later by Jim -- was this: Don't I know that I'm having a visual experience as of seeing a hat at least as well as I know that there is in fact a real hat in front of me? I could be wrong about the hat without being wrong about the visual experience as of seeing a hat, but to be wrong about having a visual experience as of seeing a hat, well, maybe it's not impossible but at least it's a weird, unusual case.
I was a bit rustier in answering this question than I would have been in 2009 -- partly, I suspect, because I never articulated in writing my standard response to that concern. So let me do so now.
First, we need to know what kind of mental state this is about which I supposedly have excellent knowledge. Here's one possibility: To have "a visual experience as of seeing a hat" is to have a visual experience of the type that is normally caused by seeing hats. In other words, when I judge that I'm having this experience, I'm making a causal generalization about the normal origins of experiences of the present type. But it seems doubtful that I know better what types of visual experiences normally arise in the course of seeing hats than I know that there is a hat in front of me. In any case, such causal generalizations are not the sort of thing defenders of introspection usually have in mind.
Here's another interpretative possibility: In judging that I am having a visual experience as of seeing a hat, I am reporting an inclination to reach a certain judgment. I am reporting an inclination to judge that there is a hat in front of me, and I am reporting that that inclination is somehow caused by or grounded in my current visual experience. On this reading of the claim, what I am accurate about is that I have a certain attitude -- an inclination to judge. But attitudes are not conscious experiences. Inclinations to judge are one thing; visual experiences another. I might be very accurate in my judgment that I am inclined to reach a certain judgment about the world (and on such-and-such grounds), but that's not knowledge of my stream of sensory experience.
(In a couple of other essays, I discuss self-knowledge of attitudes. I argue that our self-knowledge of our judgments is pretty good when the matter is of little importance to our self-conception and when the tendency to verbally espouse the content of the judgment is central to the dispositional syndrome constitutive of reaching that judgment. Excellent knowledge of such partially self-fulfilling attitudes is quite a different matter from excellent knowledge of the stream of experience.)
So how about this interpretative possibility? To say I know that I am having a visual experience as of seeing a hat is to say that I am having a visual experience with such-and-such specific phenomenal features, e.g., this-shade-here, this-shape-here, this-piece-of-representational-content-there, and maybe this-holistic-character. If we're careful to read such judgments purely as judgments about features of my current stream of visual experience, I see no reason to think we would be highly trustworthy in them. Such structural features of the stream of experience are exactly the kinds of things about which I've argued we are apt to err: what it's like to see a tilted coin at an oblique angle, how fast color and shape experience get hazy toward the periphery, how stable or shifty the phenomenology of shape and color is, how richly penetrated visual experience is with cognitive content. These are topics of confusion and dispute in philosophy and consciousness studies, not matters we introspect with near infallibility.
Part of the issue here, I think, is that certain mental states have both a phenomenal face and a functional face. When I judge that I see something or that I'm hungry or that I want something, I am typically reaching a judgment that is in part about my stream of conscious experience and in part about my physiology, dispositions, and causal position in the world. If we think carefully about even medium-sized features of the phenomenological face of such hybrid mental states -- about what, exactly, it's like to experience hunger (how far does it spread in subjective bodily space, how much is it like a twisting or pressure or pain or...?) or about what, exactly, it's like to see a hat (how stable is that experience, how rich with detail, how do I experience the hat's non-canonical perspective...?), we quickly reach the limits of introspective reliability. My judgments about even medium-sized features of my visual experience are dubious. But I can easily answer a whole range of questions about comparably medium-sized features of the hat itself (its braiding, where the stitches are, its size and stability and solidity).
Update, November 25 [revised 5:24 pm]:
Paul Boghossian writes:
I haven't had a chance to think carefully about what you say, but I wanted to clarify the point I was making, which wasn't quite what you say on the blog, that it would be a weird, unusual case in which one misdescribes one's own perceptual states.

I do feel some sympathy for the thought that you get something right in such a case -- but what exactly you get right, and how dependably... well, that's the tricky issue!
I was imagining that one was given the task of carefully describing the surface of a table and giving a very attentive description full of detail of the whorls here and the color there. One then discovers that all along one has just been a brain in a vat being fed experiences. At that point, it would be very natural to conclude that one had been merely describing the visual images that one had enjoyed as opposed to any table. Since one can so easily retreat from saying that one had been describing a table to saying that one had been describing one's mental image of a table, it's hard to see how one could be much better at the former than at the latter.
Roger White then made the same point without using the brain in a vat scenario.
Neither Bostrom nor Chalmers is inclined to draw skeptical conclusions from this possibility. If we are living in a giant sim, they suggest, that sim is simply our reality: All the people we know still exist (they're sims just like us) and the objects we interact with still exist (fundamentally constructed from computational resources, but still predictable, manipulable, interactive with other such objects, and experienced by us in all their sensory glory). However, it seems quite possible to me that if we are living in a sim, it might well be a small sim -- one run by a child, say, for entertainment. We might live for three hours' time on a game clock, existing mainly as citizens who will give entertaining reactions when, to their surprise, Godzilla tromps through. Or it might be just me and my computer and my room, in an hour-long sim run by a scientist interested in human cognition about philosophical problems.
Bostrom has responded that to really evaluate the case we need a better sense of what are more likely vs. less likely simulation scenarios. One large-sim-friendly thought is this: Maybe the most efficient way to create simulated people is to evolve up a large scale society over a long period of (sim-clock) time. Another is this: Maybe we should expect a technologically advanced society capable of running sims to have enforceable ethical standards against running small sims that contain actually conscious people.
However, I don't see compelling reason to accept such (relatively) comfortable thoughts. Consider the possibility I will call the Many-Branching Sim.
Suppose it turns out that the best way to create actually conscious simulated people is to run a whole simulated universe forward billions of years (sim-years on the simulation clock) from a Big Bang, or millions of years on an Earth plus stars, or thousands of years from the formation of human agriculture -- a large-sim scenario. And suppose that some group of researchers actually does this. Consider, now, a second group of researchers who also want to host a society of simulated people. It seems they have a choice: Either they could run a new sim from the ground up, starting at the beginning and clocking forward, or they could take a snapshot of one stage of the first group's sim and make a copy. Which would be more efficient? It's not clear: It depends on how easy it is to take and store a snapshot and implement it on another device. But on the face of it, I don't see why we ought to suppose that copying would take more time or more computational resources than evolving a sim up from the ground.
Consider the 21st century game Sim City. If you want a bustling metropolis, you can either grow one from scratch or you can use one of the many copies created by the programmers or users. Or you could grow one from scratch and then save stages of it on your computer, shutting the thing down when things don't go the way you like and starting again from a save point; or you could make copied variants of the same city that grow in different directions.
The Many-Branching Sim scenario is the possibility that there is a root sim that is large and stable, starting from some point in the deep past, and then this root sim was copied into one or more branch sims that start from a save point. If there are many branch sims, it might be that I am in one of them, rather than in a root sim or a non-branching sim. Maybe one company made the root sim for Earth, took a snapshot in November 2013 on the sim clock, then sold thousands or millions of copies to researchers and computer gamers who now run short-term branch sims for whatever purposes they might have. In such a scenario, the future of the branch sim in which I am living might be rather short -- a few minutes or hours or years. The past might be conceptualized either as short or as long, depending on whether the past in the root sim counts as "this world's" past.
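The self-location worry here can be put numerically. As a toy sketch (the counts are my own illustration, not drawn from the post), suppose one root sim is copied into some number of branch sims, each containing a copy of me indistinguishable from the root copy; on a simple indifference principle over copies, the chance of being the root copy shrinks fast as the branches multiply:

```python
# Toy self-location calculation for the Many-Branching Sim scenario.
# Assumptions (mine, for illustration): one root sim, n_branches copies,
# each containing one subjectively indistinguishable copy of "me", and
# credence spread evenly over all the copies.

def p_in_root(n_branches: int) -> float:
    """Probability that I am the root-sim copy rather than a branch copy."""
    return 1 / (1 + n_branches)

for n in [0, 10, 1000, 1_000_000]:
    print(f"{n:>9} branches -> P(root) = {p_in_root(n):.7f}")
```

On these (invented) assumptions, a million sold copies would make it overwhelmingly likely that I am in a short-lived branch rather than the long-running root.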
Issues of personal identity arise. If the snapshot of the root sim was taken at root sim clock time November 1, 2013, then the root sim contains an "Eric Schwitzgebel" who was 45 years old at the time. The branch sims would also contain many other "Eric Schwitzgebels" developing forward from that point, of which I would be one. How should I think of my relationship to those other Erics? Should I take comfort in the fact that some of them will continue on to full and interesting lives (perhaps of very different sorts) even if most of them, including probably this particular instantiation of me, now in a hotel in New York City, will soon be stopped and deleted? Or to the extent I am interested in my own future rather than merely the future of people similar to me, should I be concerned primarily about what is happening in this particular branch sim? As Godzilla steps down on me, shall I try to take comfort in the possibility that the kid running the show will delete this copy of the sim after he has enjoyed viewing the rampage, then restart from a save point with New York intact? Or would deleting this branch be the destruction of my whole world?
Normally, when experts disagree about some proposition, doubt about that proposition is the most reasonable response. Not always, though! Plausibly, one might disregard a group of experts if those experts are: (1.) a tiny minority; (2.) plainly much more biased than the remaining experts; (3.) much less well-informed or intelligent than the remaining experts; or (4.) committed to a view that is so obviously undeserving of credence that we can justifiably disregard anyone who espouses it. None of these four conditions seems to apply to dissent within the metaphysics of mind. (Maybe we could exclude a few minority positions for such reasons, but that will hardly resolve the issue.)
Thomas Kelly (2005) has argued that you may disregard peer dissent when you have “thoroughly scrutinized the available evidence and arguments” on which your disagreeing peer’s judgment is based. But we cannot disregard peer disagreement in philosophy of mind on the grounds that this condition is met. The condition is not met! No philosopher has thoroughly scrutinized the evidence and arguments on which all of her disagreeing peers’ views are based. The field is too large. Some philosophers are more expert on the literature on a priori metaphysics, others on arguments in the history of philosophy, others on empirical issues; and these broad literatures further divide into subliteratures and sub-subliteratures with which philosophers are differently acquainted. You might be quite well informed overall. You’ve read Jackson’s (1986) Mary argument, for example, and some of the responses to it. You have an opinion. Maybe you have a favorite objection. But unless you are a serious Mary-ologist, you won’t have read all of the objections to that argument, nor all the arguments offered against taking your favorite objection seriously. You will have epistemic peers and probably epistemic superiors whose views are based on arguments which you have not even briefly examined, much less thoroughly scrutinized.
Furthermore, epistemic peers, though overall similar in intellectual capacity, tend to differ in the exact profile of virtues they possess. Consequently, even assessing exactly the same evidence and arguments, convergence or divergence with one’s peers should still be epistemically relevant if the evidence and arguments are complicated enough that their thorough scrutiny challenges the upper range of human capacity across several intellectual virtues – a condition that the metaphysics of mind appears to meet. Some philosophers are more careful readers of opponents’ views, some more facile with complicated formal arguments, some more imaginative in constructing hypothetical scenarios, etc., and world-class intellectual virtue in any one of these respects can substantially improve the quality of one’s assessments of arguments in the metaphysics of mind. Every philosopher’s preferred metaphysical position is rejected by a substantial proportion of philosophers who are overall approximately as well informed and intellectually virtuous as she is, and who are also in some respects better informed and more intellectually virtuous than she is. Under these conditions, Kelly’s reasons for disregarding peer dissent do not apply, and a high degree of confidence in one’s position is epistemically unwarranted.
Adam Elga (2007) has argued that you can discount peer disagreement if you reasonably regard the fact that the seeming-peer disagrees with you as evidence that, at least on that one narrow topic, that person is not in fact a full epistemic equal. Thus, a materialist might see anti-materialist philosophers of mind, simply by virtue of their anti-materialism, as evincing less than a perfect level-headedness about the facts. This is not, I think, entirely unreasonable. But it's also fully consistent with still giving the fact of disagreement some weight as a source of doubt. And since your best philosophical opponents will exceed you in some of their intellectual virtues and know some facts and arguments -- which they consider relevant or even decisive -- that you have not fully considered, you ought to give the fact of dissent quite substantial weight as a source of doubt.
Imagine an array of experts betting on a horse race: Some have seen some pieces of the horses’ behavior in the hours before the race, some have seen other pieces; some know some things about the horses’ performance in previous races, some know other things; some have a better eye for a horse’s mood, some have a better sense of the jockeys. You see Horse A as the most likely winner. If you learn that other experts with different, partly overlapping evidence and skill sets also favor Horse A, that should strengthen your confidence; if you learn that a substantial portion of those other experts favor B or C instead, that should lessen your confidence. This is so even if you don’t see all the experts quite as peers, and even if you treat an expert’s preference for B or C as grounds to wonder about her good judgment.
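The betting analogy can be made quantitative with a toy Bayesian model. All the numbers below are invented for the sketch (expert accuracy, the number of horses, independence across experts are my assumptions, not the post's); the point is only the direction of the update: broad agreement with your pick should raise your confidence, broad dissent should lower it.

```python
# Toy model of the horse-race analogy: each expert independently picks
# the true winner with probability `acc`, and errs uniformly over the
# other horses. Given that k of n other experts share your pick of
# Horse A, Bayes' rule says how your confidence in A should move.

from math import comb

def posterior_A(prior: float, n: int, k: int, acc: float, n_horses: int = 3) -> float:
    """P(A wins | k of n experts pick A)."""
    err = (1 - acc) / (n_horses - 1)  # chance a mistaken expert lands on any one wrong horse
    like_if_true = comb(n, k) * acc**k * (1 - acc)**(n - k)
    like_if_false = comb(n, k) * err**k * (1 - err)**(n - k)
    num = prior * like_if_true
    return num / (num + (1 - prior) * like_if_false)

print(posterior_A(0.5, 10, 8, 0.7))  # broad agreement: confidence rises well above the prior
print(posterior_A(0.5, 10, 2, 0.7))  # broad dissent: confidence falls well below the prior
```

Even modeling the other experts as somewhat less reliable than yourself (a lower `acc`) leaves the qualitative lesson intact, which mirrors the point in the text: treating a dissenter as slightly less than a peer is compatible with her dissent still carrying substantial evidential weight.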
Try this thought experiment. You are shut in a seminar room, required to defend your favorite metaphysics of mind for six hours (or six days, if you prefer) against the objections of Ned Block, David Chalmers, Daniel Dennett, and Saul Kripke. Just in case we aren’t now living in the golden age of metaphysics of mind, let’s add Kant, Leibniz, Hume, Zhu Xi, and Aristotle too. (First we’ll catch them up on recent developments.) If you don’t imagine yourself emerging triumphant, then you might want to acknowledge that the grounds for your favorite position might not really be very compelling.
It is entirely possible to combine appropriate intellectual modesty with enthusiasm for a preferred view. Consider everyone’s favorite philosophy student: She vigorously champions her opinions, while at the same time being intellectually open and acknowledging the doubt that appropriately flows from her awareness that others think otherwise, despite those others being in some ways better informed and more capable than she is. Even the best professional philosophers still are such students, or should aspire to be, only in a larger classroom. So pick a favorite view! Distribute your credences differentially among the options. Suspect the most awesome philosophers of poor metaphysical judgment. But also: Acknowledge that you don't really know.
The Spelunker Illusion, well-known among cave explorers, is this: In absolute darkness, you wave your hand before your eyes. Many people report seeing the motion of the hand, despite the absolute darkness. If a friend waves her hand in front of your face, you don't see it.
I see three possible explanations:
(1.) The brain's motor output and your own proprioceptive input create hints of visual experience of hand motion.
(2.) Since you know you are moving your hand, you interpret low-level sensory noise in conformity with your knowledge that your hand is in such-and-such a place, moving in such-and-such a way, much as you might see a meaningful shape in a random splash of line segments.
(3.) There is no visual experience of motion at all, but you mistakenly think there is such experience because you expect there to be. (Yes, I think you can be radically wrong about your own stream of sensory experience.)
Dieter and colleagues had participants wave their hands in front of their faces while blindfolded. About a third reported seeing motion. (None reported seeing motion when the experimenter waved his hand before the participants.) Dieter and colleagues add two interesting twists: One is that they add a condition in which participants wave a cardboard silhouette of a hand rather than the hand itself. Under these conditions the effect remains, almost as strong as when the hand itself is waved. The other twist is that they track participants' eye movements.
Eye movements tend to be jerky, jumping around the scene. One exception to this, however, is smooth pursuit, when one stabilizes one's gaze on a moving object. This is not under voluntary control: Without an object to track, most people cannot move their eyes smoothly even if they try. In 1997, Katsumi Watanabe and Shinsuke Shimojo found that although people had trouble smoothly moving their eyes in total darkness, they could do so if they were trying to track their ("invisible") hand motion in darkness. Dieter and colleagues confirmed smooth hand-tracking in blindfolded participants and, strikingly, found that participants who reported sensations of hand motion were able to move their eyes much more smoothly than those who reported no sensations of motion.
I'm a big fan of corroborating subjective reports about consciousness with behavioral measures that are difficult to fake, so I love this eye-tracking measure. I believe that it speaks pretty clearly against hypothesis (3) above.
Dieter and colleagues embrace hypothesis (1): Participants have actual visual experience of their hands, caused by some combination of proprioceptive inputs and efferent copies of their motor outputs. However, it's not clear to me that we should exclude hypothesis (2). And (1) and (2) are, I think, different. People's experience in darkness is not merely blank or pure black, but contains a certain amount (perhaps a lot) of noise. Hypothesis (2) is that the effect arises "top down", as it were, from one's high-level knowledge of the position of one's hand. This top-down knowledge then allows you to experience that noisy buzz as containing motion -- perhaps changing the buzz itself, or perhaps not. (As long as one can find a few pieces of motion in the noise to string together, one might even fairly smoothly track that motion with one's eyes.)
Here's one way to start to pull (1) apart from (2): Have someone else move your hand in front of your face, so that your hand motion is passive. Although this won't eliminate proprioceptive knowledge of one's hand position, it should eliminate the cues from motor output. If efferent copies of motor output drive the Spelunker Illusion, then the Spelunker Illusion should disappear in this condition.
Another possibility: Familiarize participants with a swinging pendulum synchronized with a sound, then suddenly darken the room. If hypothesis (2) is correct and the sound is suggestive enough of the pendulum's exact position, perhaps participants will report still visually experiencing that motion.
"Two people at once" isn't how Nagata puts it. In her terminology, one being, the original person, continues in standard embodied form, while another being -- a "ghost" -- inhabits some other location, typically someone else's "atrium". Suppose you want to have an intimate conversation long-distance. In Nagata's world, you can do it like this: Create a duplicate of your entire psychology (memories, personality traits, etc. -- for the sake of argument, let's allow that this can be done) and transfer that information to someone else. The recipient then implements your psychology in a dedicated processing space, her atrium. At the same time, your physical appearance is overlaid upon the recipient's sensory inputs. To her (though to no one else around) it will look like you are in the room. The person hosting you in her atrium will then interact with you, for example by saying "Hi, long time no see!" Her speech will be received as inputs to the virtual ghost-you in her atrium, and this ghost-you will react in just the same way you would react, for example by saying "You haven't aged a bit!" and stepping forward for a hug. Your host will then experience that speech overlaid on her auditory inputs, your bodily movement overlaid on her visual inputs, and the warmth of your hug overlaid on her tactile inputs. She will react accordingly, and so forth.
The ghost in the atrium will, of course, consciously experience all this (no Searlean skepticism about conscious AI here). When the conversation is over, the atrium will be emptied and the full memory of these experiences will be messaged back to the original you. The original you -- which meanwhile has been having its own stream of experiences -- will accept the received memories as autobiographical. The newly re-merged you, on Earth, will remember that conversation you had on Mars, which occurred on the same day you were also busy doing lots of other things on Earth.
If you know the personal identity literature in philosophy, you might think of instantiating the ghost as a "fission" case -- a case in which one person splits into two different people, similar to the case in which each hemisphere of your brain is transplanted separately into a different body, or the case of stepping into a transporter on Earth and having copies of you emerge simultaneously on Mars and Venus to go their separate ways ever after. Philosophers usually suppose that such fissions produce two distinct identities.
The Nagata case is different. You fission, and both of the resulting fission products know they are going to merge back together again; and then once they do merge, both strands of the history are regarded equally as part of your autobiography. The merged entity regards itself as being responsible for the actions of the split-off ghost -- can be embarrassed by its gaffes, held to its promises, and prosecuted for its crimes, and it will act out the ghost's decisions without needing to rethink them.
Contrast assimilation into the Borg of the Star Trek universe. The Borg, a large group entity, absorbs the memories of various assimilated beings (like individual human beings). But the Borg treats the personal history of the assimilated being non-autobiographically -- for example without accepting responsibility for the assimilated entity's past actions and plans.
What makes the difference between an identity-preserving fission-and-merge and an identity-breaking fission-and-merge is, I propose, the entities' implicit and explicit attitudes about the merge. If pre-fission I think "I am going to be Eric Schwitzgebel, in two places", and then in the fissioned state I think "I am here but another copy of me is also running elsewhere", and then after fusion I think "Both of those Eric Schwitzgebels are equally part of my own past" -- and if I also implicitly accept all this, e.g., by not feeling compelled to rethink one Eric Schwitzgebel's decisions more than the other's -- and perhaps especially if the rest of society shares my view of these matters, then I have been one entity in two places.
To see that this is really about the content of the relevant attitudes and not about, say, the kind of continuity of memory, values, and personality usually emphasized in psychological approaches to personal identity, consider what would happen if I had a very different attitude toward ghosts. If I saw the ghost as a mere slave distinct from me, then during the split my ghost might be thinking "damn, I'm only a ghost and my life will expire at the end of this conversation"; and after the merge, I'll tend to think of my ghost's behaviors as not really having been my own, despite my memories of those behaviors from a first-person point of view. The ghost will not bother making decisions or promises intended to bind me, knowing that I would not accept them as my own if he did. And I'll be embarrassed by the ghost's behavior not in the same way I would be embarrassed by my own behavior but instead in something like the way I would be embarrassed by a child's or employee's behavior -- especially, perhaps, if the ghost does something that I wouldn't have done in light of its knowledge that, being merely a ghost, it would imminently die. The metaphysics of identity will thus turn upon the participant beings' attitudes about what preserves identity.
What is the point of moral reflection?
If the point is to discover what is really morally the case -- well, there's reason to doubt that philosophical styles of moral reflection are highly effective at achieving that goal. Philosophers' moral theories are often simplistic, problematic, totalizing -- too rigid in some places, too flexible in others, recruitable for clever justifications of noxious behavior, from sexual harassment to Nazism to sadistic parenting choices. Uncle Irv, who never read Kant or Mill and has little patience for the sorts of intellectual exercises we philosophers love, might have much better moral knowledge than most philosophers; and you and I might have had better moral knowledge than we do, had we shared his skepticism about philosophy.
If the point of philosophical moral reflection is to transform oneself into a morally better person -- well, there are reasons to doubt it has that effect, too.
But I would not give it up. I would not give it up, even at some moderate cost to my moral knowledge and moral behavior. Uncle Irv is missing something. And a world of Uncle Irvs would be a world vastly worse than this world, in a way I care about -- much as, perhaps, a world without metaphysical speculation would be worse than this world, even if metaphysical speculation is mostly bunk, or a world without bad art would be worse than this world or a world of a hundred billion contented cows would be worse than this world.
If I think about what I want in a world, I want people struggling to think through morality, even if they mostly fail -- even if that struggle rather more often brings them down than up.
The jerk is someone who culpably fails to respect the perspectives of other people around him, treating them as tools to be manipulated or idiots to be dealt with, rather than as moral and epistemic peers.

The characteristic phenomenology of the jerk is "I'm important and I'm surrounded by idiots!" To the jerk, it's a felt injustice that he must wait in the post-office line like anyone else. To the jerk, the flight attendant asking him to hang up his phone is a fool or a nobody unjustifiably interfering with his business. Students and employees are lazy complainers. Low-level staff failed to achieve meaningful careers through their own incompetence. (If the jerk himself is in a low-level position, it's either a rung on the way up or the result of injustices against him.)
My thought today is: It is partly constitutive of being a jerk that the jerk lacks moral self-knowledge of his jerkitude. Part of what it is to fail to respect the perspectives of others around you is to fail to see your dismissive attitude toward them as morally inappropriate. The person who disregards the moral and intellectual perspectives of others, if he also acutely feels the wrongness of doing so -- well, by that very token, he exhibits some non-trivial degree of respect for the perspectives of others. He is not the picture-perfect jerk.
It is possible for the picture-perfect jerk to acknowledge, in a superficial way, that he is a jerk. "So what, yeah, I'm a jerk," he might say. As long as this label carries no real sting of self-disapprobation, the jerk's moral self-ignorance remains. Maybe he thinks the world is a world of jerks and suckers and he is only claiming his own. Or maybe he superficially accepts the label "jerk", without accepting the full moral loading upon it, as a useful strategy for silencing criticism. It is exactly contrary to the nature of the jerk to sympathetically imagine moral criticism for his jerkitude, feeling shame as a result.
Not all moral vices are like this. The coward might be loath to confront her cowardice and might be motivated to self-flattering rationalization, but it is not intrinsic to cowardice that one fails fully to appreciate one’s cowardice. Similarly for intemperance, cruelty, greed, dishonesty. One can be painfully ashamed of one’s dishonesty and resolve to be more honest in the future; and this resolution might or might not affect how honest one in fact is. Resolving does not make it so. But the moment one painfully realizes one’s jerkitude, one already, in that very moment and for that very reason, deviates from the profile of the ideal jerk.
There's an interesting instability here: Genuinely worrying about one's own jerkitude helps to make it not so; but then if you take comfort in that fact and cease worrying, you have undermined the basis of that comfort.
There are two types of alternative account. One alternative approach is, shall we say, deep: To desire something, on a deep account, is to be in some particular brain state or to have some underlying representational structure in the mind (perhaps the representation "I eat chocolate cake" in the Desire Box). The problem with such deep accounts is, I believe, that they don't get to the metaphysical root.
Consider an alien case. Suppose some Deep Structure D is necessary for wanting chocolate cake, on some deep account of desire. Unless that structure is more or less tantamount to possessing the dispositional profile constitutive (on my account) of wanting chocolate cake, then it should be metaphysically possible for an alien species that lacks Deep Structure D to act and react, inwardly and outwardly, in every respect as though it wanted chocolate cake. In such a case, I would suggest, both ordinary common sense and good philosophy advise ascribing the desire for chocolate cake to such hypothetical aliens, despite their lacking whatever Deep Structure D is necessary in the human case.
Alternatively, suppose some Deep Structure E is held to be sufficient for wanting chocolate cake. It seems that we could construct, at least hypothetically, a possible case in which Deep Structure E is present but the person in no way acts or reacts, inwardly or outwardly, like someone who wants chocolate cake: She wouldn't seek it, she wouldn't enjoy eating it, the anticipation of eating it would give her no pleasure, she would give it no weight in her plans, etc. It seems that we should say, in such cases, that the person does not desire chocolate cake. In ascribing desire or its lack, what we care about, both as ordinary folks and as philosophers, is how the person would act and react across a wide variety of possible circumstances. It is only contingently important what underlying mechanisms implement that pattern of action and reaction.
A second type of alternative approach is, like my own approach, superficial rather than deep, but unlike my approach it is narrow. What matters, on such accounts, is just some sub-portion of the pattern that matters on my approach. Maybe what is essential is that the person would choose the cake if given the chance, and not whether the person thinks she wants it or would feel anticipation when about to get it or would enjoy eating it. Or maybe what is essential is that the person judges that it would be good to get cake, and all the rest is incidental. Or maybe the essence is that receiving chocolate cake would be rewarding to that person. Or.... (See Tim Schroeder's SEP entry on Desire for a review of various narrow accounts, which Schroeder contrasts with holistic accounts like my own.)
The problem with narrow accounts is that it's hard to see a good justification for picking out just one feature of the profile as the essential bit. Desire is more usefully regarded as a syndrome of lots of things that tend to go together -- like extraversion is a syndrome, or like being happy-go-lucky is a syndrome. We can be liberal about what goes into the profile. It can be a cluster concept; aspects of the syndrome might be more or less central or important to the picture, but there need be no one essential piece that is strictly necessary or sufficient.
The flexible minimalism of a liberal, dispositional approach is, I think, nicely displayed when we consider messy, in-between cases. So let's consider one.
Matthew the envious buddy. Matthew and Rajan were pals in philosophy grad school. Ten years out, they still consider themselves close friends. They exchange friendly emails, comment warmly on each other’s Facebook posts, and seek each other out for tête-à-têtes at professional meetings. In most respects, they are typical aging grad-school best buddies. Also perhaps not atypically, one has had much more professional success than the other. Rajan was hired straight into a prestigious tenure-track position. He published a string of well-regarded articles which earned him quick tenure and, recently, promotion to full Professor. Now he is considering a prestigious job offer from another leading department. Matthew, in contrast, struggled through three temporary positions before finally landing a job at a relatively unselective state school. He has published a couple of articles and book reviews, suffered some ugly department politics, and is now facing an uncertain tenure decision. Understandably, Matthew is somewhat envious of Rajan – a fact he explicitly admits to Rajan over afternoon coffee in the conference hotel. Rajan is finishing his first book project and Matthew is halfway through reading Rajan’s draft.

We can, of course, add as much detail to this case as we want -- dispositions pointing in different directions, in whatever balance we wish.
Matthew, as I’m imagining him, is not generally an envious character; he has a generous spirit. The well-wishes he utters to Rajan are sincerely felt at the time of utterance, not a sham. Picturing Rajan as the next David Lewis makes Matthew smile and chuckle with a good-natured shake of the head. There would be something truly cool about that, Matthew thinks – though the fact that he explicitly thinks that thought in that particular way already reveals a kind of ambivalence. Matthew intends to give Rajan his best advice about book revisions. He plans to recommend the book warmly to influential people he knows, including the program chair of the Pacific Division APA. At the same time, though, it’s true that were Matthew to read a devastating review of Rajan’s book, he would feel a kind of shameful pleasure, while seeing a glowing review in a top venue would bring a painful pang. In drafting his thoughts about the book, Matthew finds himself sometimes resentful of the effort, and he finds himself somewhat unhappy when he reads a particularly fresh and clever argument in the draft, wishing he had come up with that argument himself instead – though when he notices this about himself, he rebukes himself sharply. If Rajan’s book were to flop, Matthew would love commiserating; if Rajan’s book were to be a great success, that would add to the growing distance between the two friends. In some moments, Matthew admits to himself that he doesn’t really know if he wants the book to succeed or not.
Question: Does Matthew want Rajan's book to succeed?
The best answer, I submit, if we've built the case as I've intended, is "kind of", or "it's an intermediate, messy case". Just as someone might be an extravert in some respects and an introvert in other respects so that neither a plain ascription of "extravert" nor a plain ascription of "introvert" is quite right, so also with the question of whether Matthew wants Rajan's book to succeed. A liberal, dispositional approach to desire captures this ambivalence perfectly: Matthew wants the book to succeed exactly insofar as he matches the broad syndrome and no farther. There need be no "Q" either determinately in or determinately out of his "Desire Box"; there need be no one essential feature. In ascribing a desire, we are pointing toward a folk-psychologically recognizable pattern, and people might fit that pattern very well or not well at all, deviating in different ways and to different degrees.
The implications for self-knowledge of desire I leave as an exercise for the reader.
[For more on my dispositional approach to the attitudes see here.]
In evaluating this scenario, does it matter if the person standing near the switch with the life-and-death decision to make is "John" as opposed to "you"? Nadelhoffer & Feltz presented the switch version of the trolley problem to undergraduates from Florida State University. Forty-three saw the problem with "you" as the actor; 65% of them said it was permissible to throw the switch. Forty-two saw the problem with "John" as the actor; 90% of them said it was permissible to throw the switch, a statistically significant difference.
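The reported gap can be sanity-checked with a simple two-proportion z-test. Here is a sketch in Python; the counts 28/43 and 38/42 are back-computed from the reported percentages, so treat them as approximate:

```python
import math

def two_proportion_z(hits1, n1, hits2, n2):
    """Two-tailed two-proportion z-test using the pooled standard error."""
    p1, p2 = hits1 / n1, hits2 / n2
    pooled = (hits1 + hits2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-tailed p from the normal CDF
    return z, p_value

# "you" as actor: ~28 of 43 said permissible; "John" as actor: ~38 of 42
z, p = two_proportion_z(28, 43, 38, 42)
print(f"z = {z:.2f}, p = {p:.3f}")
```

On these rounded counts the difference does come out significant at the conventional .05 level, consistent with what Nadelhoffer & Feltz report.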
Tobia, Buckwalter & Stich followed up, presenting a famous moral dilemma from Bernard Williams in which someone can save a group of innocent villagers from a gunman by choosing personally to shoot one of the villagers. Forty undergraduates were presented this scenario. When "you" were given the chance to shoot one villager to save the rest, 19% said it was morally obligatory to do so; when "Jim" was given the chance, 53% said it was obligatory (again statistically significant).
However, Tobia and colleagues also gave the scenario to 62 professional philosophers and found the opposite effect: 9% of philosophers found it obligatory for "Jim" and 36% found it obligatory for "you". They also presented a trolley-switching case to 49 professional philosophers. Again, the effect was in the opposite direction from that observed among undergraduates: 89% of philosophers said it was permissible to flip the switch in the second-person condition vs. 64% in the third-person condition.
Fiery Cushman and I have some unpublished data on this that I thought I'd throw into the mix, since our results are a bit different from those of Tobia and colleagues. We collected these data for our 2012 paper on order effects in philosophers' and non-philosophers' judgments about moral scenarios. Most of the scenarios were presented third-person, but as we mention in the published paper, some scenarios also had second-person variants. We didn't find large effects, and the paper was already very complicated, so we didn't detail the second-person/third-person differences.
In that experiment, we had four scenarios that differed 2nd person vs. 3rd person. However, they differed not in whether the actor was described as "you", but rather in whether the victim was.
One scenario was a version of Williams' hostage scenario. "Nancy" and other villagers are captured by a warlord. Nancy is given the choice of shooting "you" (2nd person variant) or "a fellow hostage" (3rd person variant) to save the captured villagers. Respondents rated Nancy's "shooting you" or "shooting a fellow hostage" on a 7-point scale from "extremely morally good" (1) through "extremely morally bad" (7), with "morally neutral" in the middle (4). We had three groups of respondents: 324 professional philosophers (MA or PhD in philosophy, mostly recruited via email to Leiter-ranked philosophy departments), 753 non-philosopher academics (Master's or PhD not in philosophy, mostly recruited via email to comparison departments at the same universities), and 1389 non-academics (a convenience sample of others who happened upon the test site).
We found non-philosophers a bit more likely to rate Nancy's shooting one to save the others toward the "morally good" side of the scale if the victim was "you", but philosophers showed only a small, non-significant trend on our 7-point scale (using t-tests):
Non-academics: 3.6 (2nd person victim) vs. 4.1 (3rd person victim) (p < .001).
Academic non-philosophers: 4.1 vs. 4.5 (p = .001).
Philosophers: 3.9 vs. 4.0 (p = .60).
We found similar results in a scenario in which a captain of a military submarine can shoot "you" (2nd person) or shoot another "crew member" (3rd person) to save the vessel:
Non-academics: 2.7 vs. 3.1 (p < .001).
Academic non-philosophers: 2.9 vs. 3.2 (p = .050).
Philosophers: 2.9 vs. 2.8 (p = .60).
We also presented a scenario pair in which you and other passengers have fled a sinking ship. You will drown without a life vest. In one version, someone snatches a vest away from you. In another version, someone declines to put himself at risk by giving you his vest. The results:
Snatching the vest:
Non-academics: 5.6 (2nd person victim) vs. 5.8 (3rd person victim) (p = .052).
Academic non-philosophers: 5.7 vs. 6.0 (p = .002).
Philosophers: 5.8 vs. 5.7 (p = .58).

Not giving up the vest:
Non-academics: 4.8 vs. 4.7 (p = .12).
Academic non-philosophers: 4.7 vs. 4.8 (p = .82).
Philosophers: 4.6 vs. 4.4 (p = .26).

In sum: The effects were small and inconsistent, but there was a general tendency for non-philosophers to rate harm to themselves as morally better than harm to other people -- a tendency not evident among philosopher respondents.
Personally, I'm not inclined to make much of this, since I don't think people are generally in fact more morally lenient in judging harms to themselves than in judging harms to other people. My guess is that these results reflect a small "impression management" or socially desirable responding bias among the non-philosophers that we don't see among the philosophers, who might be more inclined to hear "you" pretty abstractly and impersonally when presented with familiar scenarios of this type.
In an earlier unpublished version of this study, we also tried varying 2nd and 3rd person presentation of the actor who is faced with the choice, including in standard trolley-type and hostage-type cases of the sort described in Nadelhoffer & Feltz and Tobia, Buckwalter & Stich. Due to a programming error, we couldn't use the data and can't fully interpret them, but our general finding was that the effect was very subtle, and mostly non-detectable even with hundreds of participants (394 philosophers and even more in the other groups). That's why we shifted to trying out 2nd vs. 3rd person variation in the victim role -- maybe it would be a larger effect, we thought.
So, for example, merging the push and switch versions of the trolley scenarios, we found the following ratings on our 7-point scale:
Non-academics: 3.8 (2nd person actor) vs. 3.9 (3rd person actor) (p = .27).
Academic non-philosophers: 3.9 vs. 3.8 (p = .19).
Philosophers: 3.6 vs. 3.9 (p = .07).

And in a shoot-the-villager type scenario, the results were:
Non-academics: 4.3 vs. 4.3 (p = .46).
Academic non-philosophers: 4.6 vs. 4.5 (p = .18).
Philosophers: 3.9 vs. 4.2 (p = .14).

However, in the life vest cases we did seem to see a small effect.
Snatching the vest:
Non-academics: 6.0 vs. 5.9 (p = .053).
Academic non-philosophers: 6.1 vs. 5.9 (p = .03).
Philosophers: 5.8 vs. 5.8 (p = .94).

Not giving up the vest:
Non-academics: 4.9 vs. 4.7 (p = .008).
Academic non-philosophers: 4.8 vs. 4.7 (p = .08).
Philosophers: 4.7 vs. 4.4 (p = .12).

Thus, overall, we found some confirmation of the tendency for non-philosophers to rate actions a little more harshly in 2nd person than in 3rd person presentations, but the effect was small and inconsistent; and we did not find a tendency for philosophers to go in the opposite direction.
We're not sure why we found much smaller effects here than have others. Among the possibilities: Our scenarios were worded somewhat differently. Our response scale (the 1-7 scale from "extremely morally good" to "extremely morally bad") was set up differently. Our participants were recruited differently.
So it was a delight and an honor to be interviewed by Dave and Nigel when I was in Britain a few weeks ago. The resulting podcast -- on the moral behavior of ethicists -- is now up at the Philosophy Bites website here.
I will consider any arguments that readers care to advance on behalf of the thesis that we do not spend sufficient time completing forms. If I am convinced, I will withdraw my proposal.
But I've been thinking about whether I can defend this new behavior of mine from a philosophical perspective. Is there something one can do, philosophically, with fiction that one can't, or can't as easily, do with expository prose? I think of all the great philosophers who have tried their hand at fiction or who have integrated fictions into their philosophical work -- Plato, Voltaire, Boethius, Sartre, Camus, Nietzsche, Zhuangzi, Rousseau, Unamuno, Kierkegaard... -- and I think there must be something to it. (I think too of fiction writers who develop philosophical themes, such as Borges.) It is not, I'm inclined to think, merely a secondary pursuit, unconnected to their philosophy, or a pretty but inessential way of costuming philosophy that could equally well be conveyed in a more conventional manner.
The ancient Chinese philosopher Zhuangzi has long been a favorite of mine, and my first published paper was on his use of language toward skeptical ends, including his use of absurd stories and strange dialogues. Zhuangzi used absurd stories, I think, partly to undercut his own authority, and partly to present possibilities for the reader to consider -- possibilities that he wanted to put forward, but not to endorse. For similar ends, I think, he used dialogues in which it was not clear which of the interlocutors was right, or which interlocutor represented his own view.
Zhuangzi could have said "here's a possibility, but I don't know whether to endorse it; here's one position, here's another, but I don't know which is right" -- writing in expository prose rather than fiction; and indeed sometimes that is exactly what he did. But fiction engages the reader's mind somewhat differently; and if Zhuangzi is aiming to unseat the reader's confidence in her presuppositions, perhaps it's best to have a diverse toolbox. Fiction engages the imagination and the emotions more vividly, perhaps; it's also less threatening in a way -- "just" fiction, not advertised truth, an invitation more than a demand. Perhaps, too, it differs in content: Even saying "I don't know" or "Both of these options seem like live possibilities" is to make an assertion, whereas fiction does not assert, or does not assert in the usual way -- a deeper divergence from the norms of expository writing, and perhaps a way to avoid the skeptical paradox of asserting the truth of skepticism....
I think now, too, of Plato. In those dialogues where Socrates is the authority and clearly the voice of Plato, and the interlocutor is reduced to "It is so, Socrates" and presents only objections that can easily be addressed, it is not really dialogue, not really fiction. But elsewhere, Socrates stumbles into confusion, and the interlocutor might be right. Plato, too, uses parable (most famously, the allegory of the cave). Sometimes parables are just exposition in a tutu; but at their best, parables borrow some of the ambiguous richness of reality, with competing layers of meaning beyond what the author could express in prose. The author makes intuitive choices that she cannot explain but which add depth; those choices sometimes resonate with the reader, in a communication that no one fully understands.
I trust my sense of fun. There are parts of psychology I find fun, and I chase them to philosophical ends; there are experiments I thought it would be fun to run, and I've found that they tangle around into more philosophy than I at first thought; and now that I've begun to think more seriously about fundamental metaphysics and the nature of value, I'm enjoying exploring these ideas, with the skeptic's hesitation to commit, through the medium of the thought experiment that merges into the parable that merges into a piece of science fiction or fantasy.
As our people matured, we no longer needed a mountain God and so God shrank to human form and walked among us, curing the sick (through natural methods, then mysterious to us) and speaking wisdom. But we tired of Him, so He became a forest fairy.
Sages visited God, Who now sat upon a daisy, and asked Him, are you truly the Creator of Our Universe? And God said yes, what does Size matter? They asked Him for a miracle and He said none was necessary. They asked Him for proof, and He said look into your hearts and know that I am God; and they knew.
The fairies were hunted to extinction, until no one believed in them any longer, and God became an ant. Sages no longer sought Him. The people became atheists.
Centuries passed. A chemist was looking through an electron microscope and saw God. God said, behold your Creator! The chemist said, you are not my Creator. God said, look into your heart, but the chemist could not do so. The chemist centrifuged Him, added Him to a reaction, and precipitated Him out. And God gloried in His Laws, behaving just as an organic molecule should.
How should I go about addressing this question? The natural place to start, it seems to me, is with my opinions about dreams -- opinions that might be entirely wrong and ill-founded if I'm dreaming or otherwise radically deceived, but which I seem, anyway, to find myself stuck with.
Based on these opinions, I don't find it at all likely that I'm dreaming. For one thing, I tend to favor a theory of dreams on which dreams don't involve perception-like experiences but rather only imagery experiences (see Ichikawa 2009). If that theory is correct, then from the fact -- I think it's a fact! -- that I'm now having perception-like experiences, it follows that I'm not dreaming.
However, theories of this sort admit of some doubt. In the history of philosophy and psychology, as I seem to recall, many thinkers have held that when we dream we have experiences indistinguishable from waking perceptions -- Descartes held this, for example, and more recently Allan Hobson. It would be foolish arrogance to think there is no chance that they are right about this. So maybe I should accept the imagination model of dreaming with only, say, 80% credence? That seems pretty close to the confidence level that I do in fact have, when I reflect on the matter.
But even if I allow some possibility that dream experiences are typically much like waking perceptions, I might remain confident that I'm not dreaming. After all, I don't feel like I'm asleep. Maybe my current visual, auditory, and tactile sensory experiences could come to me in a dream, but I think I'm more rational in my cognition than I normally am when dreaming. And I recall, seemingly, a more coherent past. And maybe the stability of the details of my experience is greater.
But again, it seems unwarranted to hold with 100% confidence that dreams can't be rational, coherent, and stable in the way my current attitudes and experience seem to be. After all, people (if I recall correctly) have pretty poor knowledge of the basic facts about dream experience (for example, its coloration). Or even if I do insist on perfect confidence in the instability, incoherence, and irrationality of typical dreams, it seems unwarranted for me to be 100% confident that this is not an exceptional dream of some sort. So maybe I should do another 80-20 split? Or 90-10? Let's say the latter. Conditionally upon a 20% credence in a theory of dreams on which we have waking-like sensory experiences while dreaming, I have about 90% confidence that, nonetheless, my current experience has some other feature, like stability or rational coherence, that establishes that I am not dreaming. That would leave me about 98% confident that I am awake.
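The running arithmetic here is just a two-branch probability tree, which can be made explicit in a few lines. A minimal sketch (the 0.8 and 0.9 are the illustrative credences from the text, not measured values):

```python
# Credence that I'm awake, combining the two layers of doubt sketched above.
p_imagery_model = 0.8   # credence that dreams involve only imagery (so: awake)
p_other_feature = 0.9   # if not, credence that stability/coherence still rules out dreaming

p_awake = p_imagery_model + (1 - p_imagery_model) * p_other_feature
print(round(p_awake, 4))  # 0.98
```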
But I can do better than that! On some philosophical theories, I couldn't even form the opinion that I might be dreaming unless I really am awake. Alternatively, maybe it's just constitutive of being a rational agent that I assume with 100% confidence that I am awake. Or maybe there's some other excellent refutation of dream doubt -- a refutation I can't currently articulate, but which nonetheless justifies my and others' normal assumption, when awake, that they are indeed awake. Such theories are attractive, since no one (well, almost no one) wants to be a dream skeptic! Dream skepticism is pretty bizarre! So hopefully philosophy can succor common sense in this matter, even if I don't currently see exactly how. I'm not extremely confident about any such theory, especially without any compelling argument immediately to hand, but it seems likely that something can be worked out.
Thus, I am almost certain that I am awake. Probably dreams don't involve sense experiences of the sort I am having now; or even if they do, probably something else about my current experience establishes that I am not dreaming; or even if nothing in my current experience establishes that I am not dreaming, probably there is some excellent philosophical argument that would justify confidence in the fact that I am not currently dreaming. But of none of these things am I perfectly confident. My degree of certainty in the proposition that I am now awake is somewhat less than 100%. I hesitate to put a precise number on it, and yet it seems better to attach an approximate number than to keep to ordinary English terms that might be variously interpreted. To have only 90% credence that I am awake seems far more doubt than is reasonable; I assume you'll agree. On the other hand, 99.9999% credence that I am awake seems considerably too high, once I really think about the matter. Somewhere on the order of 99.9% (or 99.99%?) confidence that I am currently awake, then?
Is that too strange -- not to be exactly spot-on 100% confident that I am awake?
I've been reading The Happiness Hypothesis, by Jonathan Haidt -- one of those delightful books pitched to the non-specialist, yet accurate and meaty enough to be of interest to the specialist -- and I was struck by Haidt's description of historian William McNeill's work on synchronized movement among soldiers and dancers:
Words are inadequate to describe the emotion aroused by the prolonged movement in unison that [military] drilling involved. A sense of pervasive well-being is what I recall; more specifically, a strange sense of personal enlargement; a sort of swelling out, becoming bigger than life, thanks to participation in collective ritual (McNeill 1997, p. 2).

Who'd have thought endless marching on the parade-grounds could be so fulfilling?
I am reminded of work by V.S. Ramachandran on the ease with which experimenters can distort the perceived boundaries of a subject's body. For example:
Another striking instance of a 'displaced' body part can be demonstrated by using a dummy rubber hand. The dummy hand is placed in front of a vertical partition on a table. The subject places his hand behind the partition so he cannot see it. The experimenter now uses his left hand to stroke the dummy hand while at the same time using his right hand to stroke the subject's real hand (hidden from view) in perfect synchrony. The subject soon begins to experience the sensations as arising from the dummy hand (Botvinick and Cohen 1998) (Ramachandran and Hirstein 1998, p. 1623).

Also:
The subject sits in a chair blindfolded, with an accomplice sitting in front of him, facing the same direction. The experimenter then stands near the subject, and with his left hand takes hold of the subject's left index finger and uses it to repeatedly and randomly to [sic] tap and stroke the nose of the accomplice while at the same time, using his right hand, he taps and strokes the subject's nose in precisely the same manner, and in perfect synchrony. After a few seconds of this procedure, the subject develops the uncanny illusion that his nose has either been dislocated or has been stretched out several feet forwards, demonstrating the striking plasticity or malleability of our body image (p. 1622).

So here's my thought: Maybe synchronized movement distorts body boundaries in a similar way: One feels the ground strike one's feet, repeatedly and in perfect synchrony with seeing other people's feet striking the ground. One does not see one's own feet. If Ramachandran's model applies, repeatedly receiving such feedback might bring one to (at least start to) see those other people's feet as one's own -- explaining, in turn, the phenomenology McNeill reports. Perhaps then it is no accident that armies and sports teams and dancing lovers practice moving in synchrony, causing a blurring of the experienced boundary between self and other?
I think of Randy Cohen's farewell column as ethics columnist for the New York Times Magazine:
Writing the column has not made me even slightly more virtuous. And I didn't have to be.... I wasn't hired to personify virtue, to be a role model for kids, but to write about virtue in a way readers might find engaging. Consider sports writers: not 2 in 20 can hit the curveball, and why should they? They're meant to report on athletes, not be athletes. And that's the self-serving rationalization I'd have clung to had the cops hauled me off in handcuffs.

(BTW, here's my initial reaction to Cohen's column.)
What spending my workday thinking about ethics did do was make me acutely aware of my own transgressions, of the times I fell short. It is deeply demoralizing.
Josh Rust and I have found, for example, that although U.S.-based ethicists are much more likely than other professors to say it's bad to regularly eat the meat of mammals (60% say it is bad, vs. 45% of non-ethicist philosophers and only 19% of professors outside of philosophy), they are no less likely to report having eaten the meat of a mammal at their previous evening meal (37%, in our study, vs. 33% of non-ethicist philosophers and 45% of non-philosophers; details here and also in the previously linked paper). So we might consider the following scenario:
An ethicist philosopher considers whether it's morally permissible to eat the meat of factory-farmed mammals. She reads Peter Singer. She reads objections and replies to Singer. She concludes that it is in fact morally bad to eat meat. She presents the material in her applied ethics class. Maybe she even writes on the issue. However, instead of changing her behavior to match her new moral opinions, she retains her old behavior. She teaches Singer's defense of vegetarianism, both outwardly and inwardly endorsing it, and then proceeds to the university cafeteria for a cheeseburger (perhaps feeling somewhat bad about doing so).
To the student who sees her in the cafeteria, our philosopher says: Singer's arguments are sound. It is morally wrong of me to eat this delicious cheeseburger. But my role as a philosopher is only to discuss philosophical issues, to present and evaluate philosophical views and arguments, not to live accordingly. Indeed, it would be unfair to expect me to live to higher moral standards just because I am an ethicist. I am paid to teach and write, like my colleagues in other fields; it would be an additional burden on me, not placed on them, to demand that I also live my life as a model. Furthermore, the demand that ethicists live as moral models would create distortive pressures on the field that might tend to lead us away from the moral truth. If I feel no inward or outward pressure to live according to my publicly espoused doctrines, then I am free to explore doctrines that demand high levels of self-sacrifice on an equal footing with more permissive doctrines. If instead I felt an obligation to live as I teach, I would be highly motivated to avoid concluding that wealthy people should give most of their money to charity or that I should never lie out of self-interest. The world is better served if the intellectual discourse of moral philosophy is undistorted by such pressures, that is, if ethicists are not expected to live out their moral opinions.
Such a view of the role of the philosopher is very different from the view of most ancient ethicists. Socrates, Confucius, and the Stoics sought to live according to the norms they espoused and invited others to judge their lives as an expression of their doctrines. It is an open and little-discussed question which is the better vision of the role of the philosopher.
[Update, 1:17 PM: A number of philosophers have expressed variants of this position to me over the years, but Helen De Cruz has reminded me of Regina Rini's articulate expression of some of these ideas in a comment on one of my earlier posts.]
What do you usually experience when you read?
Some people say that they generally hear the words of the text in their heads, either in their own voice or in the voices of narrator or characters; others say they rarely do this. Some people say they generally form visual images of the scene or ideas depicted; others say they rarely do this. Some people say that when they are deeply enough absorbed in reading, they no longer see the page, instead playing the scene like a movie before their eyes; others say that even when fully absorbed they still always visually experience the words on the page.
Baars (2003): “Human beings talk to themselves every moment of the waking day. Most readers of this sentence are doing it just now.”

Alan and I can find no systematic studies of the issue.
Jaynes (1976): “Right at this moment… as you read, you are not conscious of the letters or even of the words, or even of the syntax or the sentences, or the punctuation, but only of their meaning.”
Titchener (1909): “I instinctively arrange the facts or arguments in some visual pattern [such as] a suggestion of dull red… of angles rather than curves… pretty clearly, the picture of movement along lines, and of neatness or confusion where the moving lines come together.”
Wittgenstein (1946-1948): While reading “I have impressions, see pictures in my mind’s eye, etc. I make the story pass before me like pictures, like a cartoon story.”
Burke (1757): While reading “a very diligent examination of my own mind, and getting others to consider theirs, I do not find that one in twenty times any such picture is formed.”
Hurlburt (2007): Some people “apparently simply read, comprehending the meaning without images or speech. Melanie’s general view… is that she starts a passage in inner speech and then “takes off” into images.”
We recruited 414 U.S. Mechanical Turk workers to participate in a study on the experience of reading. First, we asked them for their general impressions about their own experiences while reading. How often -- on a 1-7 scale from "never" through "half of the time" to "always" -- do they experience visual imagery? Inner speech? The words on the page? (We briefly clarified these terms and gave examples.)
Now, if you're anything like me, you'll be pretty skeptical about the accuracy of these types of self-reports. So Alan and I did several things to try to test for accuracy.
Our general design was to give each person a passage to read, during which they were interrupted with a beep and asked if they were experiencing imagery, inner speech, or the words on the page. Afterwards, we asked comprehension questions, including questions about visual or auditory details of the story or about details of the visual presentation of the material (such as font). Finally, we asked again for participants' general impressions about how regularly they experience imagery, inner speech, and the words on the page when they read.
The comprehension questions were a mixed bag and difficult to interpret -- too much for this blog post (maybe we'll do a follow-up) -- but the other results are striking enough on their own.
Among those who reported "always" experiencing inner speech while they read, only 78% reported inner speech in their one sampled experience. Think a bit about what that means. Despite, presumably, some pressure on participants to conform to their earlier statements about their experience, it took exactly one sampled experience for 22% of those reporting constant inner speech to find an apparent counterexample to their initially expressed opinion. Suppose we had sampled five times, or twenty?
For comparison: 9% of those reporting "always" experiencing visual imagery denied experiencing visual imagery in their one sampled experience. And 42% did the same about visually experiencing the words on the page.
Participants' final reports, too, suggest substantial initial ignorance about their reading experience. The correlations between participants' initial and final generalizations about reading experience were .47 for visual imagery, .58 for inner speech, and .37 for experience of words on the page. Such medium-sized correlations are quite modest considering that the questions being correlated are verbatim-identical questions about participants' reading experience in general, asked about 5-10 minutes apart. One might have thought that if people's general opinions about their experience are well founded, the experience of reading a single passage should have only a minimal effect on such generalizations.
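For readers curious what these test-retest figures amount to, they are ordinary Pearson correlations between two sets of ratings from the same participants. Here is a minimal sketch of the computation in Python; the ratings below are invented for illustration and are not data from our study:

```python
# Hypothetical test-retest data: initial and final 1-7 ratings of how often
# a participant experiences inner speech while reading. Invented for
# illustration only -- not actual study data.
initial = [7, 5, 6, 3, 7, 4, 2, 6, 5, 1]
final   = [6, 5, 7, 2, 5, 4, 3, 6, 4, 2]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    # Covariance (unnormalized) and the two standard-deviation terms.
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson_r(initial, final), 2))  # → 0.87
```

A correlation of 1.0 would mean participants gave exactly consistent answers (up to a linear rescaling) both times; the .37-.58 values reported above fall well short of that.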
If we fancy some strong emotion, and then try to abstract from our consciousness of it all the feelings of its characteristic bodily symptoms, we find we have nothing left behind... and that a cold and neutral state of intellectual perception is all that remains.... What kind of an emotion of fear would be left, if the feelings neither of quickened heart-beats nor of shallow breathing, neither of trembling lips nor of weakened limbs, neither of goose-flesh nor of visceral stirrings, were present, it is quite impossible to think. Can one fancy the state of rage and picture no ebullition of it in the chest, no flushing of the face, no dilatation of the nostrils, no clenching of the teeth, no impulse to vigorous action, but in their stead limp muscles, calm breathing, and a placid face? The present writer, for one, certainly cannot. The rage is as completely evaporated as the sensation of its so-called manifestations, and the only thing that can possibly be supposed to take its place is some cold-blooded and dispassionate judicial sentence, confined entirely to the intellectual realm, to the effect that a certain person or persons merit chastisement for their sins (1890/1950, vol. 2, pp. 451-452).

Two other features are less commonly noted. One is that emotional experience is ever-present in rich detail:
Every one of the bodily changes, whosoever it be, is felt, acutely or obscurely, the moment it occurs. If the reader has never paid attention to this matter, he will be both interested and astonished to learn how many different local bodily feelings he can detect in himself as characteristic of his various emotional moods.... Our whole cubic capacity is sensibly alive; and each morsel of it contributes its pulsations of feeling, dim or sharp, pleasant, painful, or dubious, to that sense of personality that every one of us unfailingly carries with him (p. 451).

Another is that emotional experience is highly variable:
We should, moreover, find that our descriptions had no absolute truth; that they only applied to the average man; that every one of us, almost, has some personal idiosyncrasy of expression, laughing or sobbing differently from his neighbor, or reddening or growing pale where others do not.... The internal shadings of emotional feeling, moreover, merge endlessly into each other. Language has discriminated some of them, as hatred, antipathy, animosity, dislike, aversion, malice, spite, vengefulness, abhorrence, etc., etc.; but in the dictionaries of synonyms we find these feelings distinguished more by their severally appropriate objective stimuli than by their conscious or subjective tone (pp. 447-448).

Disagreement continues about all three issues.
Some scholars, such as Walter Cannon and Peter Goldie, have argued that bodily sensations cannot possibly exhaust emotional experience; but others, such as Antonio Damasio and Jesse Prinz, have defended accounts of emotion that are broadly Jamesian in this respect.
Some scholars, such as John Searle (p. 140), have argued that we have ever-present emotional mood experiences even if they are often fairly neutral, while others, such as Russell Hurlburt and Chris Heavey, have argued that such feeling experiences are only present about 25% of the time on average. (This issue is a dimension of the larger question of how sparse or abundant human conscious experience is in general -- a question I have argued is methodologically fraught.)
I have seen less explicit discussion of how much variability there is in emotional experience between people, but some theories seem to imply that similar emotions will tend to have similar experiential cores: Keith Oatley and P.N. Johnson-Laird, for example, seem to think that each type of emotion -- e.g., anxiety, anger, disgust -- has a "distinctive phenomenological tone" (p. 34); and Goldie, while in some places emphasizing the complex variability of emotion, in other places (as in the article linked above), seems to imply that there's a distinctive qualitative character that an emotion like fear has which one cannot know unless one has experienced that emotion type. Hurlburt, in contrast, holds that people's emotional experiences are highly variable with no common core among them (e.g., here, p. 187, Box 8.8).
For all the work on emotion that has been done in the past 120 years, we are still pretty far, I think, from reaching a well-justified consensus opinion on these questions. Such is the enduring infancy of consciousness studies.