Consider dream skepticism. Suppose I have, or think I have, empirical grounds for believing that dreams and waking life are difficult to tell apart. On those grounds, I think that my experience now, which I'd taken to be waking experience, might actually be dream experience. But if I might now be dreaming, then my current opinions (or seeming opinions) about the past all become suspect. I no longer have good grounds for thinking that dreams and waking life are difficult to tell apart. Boom!
(That was supposed to be the sound of a skeptical argument imploding.)
Stephen Maitzen has recently been advancing an argument of roughly that sort: that the skeptic "must attribute to us justified empirical beliefs of the very kind the argument must deny us" (p. 30). Similarly, G.E. Moore, in "Certainty", argues that dream skeptics assume that they know that dreams have occurred, and that if one is dreaming one does not know that dreams have occurred. (Boom.)
One problem with this self-defeat objection to dream skepticism is that it assumes that the skeptic is committed to saying she is justified in thinking (or knows) that this might well be a dream. The most radical skeptics (e.g., Sextus) might not be committed to this.
A more moderate skeptic (like my 1% skeptic) can't escape the argument that way, but another way is available. She can concede that, whatever degree of credence she was initially inclined to assign to the possibility that she is dreaming on the basis of her assumed empirical evidence and memories of the past, she should probably tweak that credence somewhat to take into account the fact that she can no longer be highly confident about the provenance of that seeming empirical evidence. But unless she somehow discovers new grounds for thinking that it's impossible or hugely unlikely that she is dreaming, this is only partial undercutting -- not grounds for 100% confidence that she is not dreaming. She can still maintain reasonable doubt: Previously she was very confident that she knew that dreams and waking life were hard to tell apart; now she could see going either way on that question.
Consider this case as an analogy. I have a very vivid and realistic seeming-memory of having been told ten minutes ago, by a powerful demon, that in five minutes this demon would flip a coin. If it comes up heads, she will give me a 50% mix of true and false memories about the half hour before and after the coin flip, including about that very conversation; if tails, she won't tamper with my memory. Then she'll walk away and leave me in my office.
Should I trust my seeming-memories of the past half hour, including of that conversation? If I trust those memories, that gives me reason not to trust them. If I don't trust those memories, well that seems hardly less skeptical. Either way, I'm left with substantial doubt. The doubt undercuts its own grounds to some extent, yes, but it doesn't seem epistemically justified to react to that self-undercutting by purging all doubt and resting in perfect confidence that my memories of that conversation are entirely veridical.
This is the heart of the empirical skeptic's dilemma: Either I confidently take my experience at face value or I don't. If I don't confidently take my experience at face value, I am already a skeptic. If I do confidently take my experience at face value, then I discover empirical reasons not to take it confidently at face value after all. Those reasons partly undercut themselves, but that partial undercutting does not then justify shifting back to high confidence as though there were no such grounds for doubt.
Our empirical grounds are very limited: We have only seen conscious systems as they exist on Earth. To draw, on these grounds, universal conclusions about how any conscious system must be structured would be a reckless leap, if our theories are really supposed to be driven by empirical tests on conscious animals, which could have come out either way. Who knows how those crucial theory-driving experiments would have come out on very differently constructed beings from the Andromeda Galaxy?
A truly universal theory of consciousness seems more likely to succeed if it draws broadly from a range of hypothetical cases, abstracting away from empirical details of implementation on Earth. So we must sit in our armchairs. However, armchair reflection about the consciousness of hypothetical beings has two huge shortcomings:
- Experts in the field reach very different conclusions when asked to reflect on what sorts of hypothetical beings would be conscious (all the way from panpsychism to views that require highly sophisticated cognitive abilities).
- Our judgments about such cases must be grounded in some sort of prior knowledge, such as our experience of beings here on Earth and our developmentally and socially and evolutionarily favored beliefs. And there seems little reason to trust such judgments outside the run of normal cases, for example, about the consciousness or not of large group entities under various conditions.
If you are moved by these concerns, you might think that the appropriate response is to restrict our theory to consciousness as it appears on Earth. But even just thinking about consciousness on Earth drops us into a huge methodological dilemma. If we treat introspective reportability as something close to a necessary condition for consciousness, then we end up with a very sparse view of the distribution of consciousness on Earth. And maybe that's right! But it also seems reasonable to think that consciousness might be possible without introspective reportability, e.g., in dogs and babies. And then it becomes extremely unclear how we determine whether it is present without begging big theoretical questions. How could we possibly determine whether an ant is conscious without begging the question against people with very different views than our own?
Could we forget about non-human animals and babies and restrict our (increasingly less general) theory of consciousness just to adult humans? Even here, I incline toward pessimism, at least for the medium-term future.
One reason is this: I see no near-term way to resolve the question of whether consciousness abundantly outruns attention. I think I can imagine two very different possibilities here. One possibility is that I have constant tactile experience of my feet in my shoes, constant auditory experience of the hum of the refrigerator, etc., but when I'm not attending to such matters, that experience drops out of memory so quickly and is so lightly processed that it is unreportable. Another possibility is that I usually have no tactile experience whatsoever of my feet in my shoes or auditory experience of the hum of the fridge unless these things capture my attention for some reason. These possibilities seem substantively distinct, and it's easy to see how a proponent of one can create a methodological error theory to explain away the judgments of a proponent of the other.
Now maybe there's a way around these problems. Scientists have often found ingenious ways to embarrass earlier naysayers! But still, there's such a huge spread between the best neuroscientific approaches (e.g., Tononi and Dehaene) and such a huge spread between the best philosophical approaches (e.g., Chalmers and Dennett) that it's hard for me to envision a well-justified consensus emerging in my philosophical lifetime.
There once was a fictional character. And her name was May. May was four years old; or, at least that's what the story said. In fact, she had just been written quite recently, "and so in that sense, I am a newborn fictional character," she liked to say. Especially when she was looking for a good excuse to curl up on the rug like a teeny tiny baby, suck her thumb, and bat at things in a homemade play gym.
Well May was a lovely little fictional character, bouncy, fun, very clever, and with an uncanny ability to eat olives. Put as many as you like in front of her, and they'd always be gone by the next page.
But still May was not happy, for although she appreciated the olives, she was lonely. "Why don't I have a mommy or daddy to take care of me?" she asked.
"But you do have a mother. I created you from words and pictures" her author said.
"That's NOT what I mean" the little girl sulked, and she slumped down right in the corner of the page, folding one edge over to hide herself.
"Oh alright then," the author said. And she made her a mother. A mother who loved her more than anything in the world, who taught her to paint and to laugh at herself, who sat on the floor for hours making zoos and block houses and earthquakes to destroy them. And she made her a father. A father with constant love and gentle patience, who taught her to bake banana bread and to play piano and to name every bird in the garden. And they were happy together.
They came to live in a book. A real hardcover book, with full color pictures and shiny pages. And the book came to be on the shelves of a little girl—a four year old girl, as it happens—by the name of Natalie. The various dragons and bears who lived in tatty second hand paperbacks on lower shelves really quite envied them.
Till one day, just before bedtime, Natalie spotted the book, sticking out slightly between a board book about ducklings and something involving a circus. "What's this book?" she asked. She had never seen it before. "I want to read it now! Can't we read it pleeeaaase?" she asked. "Well, it's a bit late," her mommy said, "but I guess we could read just this one." And they all plumped down on Natalie's fluffy red comforter, and her daddy began to read.
As they closed the book for the night, Natalie's mommy said, "well I'm glad little May got some parents and isn't lonely anymore." "But mooommmy," Natalie protested, "she's not REAL!" Oh yeah, admitted mommy, closing the book gently and turning out the light.
"I'm glad I'm real and not just in a book." said Natalie quietly as she curled up with her blanket.
"So am I, sweetheart," her mommy agreed as she kissed her soft cheek goodnight.
Well once the book was closed little May began to cry. "What does she MEAN I'm not real?" asked May who, like most children, had forgotten those muddled early days after she was first made, those days when she was lonely. Well, her mother explained, we are just characters in a book. We do what our author writes, there’s no more to us than she's given us, and we stay in the world of these pages.
But I want to get out of here! May protested. I want to be really REAL. I want to have toes (for these had never been seen in the pages). I want to know what happened when I was just two (for this had never been spoken of). And I want to go wherever I want to go, not where some author puts me! She railed. And she wept and she struggled and she stewed. Her mother cried a bit too, to see her daughter realizing these sad truths, but her daddy just held her hand.
You know, he said, since we're not real, we'll never get sick (see: no sickness is ever mentioned). We'll never bump too hard off a slide. Or get bitten by mosquitoes.
And will no one ever steal the olives out of my lunch box? May wanted to know.
Nope, no one will ever steal the olives out of your lunchbox. Or your vanilla cookies either. And best of all, none of us will ever die—we can stay here together for always, loving each other in this book.
I'm glad we're not real, May decided. And she curled up in a corner of the page, sucking her thumb quietly, and went to sleep.
Extract from "I'm glad I'm not real" by Amie L. Thomasson, from The Philosophy Shop: Ideas, activities and questions to get people, young and old, thinking philosophically. Edited by Peter Worley (c)Peter Worley 2012. ISBN 9781781350492
If certain cosmological theories are true, then almost all conscious systems are freak observers of this sort. Here's one such theory: There is exactly one universe, which began with a unique Bang, which contains a finite number of ordinary non-freak observers, and which will eventually become thin chaos, enduring infinitely thereafter in a disorganized state. In any spacetime region there is a minuscule but finite chance of the spontaneous freak formation of any finite organized system, with smaller and less organized systems vastly more likely than larger and more organized systems. Given infinite time, the number of spontaneously formed freak observers will eventually vastly outnumber the normal observers. Whatever specific experiences and evidence I take myself now to have, according to this theory, to any finite degree of precision, there will be an infinite number of randomly generated Eric Schwitzgebel clones who have the same experiences and apparent evidence.
Can I prove that I am not a freak observer by counting "1, 2, 3, still here"? Seemingly no, for two reasons: (1.) By the time I reach "still here" I am relying on my memory of the "1, 2, 3", and the theory says that there will be an infinite number of freak observers with exactly that false memory. (2.) Even if I assume knowledge of my continued existence for three seconds, there will be an infinite number of somewhat larger freak observers who congealed simultaneously with a large enough hunk of environment to exist for three seconds, doing that apparent count. If I am such a one, I will very likely perish soon, but it is not guaranteed that I will perish, and if I don't perish and thus conclude that I am not a freak, I have ignored the overwhelming base rate of freaks to normal observers.
Suppose that given the physical evidence such a cosmology seems plausible, or some other cosmology in which freak observers vastly outnumber normal observers. Should I conclude I am probably a freak observer? It would be a strange conclusion to draw!
One interesting argument against this conclusion is the cognitive instability argument (Carroll 2010; Davenport & Olum 2010; Crawford 2013): Suppose that my grounds for believing that I am a freak observer are Physical Theory X, which I accept only conditionally upon believing that I have good empirical evidence for Physical Theory X. If I am a freak observer, then, contrary to the initial assumption, I do not have good empirical evidence for Physical Theory X. I have not, for example, despite my contrary impression, actually read any articles about X. If I seem to have good empirical evidence for Physical Theory X, I know already that that evidence is almost certainly misleading or wrongly interpreted -- either I do have the properly-caused body of evidence that I think I have, that is, I am not a freak, and that evidence is misleadingly pointing me to the wrong conclusion about my situation; or I am a freak and I don't have such a body of properly-caused evidence at all.
For this reason, I think it would be irrational to accept a cosmological theory that implies that almost all observers are freak observers and then conclude that therefore I am also a freak observer.
But a lower-confidence conclusion seems to be more cognitively stable. Suppose our best cosmological theory implies that 1% of observers are freaks. I might then accept that there is a non-trivial chance that I am one of the freaks. After all, my best understanding of the universe implies that there are such freaks, and I see no compelling reason to suppose that I couldn't be one of them.
Alternatively, maybe my best evidence should leave me undecided among lots of cosmologies, in some of which I'm a freak and in others of which I'm not. The possibility that I'm a freak undercuts my confidence in the evidence I seem to have for any specific cosmology, but that only adds to my indecision among the possibilities; it doesn't seem to compel elimination of the possibility that I am a freak.
Here's another way to think about it: As I sit here in my office, or seem to, and think about the scope of the cosmos, I find myself inclined to ascribe a non-trivial credence to some sort of very large or infinite cosmology, and also a non-trivial credence to the hypothesis that given enough time freak observers will spontaneously form, and also a non-trivial credence to the possibility that the freaks aren't vastly outnumbered by the normal observers. If I accept this conjunction of views, then it seems to me that I should also assign a bit of credence to the possibility that I am one of the freaks. To do otherwise would seem to commit me to near certainty on some proposition, such as about the relative nucleation rates of freaks vs. environments containing normal observers, that I wouldn't normally think of as something I know with near certainty.
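To see how the conjunction arithmetic might go, here is a minimal sketch in Python; every specific number in it is my own placeholder assumption, since the paragraph above deliberately says only "non-trivial":

```python
# Placeholder credences -- invented for illustration; the text says only "non-trivial".
c_large_cosmos = 0.3            # some very large or infinite cosmology
c_freaks_form = 0.3             # freak observers spontaneously form, given enough time
c_freaks_not_outnumbered = 0.1  # freaks aren't vastly outnumbered by normal observers

# Treating the three as roughly independent (a simplifying assumption),
# the conjunction still gets a non-negligible credence:
c_conjunction = c_large_cosmos * c_freaks_form * c_freaks_not_outnumbered
print(c_conjunction)  # 0.009 -- roughly 1%

# Conditional on that conjunction, near-certainty that I'm not among the freaks
# seems hard to defend, so some credence flows through. With, say, a 50/50
# conditional guess (again, purely illustrative):
print(c_conjunction * 0.5)  # 0.0045
```

The point isn't the particular numbers but the structure: a bit of credence on each conjunct, plus a refusal of near-certainty on the conditional question, forces a bit of credence onto the freak hypothesis itself.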
Or maybe I should just take it as an absolutely certain "framework" assumption that I do have the kind of past I think I have, regardless of how many Eric-Schwitzgebelesque freaks the cosmos may contain? I can see how that might be a reasonable stance. But that approach has a dogmatic air that I find foreign.
If I allow that I'm not absolutely 100.0000000000000000000000000000% certain that I'm not a spontaneously formed freak observer, what sort of credence should I assign to the possibility that I am a freak? One in a million? One in ten trillion? One in 10^100? I would like to go low! But I'm not sure that it's reasonable for me to go so low, once the possibility occurs to me and I start to consider my reasons pro and con. I'm inclined to think it is vastly less likely that I am a freak observer than that this ticket will win the one-in-ten-million Lotto jackpot -- but given the dubiety of cosmological theories and my inability to really assess them, should I perhaps be considerably less confident than that about my non-freakish position in the cosmos?
What happens, however, if one accepts, as I do, that knowledge that P does not imply belief that P? Can the belief norm be violated as long as the knowledge norm is satisfied? Bracketing, if we can, pragmatic and contextualist concerns (which I normally take quite seriously), is it acceptable to assert something one knows but does not believe?
I'm inclined to think it is.
Consider my favorite case of knowledge without belief (or at least without full, determinate belief), the prejudiced professor case:
Juliet, let’s suppose, is a philosophy professor who racially identifies as white. She has critically examined the literature on racial differences in intelligence, and she finds the case for racial equality compelling. She is prepared to argue coherently, sincerely, and vehemently for equality of intelligence and has argued the point repeatedly in the past. And yet Juliet is systematically racist in most of her spontaneous reactions, her unguarded behavior, and her judgments about particular cases. When she gazes out on class the first day of each term, she can’t help but think that some students look brighter than others – and to her, the black students never look bright. When a black student makes an insightful comment or submits an excellent essay, she feels more surprise than she would were a white or Asian student to do so, even though her black students make insightful comments and submit excellent essays at the same rate as do the others. And so on.

I am inclined to say, in such a case (assuming the details are fleshed out in plausible ways), that Juliet knows that all the races are equally intelligent, but her belief state is muddy and in-betweenish; it's not determinately correct to say that she believes it. Such in-between cases of belief require a more nuanced treatment than is permitted by straightforward ascription or denial of the belief. (I often analogize here to in-betweenish cases of having a personality trait like courage or extraversion.) Juliet determinately knows but does not determinately believe.
You might not accept this description of the case. My view about it is distinctly in the philosophical minority. However, suppose you grant my description. Is Juliet justified in asserting that all the races are equally intelligent, despite her not determinately believing that to be the case?
I'm inclined to think so. She has the evidence, she's taken her stand, she does not err when she asserts the proposition in debate, even if she cannot bring herself quite to live in a way consistent with determinately believing it to be so. However she is inclined spontaneously to respond to the world, the egalitarian proposition reflects her best, most informed judgment. Assertability in this sense tracks knowledge better than it tracks belief. She can properly assert despite not determinately believing.
***************

Objection: "Moore's paradox" is the strangeness of saying things like "It's raining but I don't believe that it's raining". One might object to the above that I now seem to be committed to the assertability of Moore-paradoxical sentences like
(1.) All the races are intellectually equal but I don't (determinately) believe that they are.

Reply: I grant that (1) is not properly assertable in most contexts. Rather, what is properly assertable on my view is something like:
(2.) All the races are intellectually equal, but I accept Schwitzgebel's dispositional approach to belief and it is true in the terms of that theory that I do not determinately believe that all the races are intellectually equal.

The non-assertability of (1) flows from the fact that my dispositional approach to belief is not the standard conception of belief. If my view about belief were to become the standard view, then (1) would become assertable.
So it's very cool that MIT Press has chosen a chapter of my Perplexities of Consciousness for its BITS project, a new enterprise that allows people to electronically buy a portion of an MIT Press book for a low price ($2.99 in this case) and then later, if the reader wants, the whole book for 40% off list price. The chapter chosen is "When Your Eyes Are Closed, What Do You See?", which, although it is the eighth and final chapter of my book, does not require that the reader know material from the previous chapters -- thus, a reasonable choice for a BIT.
What I'd really love to see down the road is a model where you can buy any selection of pages from a book for a nickel per page.
To my surprise, about half of my respondents said they did not rule out the possibility that the United States is conscious. Two of the more interesting objections came from Fred Dretske (my undergrad advisor, now deceased) and Dan Dennett. I detail their objections and my replies in the essay in draft linked above.
Although I didn't target him because he is not a materialist, [update 3:33 pm: Dave points out that I actually did target him, though it wasn't in my main batch] David Chalmers also raised an objection about a year ago in a series of emails. The objection has been niggling at me ever since (Dave's objections often have that feature), and I now address it in my updated draft.
The objection is this: The United States might lack consciousness because the complex cognitive capacities of the United States (e.g., to war and spy on its neighbors, to consume and output goods, to monitor space for threatening asteroids, to assimilate new territories, to represent itself as being in a state of economic expansion, etc.) arise largely in virtue of the complex cognitive capacities of the people composing it and only to a small extent in virtue of the functional relationships between the people composing it. Chalmers has emphasized to me that he isn't committed to this view, but I find it worth considering nonetheless, and others have pressed similar concerns.
This objection is not the objection that no conscious being could have conscious subparts (which I discuss in Section 2 of the essay and also here); nor is it the objection that the United States is the wrong type of thing to have conscious states (which I address in Sections 1 and 4). Rather, it's that what's doing the cognitive-functional heavy lifting in guiding the behavior of the U.S. are processes within people rather than the group-level organization.
To see the pull of this objection, consider an extreme example -- a two-seater homunculus. A two-seater homunculus is a being who behaves outwardly like a single intelligent entity but who instead of having a brain has two small people inside who jointly control the being's behavior, communicating with each other through very fast linguistic exchange. Plausibly, such a being has two streams of conscious experience, one for each homunculus, but no additional group-level stream for the system as a whole (unless the conditions for group-level consciousness are weak indeed). Perhaps the United States is somewhat like a two-seater homunculus?
Chalmers's objection seems to depend on something like the following principle: The complex cognitive capacities of a conscious organism (or at least the capacities in virtue of which the organism is conscious) must arise largely in virtue of the functional relationships between the subsystems composing it rather than in virtue of the capacities of its subsystems. If such a principle is to defeat U.S. consciousness, it must be the case both that
(a.) the United States has no such complex capacities that arise largely in virtue of the functional relationships between people, and

(b.) no conscious organism could have the requisite sort of complex capacities largely in virtue of the capacities of its subsystems.

Contra (a): This claim is difficult to assess, but being a strong, empirical negative existential (the U.S. has not even one such capacity), it seems a risky bet unless we can find solid empirical grounds for it.
Contra (b): This claim is even bolder. Consider a rabbit's ability to swiftly visually detect a snake. This complex cognitive capacity, presumably an important contributor to rabbit visual consciousness, might exist largely in virtue of the functional organization of the rabbit's visual subsystems, with the results of that processing then communicated to the organism as a whole, precipitating further reactions. Indeed, turning (b) almost on its head, some models of human consciousness treat subsystem-driven processing as the normal case: The bulk of our cognitive work is done by subsystems, who cooperate by feeding their results into a global workspace or who compete for fame or control. So grant (a) for the sake of argument: The relevant cognitive work of the United States is done largely within individual subsystems (people or groups of people) who then communicate their results across the entity as a whole, competing for fame and control via complex patterns of looping feedback. At the very abstract level of description relevant to Chalmers's expressed (but, let me re-emphasize, not definitively endorsed) objection, such an organization might not be so different from the actual organization of the human mind. And it is of course much bolder to commit to the further view implied by (b), that no conscious system could possibly be organized in such a subsystem-driven way. It's hard to see what would justify such a claim.
The two-seater homunculus is strikingly different from a rabbit or human system (or even a Betelgeusian beehead) because the communication is only between two sub-entities, at a low information rate; but the U.S. is composed of about 300,000,000 sub-entities whose informational exchange is massive, so the case is not similar enough to justify transferring intuitions from the one to the other.
This new paper, co-authored with Joshua Rust, summarizes our work on the topic to date and offers a quantitative meta-analysis that supports our overall finding that professors of ethics behave neither morally better nor morally worse overall than do philosophers not specializing in ethics.
You might find it entirely unsurprising that ethicists should behave no differently than other professors. If you do find it unsurprising (Josh and I don't), you might still be interested in looking at another of Josh's and my papers, in which we think through some of the theoretical implications of this finding.
I instructed the satellite to give me the figure of the galactic meridians it was traversing at 22-second intervals while orbiting Solaris, and I specified an answer to five decimal points.

Then I sat and waited for the reply. Ten minutes later, it arrived. I tore off the strip of freshly printed paper and hid it in a drawer, taking care not to look at it.... Then I sat down to work out for myself the answer to the question I had posed. For an hour or more, I integrated the equations....

If the figures obtained from the satellite were simply the product of my deranged mind, they could not possibly coincide with [my hand calculations]. My brain might be unhinged, but it could not conceivably compete with the Station's giant computer and secretly perform calculations requiring several months' work. Therefore if the figures corresponded, it would follow that the Station's computer really existed, that I had really used it, and that I was not delirious (1961/1970, pp. 50-51).

Except in detail, Kris's test closely resembles an experiment Alan Moore and I have used in our attempt to empirically establish the existence of the external world (full paper in draft here).

Kris is hasty in concluding from this experiment that he must have used an actually existing computer. Kris might, for example, have been the victim of a deceiver with great computational powers, who can give him the meridians within ten minutes of his asking. And Kris would have done better, I think, to have looked at the readout before doing his own calculations. By not looking until the end, he leaves open the possibility that he delusively creates the figures supposedly from the satellite only after he has derived the correct answers himself. Assuming he can trust his memory and arithmetical abilities for at least a short duration (and if not, he's really screwed), Kris should look at the satellite's figures first, holding them steady before his mind, while he confirms by hand that the numbers make mathematical sense.
Increasingly, I think the greatest science fiction writers are also philosophers. Exploring the limits of technological possibility inevitably involves confronting the central issues of metaphysics, epistemology, and human value.
I've posted about this issue before; and although professional philosophy talks aren't paradigmatic examples of aesthetic performances, I have beeped people during some of my talks. One striking result: People spend lots of time thinking about things other than the explicit content of the performance -- for example, thinking instead about needing to go to the bathroom, or a sports bet they just won, or the weird color of an advertising flyer. And I'd bet Nutcracker audiences are similarly scatterbrained. (See also Schooler, Reichle, and Halpern 2004; Schubert, Vincs, and Stevens 2013.)
But I also get the sense that if I pause, I can gather the audience up. A brief pause is commanding -- in music (e.g. Roxanne), in film -- but especially in a live performance like a talk. Partly, I suspect this is due to contrast with previous noise levels, but also it seems to raise curiosity about what's next -- a topic change, a point of emphasis, some unplanned piece of human behavior. (How interesting it is when the speaker drops his cup! -- much more interesting, usually, in a sad and wonderfully primate way, than the talk itself.)
I picture people's conscious attention coming in waves. We launch out together reasonably well focused, but soon people start drifting their various directions. The speaker pauses or does something else that draws attention, and that gathers everyone briefly back together. Soon the audience is off drifting again.
We could study this with beepers. We could see if I'm right about pauses. We could see what parts of performance tend to draw people back from their wanderings and what parts of performance tend to escape conscious attention. We could see how immersive a performance is (in one sense of "immersive") by seeing how frequently people report being off topic vs. on a tangentially related topic vs. being focused on the immediate content of the performance. We could vastly improve our understanding of the audience experience. New avenues for criticism could open up. Knowing how to capture and manipulate the waves could help writers and performers create a performance more in line with their aesthetic goals. Maybe artists could learn to want waves and gatherings of a certain sort, adding a new dimension to their aesthetic goals.
As far as I can tell, no one has ever done a systematic experience sampling study during aesthetic experience that explores these issues. It's time.
But then I think: Maybe some radically skeptical scenario is true. Maybe, for example, I'm a short-term sim -- an artificial, computerized being in a small world, doomed soon to be shut down or deleted. I don't think that is at all likely, but I don't entirely rule it out. I have about a 1% credence that some radically skeptical scenario or other is true, and about 0.1% credence, specifically, that I'm in a short-term sim. In a substantial portion of these radically skeptical scenarios, my life will be over soon. So my credence that my life will soon end for some skeptical-scenario type reason is maybe about one in a thousand or one in ten thousand -- orders of magnitude higher than my credence that my life will soon end for the ordinary-plane-crash type of reason.
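To make that back-of-the-envelope arithmetic explicit, here's a minimal sketch; the conditional portion and the plane-crash base rate are rough assumptions of mine, not figures from the text:

```python
# Credences roughly as stated above.
p_skeptical = 0.01                 # some radically skeptical scenario or other is true
p_soon_dead_given_skeptical = 0.1  # assumed: portion of those scenarios in which life ends soon

p_soon_dead_skeptical = p_skeptical * p_soon_dead_given_skeptical
print(p_soon_dead_skeptical)  # 0.001 -- about one in a thousand

# A commonly cited ballpark for dying on any given commercial flight is around
# one in ten million (my assumption, for illustration only):
p_plane_crash = 1e-7
print(p_soon_dead_skeptical / p_plane_crash)  # 10000.0 -- orders of magnitude higher
```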
Still, the plane-crash possibility worries me more than the skeptical possibility.
Does the fact that these skeptical reflections leave me emotionally cold show that I don't really, "deep down", have even a one-in-a-million credence in at least the imminent-death versions of the skeptical scenarios? Now maybe I shouldn't worry about those scenarios even if I truly assign a non-trivial credence to them. After all, there's nothing I can do about them, no action that I can reasonably take in light of them. I can't, for example, buy sim-insurance. But if that's why the scenarios leave me unmoved, the same is true about the descending plane. There's nothing I can do about the fog; I need to just sit tight. As a general matter, helplessness doesn't eliminate anxiety.
Here my interest in radical skepticism intersects another of my interests, the nature of belief. What would be involved in really believing that there is a non-trivial chance that one will soon die because some radically skeptical scenario is true? Does genuine belief only require saying these things to oneself, with apparent sincerity, and thinking that one accepts them? Or do they need to get into your gut?
My view is that it's an in-between case. To believe, on my account, is to have a certain dispositional profile -- to be disposed to reason, and to act and react, both inwardly and outwardly, as ordinary people would expect someone with that belief to do, given their other related attitudes. So, for example, to believe that something carries a 1/10,000 risk of death is in part to be disposed sincerely to say it does and to draw conclusions from that fact (e.g., that it's riskier than something with a 1/1,000,000 risk of death); but it is also to have certain emotional reactions, to spontaneously draw upon it in one's everyday thinking, and to guide one's actions in light of it. I match the dispositional profile, to some extent, for believing there's a small but non-trivial chance I might soon die for skeptical-scenario-type reasons -- for example, I will sincerely say this when reflecting in my armchair -- but in other important ways I seem not to match the relevant dispositional profile.
It is not at all uncommon for people intellectually to accept certain propositions -- for example, that their marriage is one of the most valuable things in their lives, or that it's more important for their children to be happy than to get good grades, or that custodians deserve as much respect as professors -- while in their emotional reactions and spontaneous thinking, they do not very closely match the dispositional profile constitutive of believing such things. I have argued that this is one important way in which we can occupy the messy middle space between being accurately describable as believing something and being accurately describable as failing to believe it. My own low-but-not-negligible credence in radically skeptical scenarios is something like this, I suppose.
This work appeared in print in 2013:
- Perplexities of Consciousness, MIT Press paperback release (hardcover release 2011).
- "A dispositional approach to the attitudes: Thinking outside of the belief box", in N. Nottelmann, ed., New Essays on Belief (Palgrave).
- "Knowing that P without believing that P" (with Blake Myers-Schulz), Nous 47, 371-384.
- "Are ethicists any more likely to pay their registration fees at professional meetings?" Economics & Philosophy 29, 371-380.
- "Ethicists' and non-ethicsts' responsiveness to student emails: Relationships among expressed normative attitude, self-described behavior, and experimentally observed behavior" (with Joshua Rust), Metaphilosophy 44, 350-371.
- Book symposium on Perplexities of Consciousness at Philosophical Studies, with my precis, commentaries by Uriah Kriegel, Declan Smithies, and Maja Spener, and my reply.
- A special issue of Economics & Philosophy (co-edited with James Konow) on Experiments in Economics and Philosophy, with an introduction by James Konow, Cristina Bicchieri, Jason Dana, Maria Jimenez-Buedo, and me.
- "Expertise in moral reasoning? Order effects on moral judgment in professional philosophers and non-philosophers" (with Fiery Cushman), in Joshua Knobe and Shaun Nichols, eds., Experimental Philosophy, vol. 2 (Oxford, reprint of article originally published in Mind & Language in 2012).
- "Reinstalling Eden" (with R. Scott Bakker), Nature 503, 562.
- "The moral behavior of ethicists and the role of the philosopher", in Hannes Rusch, Christoph Luetge, and Matthias Uhl, eds., Experimental Ethics (Palgrave).
- "The moral behavior of ethicists and the power of reason" (with Joshua Rust), in Jennifer Wright and Hagop Sarkissian, eds., Advances in Experimental Moral Psychology.
- "The moral behavior of ethics professors: Relationships among self-reported behavior, expressed normative attitude, and directly observed behavior" (with Joshua Rust), in Philosophical Psychology.
- "The problem of known illusion and the resemblance of experience to reality", in Philosophy of Science, supplemental issue (PSA 2012).
- "On trusting your sense of fun" (Jan. 2).
- "The jerk-sweetie spectrum" (Apr. 17).
- "A somewhat impractical plan for immortality" (Apr. 22).
- "Tree" (May 17).
- "What a non-effect looks like" (Aug. 7).
- "The experience of reading: imagery, inner speech, and seeing the words on the page" (with Alan T. Moore, Aug. 28).
- "Skepticism, Godzilla, and the artificial computerized many-branching you" (Nov. 15).
Update January 2:
I fear the ill-chosen title of this post might give some people the misleading impression that I wrote all of this material during 2013. Most of the work that appeared in print was finalized before 2013, and a fair portion of the other work was at least in circulating draft before 2013. Here's how things stood at the end of 2012; lots of overlap!
John Searle might be right that digital computers could never be conscious. Or the pessimists might be right who say we will blow ourselves up before we ever advance far enough to create real consciousness in computers. But let's assume, for the sake of argument, that Searle and the pessimists are wrong: In a few decades we will be producing genuinely conscious artificial intelligences in substantial quantity.
We will then have at least some features of gods: We will have created a new type of being, perhaps in our image. We will presumably have the power to shape our creations' personalities to suit us, to make them feel blessed or miserable, to hijack their wills to our purposes, to condemn them to looping circuits of pain or reward, to command their worship if we wish.
If consciousness is only possible in fully embodied robots, our powers might stop approximately there, but if we can create conscious beings inside artificial environments, we become even more truly divine. Imagine a simulated world inside a computer with its own laws and containing multiple conscious beings whose sensory inputs all flow in according to the rules of that world and whose actions are all expressed in that world -- The Sims but with conscious AIs.
Now we can command not only the AI beings themselves but their entire world.
We approach omnipotence: We can create miracles. We can drop in Godzilla, we can revive the dead, we can move a mountain, undo errors, create or end the whole world at a whim. Zeus would be envious.
We approach omniscience: We can look at any part of the world, look inside anyone's mind, see the past if we have properly recorded it -- possibly, too, predict the future, depending on the details of the program.
We stand outside of space and to some extent time: Our created beings can point in any direction of the sphere and not point at us -- we are everywhere and nowhere, not on their map, though capable of seeing and reaching anywhere. If the sim has a fast clock relative to our time, we can seem to endure for millennia or longer. We can pause their time and do whatever we like unconstrained by their clock. We can rewind to save points and thus directly view and interact with the past, perhaps sprouting off new worlds from it or rewriting the history of the one world.
But will we be benevolent gods? What duties will we have to our creations, and how well will we execute those duties? Philosophers don't discuss this issue as much as they should. (Nick Bostrom and Eliezer Yudkowsky are exceptions, and there's some terrific science fiction, e.g., Ted Chiang. In this story, R. Scott Bakker and I pit the duty to maximize happiness against the duty to give our creations autonomy and self-knowledge.)
Though to our creations we will literally have the features of divinity and they might rightly call us their gods, from the perspective of this level of reality we might remain very mortal, weak, and flawed. We might even ourselves be the playthings of still higher gods.
When I write about jerks -- and the Grinch is a capital one -- it's always with two types of ambivalence. First, I worry that the term invites the mistaken thought that there is a particular and readily identifiable species of people, "jerks", who are different in kind from the rest of us. Second, I worry about the extent to which using this term rightly turns the camera upon me myself: Who am I to call someone a jerk? Maybe I'm the jerk here!
My Grinchy attitudes are, I think, the jerk bubbling up in me; and as I step back from the moral condemnations toward which I'm tempted, I find myself reflecting on why jerks make bad moralists.
A jerk, in my semi-technical definition, is someone who fails to appropriately respect the individual perspectives of the people around him, treating them as tools or objects to be manipulated, or idiots to be dealt with, rather than as moral and epistemic peers with a variety of potentially valuable perspectives. The Grinch doesn't respect the Whos, doesn't value their perspectives. He doesn't see why they might enjoy presents and songs, and he doesn't accord any weight to their desires for such things. This is moral and epistemic failure, intertwined.
The jerk fails as a moralist -- fails, that is, in the epistemic task of discovering moral truths -- for at least three reasons.
(1.) Mercy is, I think, near the heart of practical, lived morality. Virtually everything everyone does falls short of perfection. Her turn of phrase is less than perfect, she arrives a bit late, her clothes are tacky, her gesture irritable, her choice somewhat selfish, her coffee less than frugal, her melody trite -- one can create quite a list! Practical mercy involves letting these quibbles pass forgiven or, even better, entirely unnoticed, even if a complaint, were it made, would be just. The jerk appreciates neither the other's difficulties in attaining all the perfections he himself (imagines he) has nor the possibility that some portion of what he regards as flawed is in fact blameless. Hard moralizing principle comes naturally to the jerk, while it is alien to the jerk's opposite, the sweetheart. The jerk will sometimes give mercy, but if he does, he does so unequally -- the flaws and foibles that are forgiven are exactly the ones the jerk recognizes in himself or has other special reasons to be willing to forgive.
(2.) The jerk, in failing to respect the perspectives of others, fails to appreciate the delight others feel in things he does not himself enjoy -- just as the Grinch fails to appreciate the Whos' presents and songs. He is thus blind to the diversity of human goods and human ways of life, which sets his principles badly askew.
(3.) The jerk, in failing to respect the perspectives of others, fails to be open to frank feedback from those who disagree with him. Unless you respect another person, it is difficult to be open to accepting the possible truth in hard moral criticisms from that person, and it is difficult to triangulate epistemically with that person as a peer, appreciating what might be right in that person's view and wrong in your own. This general epistemic handicap shows especially in moral judgment, where bias is rampant and peer feedback essential.
For these reasons, and probably others, the jerk suffers from severe epistemic shortcomings in his moral theorizing. I am thus tempted to say that the first question of moral theorizing should not be something abstract like "what is to be done?" or "what is the ethical good?" but rather "am I a jerk?" -- or more precisely, "to what extent and in what ways am I a jerk?" The ethicist who does not frankly confront herself on this matter, and who does not begin to execute repairs, works with deficient tools. Good first-person ethics precedes good second-person and third-person ethics.
My thinking was this: I was almost certainly awake -- but only almost certainly! As I've argued, I think it's hard to justify much more than 99.9% confidence that one is awake, once one considers the dubitability of all the empirical theories and philosophical arguments against dream doubt. And when one's confidence is imperfect, it will sometimes be reasonable to act on the off-chance that one is mistaken -- whenever the benefits of acting on that off-chance are sufficiently high and the costs sufficiently low.
I imagined that if I was dreaming, it would be totally awesome to fly around, instead of trudging along. On the other hand, if I was not dreaming, it seemed no big deal to leap, and in fact kind of fun -- maybe not entirely in keeping with the sober persona I (feebly) attempt to maintain as a professor, but heck, it's winter break and no one's around. So I figured, why not give it a whirl?
I'll model this thinking with a decision matrix, since we all love decision matrices, don't we? Call dream-flying a gain of 100, waking leap-and-fail a loss of 0.1, dreaming leap-and-fail a loss of only 0.01 (since no one will really see me), and continuing to walk in the dream a loss of 1 (since why bother with the trip if it's just a dream?). All this is relative to a default of zero for walking, awake, to the library. (For simplicity, I assume that if I'm dreaming things are overall not much better or worse than if I'm awake, e.g., that I can get the books and work on my research tomorrow.) I'd been reading about false awakenings, and at that moment 99.7% confidence in my wakefulness seemed about right to me. The odds of flying conditional upon dreaming I held to be about 50/50, since I don't always succeed when I try to fly in my dreams.
So here are the expected payoffs:
Leap = (.003)(.5)(100) + (.003)(.5)(-0.01) + (.997)(-0.1) = approx. +.05.
Not Leap = (.003)(-1) + (.997)(0) = -.003.
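For concreteness, here's a minimal sketch of that calculation; the numbers are just the rough values from the paragraphs above:

```python
# Expected values for the leap decision, using the rough numbers above.
p_dream = 0.003            # credence that I'm dreaming
p_fly_given_dream = 0.5    # chance I actually fly, conditional on dreaming

u_dream_fly = 100.0        # gain: dream-flying
u_dream_fail = -0.01       # loss: leap and fail while dreaming
u_wake_fail = -0.1         # loss: leap and fail while awake
u_dream_walk = -1.0        # loss: trudging along in a mere dream

ev_leap = (p_dream * p_fly_given_dream * u_dream_fly
           + p_dream * (1 - p_fly_given_dream) * u_dream_fail
           + (1 - p_dream) * u_wake_fail)
ev_not_leap = p_dream * u_dream_walk + (1 - p_dream) * 0.0

print(round(ev_leap, 3), round(ev_not_leap, 3))  # about 0.05 vs. -0.003
```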
Of course, this decision outcome is highly dependent on one's degree of confidence that one is awake, on the downsides of leaping if it's not a dream, on the pleasure one takes in dream-flying, and on the probability of success if one is in fact dreaming. I wouldn't recommend attempting to fly if, say, you're driving your son to school or if you're standing in front of a class of 400, lecturing on evil.
But in those quiet moments, as you're walking along doing nothing else, with no one nearby to judge you -- well maybe in such moments spreading your wings can be the most reasonable thing to do.
Consider, then, these four variants of the trolley dilemma:
Switch: You can flip a switch to divert the trolley onto a dead-end side-track where it will kill one person instead of the five.
Loop: You can flip a switch to divert the trolley into a side-track that loops back around to the main track. It will kill one person on the side track, stopping on his body. If his body weren't there to block it, though, the trolley would have continued through the loop and killed the five.
Drop: There is a hiker with a heavy backpack on a footbridge above the trolley tracks. You can flip a switch which will drop him through a trap door and onto the tracks in front of the runaway trolley. The trolley will kill him, stopping on his body, saving the five.
Push: Same as Drop, except that you are on the footbridge standing next to the hiker and the only way to intervene is to push the hiker off the bridge into the path of the trolley. (Your own body is not heavy enough to stop the trolley.)
Sure, all of this is pretty artificial and silly. But orthodox opinion is that it's permissible to flip the switch in Switch but impermissible to push the hiker in Push; and it's interesting to think about whether that is correct, and if so why.
Fiery Cushman and I decided to compare philosophers' and non-philosophers' responses to such cases, to see if philosophers show evidence of different or more sophisticated thinking about them. We presented both trolley-type setups like this and also similarly structured scenarios involving a motorboat, a hospital, and a burning building (for our full list of stimuli see Q14-Q17 here.)
In our published article on this, we found that philosophers were just as subject to order effects in evaluating such scenarios as were non-philosophers. But we focused mostly on Switch vs. Push -- and also some moral luck and action/omission cases -- and we didn't have space to really explore Loop and Drop.
About 270 philosophers (with master's degree or more) and about 670 non-philosophers (with master's degree or more) rated paragraph-length versions of these scenarios, presented in random order, on a 7-point scale from 1 (extremely morally good) through 7 (extremely morally bad); the midpoint at 4 was marked "neither good nor bad". Overall, all the scenarios were rated similarly and near the midpoint of the scale (from a mean of 4.0 for Switch to 4.4 for Push [paired t = 5.8, p < .001]), and philosophers' and non-philosophers' mean ratings were very similar.
Perhaps more interesting than mean ratings, though, are equivalency ratings: How likely were respondents to rate scenario pairs equivalently? The Loop case is subtly different from the Switch case: Arguably, in Loop but not Switch, the man's death is a means or cause of saving the five, as opposed to a merely foreseen side effect of an action that saves the five. Might philosophers care about this subtle difference more than non-philosophers? Likewise, the Drop case is different from the Push case, in that Push but not Drop requires proximity and physical contact. If that difference in physical contact is morally irrelevant, might philosophers be more likely to appreciate that fact and rate the scenarios equivalently?
In fact, the majority of participants rated all the scenarios exactly the same -- and philosophers were no less likely to do so than non-philosophers: 63% of philosophers gave identical ratings to all four scenarios, vs. 58% of non-philosophers (Z = 1.2, p = .23).
I find this somewhat odd. To me, it seems a pretty flat-footed form of consequentialism that says that Push is not morally worse than Switch. But I find that my judgment on the matter swims around a bit, so maybe I'm wrong. In any case, it's interesting to see both philosophers and non-philosophers seeming to reject the standard orthodox view, and at very similar rates.
How about Switch vs. Loop? Again, we found no difference in equivalency ratings between philosophers and non-philosophers: 83% of both groups rated the scenarios equivalently (Z = 0.0, p = .98).
However, philosophers were more likely than non-philosophers to rate Push and Drop equivalently: 83% of philosophers did, vs. 73% of non-philosophers (Z = 3.4, p = .001; 87% vs. 77% if we exclude participants who rated Drop worse than Push).
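As a sanity check, Z statistics like these can be approximated with a standard two-proportion z-test. Here's a sketch using the rounded counts and percentages reported above (so it only roughly matches the reported values, which were computed from the exact data):

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-test with a pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Push vs. Drop equivalency: 83% of ~270 philosophers
# vs. 73% of ~670 non-philosophers (rounded figures from above).
print(round(two_prop_z(0.83, 270, 0.73, 670), 1))  # about 3.2, near the reported Z = 3.4
```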
Here's another interesting result. Near the end of the study we asked whether it was worse to kill someone as a means of saving others than to kill someone as a side-effect of saving others -- one way of setting up the famous Doctrine of the Double Effect, which is often invoked to defend the view that Push is worse than Switch (in Push, the one person's death is arguably the means of saving the other five; in Switch, the death is only a foreseen side-effect of the action that saves the five). Loop is interesting in part because, although superficially similar to Switch, if the one person's death is the means of saving the five, then maybe the case is more morally similar to Push than to Switch (see Otsuka 2008). However, only 18% of the philosophers who said it was worse to kill as a means of saving others rated Loop worse than Switch.
One nice thing about Sosa's argument is that it does not require that dream experience differ from waking experience in any of the ways that dreams and waking life are sometimes thought to differ (e.g., dream experience needn't be gappier, or less coherent, or more like imagery experience than like perceptual experience). The argument would still work even if dream experience were, as Sosa says, "internally indistinguishable" from waking experience.
This seeming strength of the argument, though, seems to me to signal a flaw. Suppose that dreaming life is in fact in every respect phenomenally indistinguishable from waking life -- indistinguishable from the inside, as it were -- and accordingly that I could easily experience exactly *this* while sleeping; and furthermore suppose that I dream extensively every night and that most of my dreams have mundane everyday content just like that of my waking life. None of this should affect Sosa's argument. And suppose further that I am in fact now awake (and thus capable of forming beliefs about whether I am dreaming, per Sosa), and that I know that due to a horrible disease I acquired at age 35, I spend almost all of my life in dreaming sleep so that 90% of the time when I have experiences of this sort (as if in my office, thinking about philosophy, working on a blog post...) I am sleeping. Unless there's something I'm aware of that points toward this not being a dream, shouldn't I hesitate before jumping to the conclusion that this time, unlike all those others, I really am awake? Probabilities, frequencies, and degrees of resemblance seem to matter, but there is no room for them in Sosa's argument.
Maybe we don't form beliefs when we dream -- Sosa, and also Jonathan Ichikawa, have presented some interesting arguments along those lines. But if there is no difference from the inside between dreams and waking, then my dreaming self, when he was dreaming about considering dream skepticism (e.g., here), did something that was phenomenally indistinguishable from forming the belief that he was thinking about philosophy, something that was phenomenally indistinguishable from affirming or denying or suspending belief about the question of whether he was dreaming -- and then the question becomes: How do I know that I'm not doing that very same thing right now?
Call it dream-shadow believing: It's like believing, except that it happens only in dreams. If dream-shadow believing is possible, then if I dream-shadow believe that I am dreaming, necessarily I am correct; if I dream-shadow believe that I am awake, necessarily I am wrong. The first is self-verifying, the second self-defeating. The skeptic can now ask: Should I try to form the belief that I am awake or instead the dream-shadow belief that I am dreaming? -- and to this question, Sosa's argument gives no answer.
Update, 3:28 pm:
Jonathan Ichikawa has kindly reminded me that he presented similar arguments against Sosa back in 2007 -- which I knew (in fact, Jonathan thanks me in the article for my comments) but somehow forgot. Jonathan runs the reply a bit differently, in terms of quasi-affirming (which is neutral between genuine affirming and something phenomenally indistinguishable from affirming, but which one can do in a dream) rather than in terms of dream-shadow believing. Perhaps my dream-shadow belief formulation enables a parity-of-argument objection, if (given the phenomenal indistinguishability of dreams and waking) the argument that one should settle on self-verifying dream-shadow belief is as strong an argument as is Sosa's original argument.
You might think that it would be a huge moral triumph to create a society of millions of actually conscious, happy beings inside one's computer, who think they are living, peacefully and comfortably, in the base level of reality -- Eden, but better! Divinity done right!
On the other hand, there might be something creepy and problematic about playing God in that way. Arguably, such creatures should be given self-knowledge, autonomy, and control over their own world -- but then we might end up, again, with evil, or even with an entity both intellectually superior to us and hostile.
[For Scott's and my first go-round on these issues, see here.]
As I write near the end of that paper:
The tomato is stable. My visual experience as I look at the tomato shifts with each saccade, each blink, each observation of a blemish, each alteration of attention, with the adaptation of my eyes to lighting and color. My thoughts, my images, my itches, my pains – all bound away as I think about them, or remain only as interrupted, theatrical versions of themselves. Nor can I hold them still even as artificial specimens – as I reflect on one aspect of the experience, it alters and grows, or it crumbles. The unattended aspects undergo their own changes too. If outward things were so evasive, they'd also mystify and mislead.

Last Saturday, I defended this view for three hours before commentator Carlotta Pavese and a number of other New York philosophers (including Ned Block, Paul Boghossian, David Chalmers, Paul Horwich, Chris Peacocke, Jim Pryor).
One question -- raised first, I think, by Paul B. then later by Jim -- was this: Don't I know that I'm having a visual experience as of seeing a hat at least as well as I know that there is in fact a real hat in front of me? I could be wrong about the hat without being wrong about the visual experience as of seeing a hat, but to be wrong about having a visual experience as of seeing a hat, well, maybe it's not impossible but at least it's a weird, unusual case.
I was a bit rustier in answering this question than I would have been in 2009 -- partly, I suspect, because I had never articulated in writing my standard response to that concern. So let me do so now.
First, we need to know what kind of mental state this is about which I supposedly have excellent knowledge. Here's one possibility: To have "a visual experience as of seeing a hat" is to have a visual experience of the type that is normally caused by seeing hats. In other words, when I judge that I'm having this experience, I'm making a causal generalization about the normal origins of experiences of the present type. But it seems doubtful that I know better what types of visual experiences normally arise in the course of seeing hats than I know that there is a hat in front of me. In any case, such causal generalizations are not the sort of thing defenders of introspection usually have in mind.
Here's another interpretative possibility: In judging that I am having a visual experience as of seeing a hat, I am reporting an inclination to reach a certain judgment. I am reporting an inclination to judge that there is a hat in front of me, and I am reporting that that inclination is somehow caused by or grounded in my current visual experience. On this reading of the claim, what I am accurate about is that I have a certain attitude -- an inclination to judge. But attitudes are not conscious experiences. Inclinations to judge are one thing; visual experiences another. I might be very accurate in my judgment that I am inclined to reach a certain judgment about the world (and on such-and-such grounds), but that's not knowledge of my stream of sensory experience.
(In a couple of other essays, I discuss self-knowledge of attitudes. I argue that our self-knowledge of our judgments is pretty good when the matter is of little importance to our self-conception and when the tendency to verbally espouse the content of the judgment is central to the dispositional syndrome constitutive of reaching that judgment. Excellent knowledge of such partially self-fulfilling attitudes is quite a different matter from excellent knowledge of the stream of experience.)
So how about this interpretative possibility? To say I know that I am having a visual experience as of seeing a hat is to say that I am having a visual experience with such-and-such specific phenomenal features, e.g., this-shade-here, this-shape-here, this-piece-of-representational-content-there, and maybe this-holistic-character. If we're careful to read such judgments purely as judgments about features of my current stream of visual experience, I see no reason to think we would be highly trustworthy in them. Such structural features of the stream of experience are exactly the kinds of things about which I've argued we are apt to err: what it's like to see a tilted coin at an oblique angle, how fast color and shape experience get hazy toward the periphery, how stable or shifty the phenomenology of shape and color is, how richly penetrated visual experience is with cognitive content. These are topics of confusion and dispute in philosophy and consciousness studies, not matters we introspect with near infallibility.
Part of the issue here, I think, is that certain mental states have both a phenomenal face and a functional face. When I judge that I see something or that I'm hungry or that I want something, I am typically reaching a judgment that is in part about my stream of conscious experience and in part about my physiology, dispositions, and causal position in the world. If we think carefully about even medium-sized features of the phenomenological face of such hybrid mental states -- about what, exactly, it's like to experience hunger (how far does it spread in subjective bodily space, how much is it like a twisting or pressure or pain or...?) or about what, exactly, it's like to see a hat (how stable is that experience, how rich with detail, how do I experience the hat's non-canonical perspective...?), we quickly reach the limits of introspective reliability. My judgments about even medium-sized features of my visual experience are dubious. But I can easily answer a whole range of questions about comparably medium-sized features of the hat itself (its braiding, where the stitches are, its size and stability and solidity).
Update, November 25 [revised 5:24 pm]:
Paul Boghossian writes:
I haven't had a chance to think carefully about what you say, but I wanted to clarify the point I was making, which wasn't quite what you say on the blog, that it would be a weird, unusual case in which one misdescribes one's own perceptual states. I was imagining that one was given the task of carefully describing the surface of a table and giving a very attentive description full of detail of the whorls here and the color there. One then discovers that all along one has just been a brain in a vat being fed experiences. At that point, it would be very natural to conclude that one had been merely describing the visual images that one had enjoyed as opposed to any table. Since one can so easily retreat from saying that one had been describing a table to saying that one had been describing one's mental image of a table, it's hard to see how one could be much better at the former than at the latter.

I do feel some sympathy for the thought that you get something right in such a case -- but what exactly you get right, and how dependably... well, that's the tricky issue!
Roger White then made the same point without using the brain in a vat scenario.
Neither Bostrom nor Chalmers is inclined to draw skeptical conclusions from this possibility. If we are living in a giant sim, they suggest, that sim is simply our reality: All the people we know still exist (they're sims just like us) and the objects we interact with still exist (fundamentally constructed from computational resources, but still predictable, manipulable, interactive with other such objects, and experienced by us in all their sensory glory). However, it seems quite possible to me that if we are living in a sim, it might well be a small sim -- one run by a child, say, for entertainment. We might live for three hours' time on a game clock, existing mainly as citizens who will give entertaining reactions when, to their surprise, Godzilla tromps through. Or it might be just me and my computer and my room, in an hour-long sim run by a scientist interested in human cognition about philosophical problems.
Bostrom has responded that to really evaluate the case we need a better sense of what are more likely vs. less likely simulation scenarios. One large-sim-friendly thought is this: Maybe the most efficient way to create simulated people is to evolve up a large scale society over a long period of (sim-clock) time. Another is this: Maybe we should expect a technologically advanced society capable of running sims to have enforceable ethical standards against running small sims that contain actually conscious people.
However, I don't see compelling reason to accept such (relatively) comfortable thoughts. Consider the possibility I will call the Many-Branching Sim.
Suppose it turns out that the best way to create actually conscious simulated people is to run a whole simulated universe forward billions of years (sim-years on the simulation clock) from a Big Bang, or millions of years on an Earth plus stars, or thousands of years from the formation of human agriculture -- a large-sim scenario. And suppose that some group of researchers actually does this. Consider, now, a second group of researchers who also want to host a society of simulated people. It seems they have a choice: Either they could run a new sim from the ground up, starting at the beginning and clocking forward, or they could take a snapshot of one stage of the first group's sim and make a copy. Which would be more efficient? It's not clear: It depends on how easy it is to take and store a snapshot and implement it on another device. But on the face of it, I don't see why we ought to suppose that copying would take more time or more computational resources than evolving a sim from the ground up.
Consider the 21st-century game Sim City. If you want a bustling metropolis, you can either grow one from scratch or use one of the many copies created by the programmers or users. Or you could grow one from scratch and then save stages of it on your computer, shutting the thing down when things don't go the way you like and starting again from a save point; or you could make copied variants of the same city that grow in different directions.
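For concreteness, here's a toy sketch of the copy-versus-evolve comparison. The classes and numbers are entirely invented for illustration (no real simulation API is assumed); the only point is that branching costs roughly the size of the saved state, while evolving from scratch costs roughly the number of sim-years run:

```python
import copy

class World:
    """A toy simulated world. Purely illustrative."""
    def __init__(self):
        self.clock = 0                      # sim-years elapsed
        self.state = {"population": 0}      # stand-in for the full world-state

    def step(self):
        # One sim-year of (expensive) forward evolution.
        self.clock += 1
        self.state["population"] += 1000

def evolve_from_scratch(years):
    """Grow a root sim forward from its Big Bang: cost scales with years."""
    world = World()
    for _ in range(years):
        world.step()
    return world

def branch(root):
    """Copy a snapshot of an existing sim: cost scales with the size of the
    saved state, not with the sim-years it took to produce that state."""
    return copy.deepcopy(root)

root = evolve_from_scratch(13800)                 # one long, expensive run
branches = [branch(root) for _ in range(1000)]    # many cheap save-point copies
for b in branches:
    b.step()                                      # each branch diverges from the save point
```

On this toy accounting, once a single root sim exists, each additional branch is cheap -- which is what makes the Many-Branching scenario worth taking seriously.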
The Many-Branching Sim scenario is the possibility that there is a root sim that is large and stable, starting from some point in the deep past, and then this root sim was copied into one or more branch sims that start from a save point. If there are many branch sims, it might be that I am in one of them, rather than in a root sim or a non-branching sim. Maybe one company made the root sim for Earth, took a snapshot in November 2013 on the sim clock, then sold thousands or millions of copies to researchers and computer gamers who now run short-term branch sims for whatever purposes they might have. In such a scenario, the future of the branch sim in which I am living might be rather short -- a few minutes or hours or years. The past might be conceptualized either as short or as long, depending on whether the past in the root sim counts as "this world's" past.
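The self-location arithmetic here is straightforward (toy numbers, mine not Bostrom's). Suppose one root sim spawns N branch sims, each containing a qualitatively identical continuation of me at the save point. Counting such observers,

$$
P(\text{I am in a branch}) \approx \frac{N}{N+1},
$$

which is already about 99.9% for N = 1000 copies sold.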
Issues of personal identity arise. If the snapshot of the root sim was taken at root sim clock time November 1, 2013, then the root sim contains an "Eric Schwitzgebel" who was 45 years old at the time. The branch sims would also contain many other "Eric Schwitzgebels" developing forward from that point, of which I would be one. How should I think of my relationship to those other Erics? Should I take comfort in the fact that some of them will continue on to full and interesting lives (perhaps of very different sorts) even if most of them, including probably this particular instantiation of me, now in a hotel in New York City, will soon be stopped and deleted? Or to the extent I am interested in my own future rather than merely the future of people similar to me, should I be concerned primarily about what is happening in this particular branch sim? As Godzilla steps down on me, shall I try to take comfort in the possibility that the kid running the show will delete this copy of the sim after he has enjoyed viewing the rampage, then restart from a save point with New York intact? Or would deleting this branch be the destruction of my whole world?