

Date: Monday, 14 Apr 2014 12:10
Weird Tales, one of the best and oldest horror and dark fantasy magazines, has just launched a new series of ultra-short flash fiction (under 500 words), Flashes of Weirdness. To inaugurate the series, they've chosen a piece of mine -- which is now my second publication in speculative fiction.

My philosophical aim in the story -- What Kelp Remembers -- is to suggest that on a creationist or simulationist cosmology, the world might serve a very different purpose than we're normally inclined to think.

At some point, I want to think more about the merit of science fiction as a means of exploring metaphysical and cosmological issues of this sort. I suspect that fiction has some advantages over standard expository prose as a philosophical tool in this area, but I'm not satisfied that I really understand why.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "science fiction, speculative fiction, tr..."
Date: Sunday, 13 Apr 2014 11:18
We might soon be creating monsters, so we'd better figure out our duties to them.

Robert Nozick's Utility Monster derives 100 units of pleasure from each cookie she eats. Normal people derive only 1 unit of pleasure. So if our aim is to maximize world happiness, we should give all our cookies to the monster. Lots of people would lose out on a little bit of pleasure, but the Utility Monster would be really happy!

Of course this argument generalizes beyond cookies. If there were a being in the world vastly more capable of pleasure and pain than are ordinary human beings, then on simple versions of happiness-maximizing utilitarian ethics, the rest of us ought to immiserate ourselves to push it up to superhuman pinnacles of joy.

Now, if artificial consciousness is possible, then maybe it will turn out that we can create Utility Monsters on our hard drives. (Maybe this is what happens in R. Scott Bakker's and my story Reinstalling Eden.)

Two questions arise:

(1.) Should we work to create artificially conscious beings who are capable of superhuman heights of pleasure? On the face of it, it seems like a good thing to do, to bring beings capable of great pleasure into the world! On the other hand, maybe we have no general obligation to bring happy beings into the world. (Compare: Many people think we have no obligation to increase the number of human children even if we think they would be happy.)

(2.) If we do create such beings, ought we immiserate ourselves for their happiness? It seems unintuitive to say that we should, but I can also imagine a perspective on which it makes sense to sacrifice ourselves for superhumanly great descendants.

The Utility Monster can be crafted in different ways, possibly generating different answers to (1) and (2). For example, maybe simple sensory pleasure (a superhumanly orgasmic delight in cookies) wouldn't be enough to compel either (1) creation or (2), once created, sacrifice. But maybe "higher" pleasures, such as great aesthetic appreciation or great intellectual insight, would. Indeed, if artificial intelligence plays out right, then maybe whatever it is about us that we think gives our lives value, we can artificially duplicate it a hundredfold inside machines of the right type (maybe biological machines, if digital computers won't do).

You might think, as Nozick did, and as Kantian critics of utilitarianism sometimes do, that we can dodge utility monster concerns by focusing on the rights of individuals. Even if the Monster would get 100 times as much pleasure from my cookie as I would, it's my cookie; I have a right to it and no moral obligation to give it to her.

But similar issues arise if we allow Fission/Fusion Monsters. If we say "one conscious intelligence, one vote", then what happens when I create a hundred million conscious intelligences in my computer? If we say "one unemployed consciousness, one cookie from the dole", then what happens if my Fission/Fusion Monster splits into a hundred million separate individual unemployed conscious beings, collects its cookies, and then in the next tax year merges back into a single cookie-rich being? A Fission/Fusion Monster could divide at will into many separate individuals, each with a separate claim to rights and privileges as an individual; and then whenever convenient, if the group so chose (or alternatively via some external trigger), fuse back together into a single massively complex individual with first-person memories from all its predecessors.

(See also: Our Possible Imminent Divinity.)

[image source]

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "ethics, personal identity, speculative f..."
Date: Friday, 11 Apr 2014 22:38
I think the most recent meta-analysis of the relationship between religiosity and crime is still Baier and Wright 2001. I'm reviewing it again in preparation for a talk I'm giving Sunday on what happens when there's a non-effect in psychology but researchers are disposed to think there must be an effect.

I was struck by this graph from Baier and Wright:

Note that the x-axis scale is negative, showing the predicted negative relationship between religiosity and crime. (Religiosity is typically measured either by self-reported religious belief or by self-reported religious behavior such as attendance at weekly services.)

The authors comment:

The mean reported effect size was r = -.12 (SD = .09), and the median was r = -.11. About two-thirds of the effects fell between -.05 and -.20, and, significantly, none of them was positive. (p. 14, emphasis added).

Hm, I think. No positive tail?! I'm not sure that I would interpret that fact the same way Baier and Wright seem to.

Then I think: Hey, let's try some Monte Carlos!

Baier and Wright report 79 effect sizes from previous studies, graphed above. Although the distribution doesn't look quite normal, I'll start my Monte Carlos by assuming normality, using B&W's reported mean and SD. Then I'll generate 10,000 sets of 79 random values (representing hypothetical effect sizes), each set normally distributed with that mean and standard deviation (SD).
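For anyone who wants to tinker along, here's a minimal sketch of that first simulation in Python -- not the code I actually ran; the numpy calls, the seed, and the names are illustrative choices, with the mean and SD taken from B&W's reported values:

```python
import numpy as np

rng = np.random.default_rng(0)   # illustrative seed
n_sims, n_studies = 10_000, 79
mean, sd = -0.12, 0.09           # B&W's reported mean and SD

# Draw 10,000 simulated sets of 79 hypothetical effect sizes.
effects = rng.normal(mean, sd, size=(n_sims, n_studies))

# Count how many simulated distributions contain no positive effect size.
all_nonpositive = int((effects <= 0).all(axis=1).sum())
print(f"{all_nonpositive} of {n_sims} simulations "
      f"({100 * all_nonpositive / n_sims:.2f}%) lack a positive tail")
```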

Of the 10,000 simulated distributions of 79 effect sizes with that mean and SD, only 9 distributions (0.09%) are entirely zero to negative. So I think we can conclude that it's not chance that the positive tail is missing. Options are: (a.) The population mean is more strongly negative than B&W report, or the SD is lower, (b.) The distribution isn't normal, (c.) The positive effect-size studies aren't being reported.

My money is on (c). But let's try (a). How strongly negative would the mean have to be (holding SD fixed) for at least 20% of the Monte Carlos to show no positive values? In my Monte Carlos it happens between mean -.18 and -.19. But the graph above is clearly not a graph of a sample from a population with that mean (which would be near the top of the fourth bar from left). This is confirmable by a t-test on the distribution of effect sizes reported in their study (one-sample vs. -.185, p < .001). Similar considerations show that it can't be an SD issue.
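The sweep over candidate means can be sketched the same way (again illustrative Python, not my original code):

```python
import numpy as np

rng = np.random.default_rng(0)   # illustrative seed

# Step the mean toward more negative values until at least 20% of the
# simulated distributions contain no positive effect size.
for mean in np.arange(-0.12, -0.25, -0.01):
    effects = rng.normal(mean, 0.09, size=(10_000, 79))
    frac = (effects <= 0).all(axis=1).mean()
    if frac >= 0.20:
        print(f"at mean {mean:.2f}, {frac:.1%} of simulations lack a positive tail")
        break
```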

How about (b)? The eyeball distribution looks a bit skewed, anyway -- maybe that's the problem? The graph can be easily unskewed simply by taking the square root of the absolute values of the effect sizes. The resulting distribution is very close to normal (both by eyeball and by Anderson-Darling). This delivers the desired conclusion: Only 35% of my Monte Carlos end up with even a single positive-tail study. But it delivers this result at the cost of making sense. Taking the square root magnifies the differences between very small effect sizes and diminishes the differences between large effect sizes, so that the gap between a study with effect size r = .00 and a study with effect size r = -.02 becomes larger in magnitude than the gap between effect size r = -.30 and effect size r = -.47. (All these r's are actually present in the B&W dataset.) The two r = .00 studies in the B&W dataset become outliers far from the three r = -.02 studies in their dataset, and it's this artificial inflation of that immaterial difference that explains the seeming Monte Carlo confirmation after the square-root "correction".
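The inflation is easy to check numerically (the r values are the ones quoted above from the B&W dataset):

```python
from math import sqrt

# After the square-root transform of |r|, the gap between r = .00 and
# r = -.02 exceeds the gap between r = -.30 and r = -.47.
small_gap = sqrt(0.02) - sqrt(0.00)   # about 0.141
large_gap = sqrt(0.47) - sqrt(0.30)   # about 0.138
print(small_gap > large_gap)          # True
```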

So the best explanation would seem to be (c): We're not seeing the missing tail because, at least as of 2001, the research that would be expected, even if just by chance, to show even a non-significant positive relationship between religiosity and crime simply isn't published.

If researchers also show a systematic bias toward publishing their research that shows the largest negative relationship between religiosity and crime, we can even get something like Baier and Wright's distribution from a true mean effect size of zero.

Here's the way I did it: I assumed that the true mean effect size of religiosity on crime is 0.0 and that the SD of the effect sizes across studies is 0.12. I assumed 100 researchers, 25% of whom ran only one independent analysis, 25% of whom ran 2 analyses, 25% of whom ran 4, and 25% of whom ran 8. I assumed that each researcher published only their "best" result (i.e., the greatest negative relationship), and only if that trend was non-positive. I then ran 10,000 Monte Carlos. The average number of studies published was 80, the average published study's effect size was r = -.12, and the average SD of the published effect sizes was .08.
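In sketch form (illustrative Python once more, not the code I used; the seed and the plain-loop structure are my choices, so expect it to take a little while to run):

```python
import numpy as np

rng = np.random.default_rng(0)   # illustrative seed
counts, means, sds = [], [], []

for _ in range(10_000):
    published = []
    # 100 researchers: 25 each running 1, 2, 4, or 8 independent analyses.
    for n_analyses in [1] * 25 + [2] * 25 + [4] * 25 + [8] * 25:
        results = rng.normal(0.0, 0.12, size=n_analyses)
        best = results.min()     # the "best" (most negative) result
        if best <= 0:            # publish only if the trend is non-positive
            published.append(best)
    counts.append(len(published))
    means.append(np.mean(published))
    sds.append(np.std(published, ddof=1))

# Should land near the values reported above: ~80 studies, r ~ -.12, SD ~ .08.
print(np.mean(counts), np.mean(means), np.mean(sds))
```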

And it wasn't too hard to find a graph like this:

Pretty similar except for Baier & Wright's two outlier studies.

I don't believe that this analysis shows that religion and crime are unrelated. I suspect they are related, if in no other way than by means of uncontrolled confounds. But I do think this analysis suggests that a non-effect plus a substantial positivity bias in publication could result in a pattern of reported effects that looks a lot like the pattern that is actually reported.

This is, of course, a file-drawer effect, and perhaps it could be corrected by a decent file-drawer analysis. But: Baier and Wright don't attempt such an analysis. And maybe more importantly: The typical Rosenthal-style file-drawer analysis assumes that the average unpublished result has an effect size of zero, whereas the effect above involves removing wrong-sign studies disproportionately often, and so couldn't be fully corrected by such an analysis.
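For reference, the standard Rosenthal fail-safe number -- a sketch, on my understanding of the method -- estimates how many file-drawer studies with an average Z of zero would be needed to drag a Stouffer-combined result down to non-significance:

\[ N_{\text{fs}} = \frac{\left( \sum_{i=1}^{k} Z_i \right)^2}{z_\alpha^2} - k, \qquad z_\alpha = 1.645 \ \text{(one-tailed .05)} \]

Because the formula builds in the assumption that the unpublished studies average to zero effect, it cannot register the selective suppression of wrong-sign studies hypothesized above, and so would understate the problem.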

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "moral psychology, psychological methods"
Date: Thursday, 10 Apr 2014 10:33
Interesting philosophy festival coming up in late May, in western England. Speakers are drawn from a wide range of fields in addition to philosophy, both in the sciences and arts. Among the philosophy speakers are Thomas Pogge, Huw Price, John Heil, Simon Blackburn, Angie Hobbs, Ted Honderich, Margaret Boden, Mark Rowlands, Jennifer Hornsby, Nancy Cartwright, Barry C. Smith, James Ladyman, Daniel Stoljar, Bernard-Henri Levy, Hubert Dreyfus, and Mary Midgley. Lots of other super-cool folks too: Stephen King, Roger Penrose, Cory Doctorow....

Wish I could be there!

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "announcements"
Date: Monday, 07 Apr 2014 18:18
Tania Lombrozo's newest post at NPR reminded me of a phenomenon I've often noticed: After going away on a trip for several days, when I return home it seems to me that my children have grown enormously over those few days!

It's not that they've actually grown, of course. My hypothesis is this: During my time away, my memory of my children grows a bit vaguer. Whereas my memory of them when I come home tonight might be an average of their appearance over the last few days, my memory when I come home after a week away might be an average of their appearance over a longer span of time -- maybe a month or two. Then when I return, they seem to have done a month's worth of growing in just that one week. The effect has been most striking during the periods my children have grown fastest (infancy to early childhood, and then my son's incredible middle-school growth spurt).

I'm not sure I'd test this hypothesis by drawing lines on the wall, as the researchers did in the article Lombrozo discusses. I suspect that my memory of my children's height is much more accurate than can be measured by wall markings -- e.g., that I'd easily notice an inch of growth, even if I might be off by several inches if asked to estimate their heights on a blank wall. A more valid measure, if it can be done right, might be to artificially age a picture by tweaking it slightly toward or away from the kindchenschema (the characteristic infantile facial features that slowly fade as we age).

Author: "Eric Schwitzgebel (noreply@blogger.com)"
Date: Friday, 04 Apr 2014 22:22
What is introspection? Nothing! Or rather, almost everything.

A long philosophical tradition, going back at least to Locke, has held that there is a distinctive faculty by means of which we know our own minds -- or at least our currently ongoing stream of conscious experience, our sensory experience, our imagery, our emotional experience and inner speech. "Reflection" or "inner sense" or introspection is, in this common view, a single type of process, yielding highly reliable (maybe even infallibly certain) knowledge of our own minds.

Critics of this approach to introspection have tended to either:

(a.) radically deny the existence of the human capacity to discover a stream of inner experience (e.g., radical behaviorism);

(b.) attribute our supposedly excellent self-knowledge of experience to some distinctive process other than introspection (e.g., expressivist or transparency approaches, on which "I think that..." is just a dongle added to a judgment about the outside world, no inward attention or scanning required); or

(c.) go pluralist, holding that we have one introspective mechanism to scan our beliefs, another to scan our visual experiences, another to scan our emotional experiences....

But here's another possibility: Introspective judgments arise from a range of processes that is diverse both within-case (i.e., lots of different processes feeding any one judgment) and between-case (i.e., very different sets of processes contributing to the judgment on different occasions), while still arising partly through a relatively direct sensitivity to the conscious experiences that they are judgments about.

Consider an analogy: You're at a science conference or a high school science fair, quickly trying to take in a poster. You have no dedicated faculty of poster-taking-in. Rather, you deploy a variety of cognitive resources: visually appreciating the charts, listening to the presenter's explanation, simultaneously reading pieces of the poster, charitably bringing general knowledge to bear, asking questions and listening to responses both for overt content and for emotional tone.... It needn't be the same set of resources every time (you needn't even use vision: sometimes you can just listen, if you're in the mood or visually impaired). Instead, you flexibly, opportunistically use a diverse range of resources, dedicated to the question of what are the main ideas of this poster, in a way that aims to be relatively directly sensitive to the actual content of the poster.

Introspection, in my view, is like that. If I want to know what my visual experience is right now, or my emotional experience, or my auditory imagery, I engage not one cognitive process that was selected or developed primarily for the purpose of acquiring self-knowledge; rather I engage a diversity of processes that were primarily selected or developed for other purposes. I look outward at the world and think about what, given that world, it would make sense for me to be experiencing right now; but also I am attuned to the possibility that I might not be experiencing that, ready to notice clues pointing a different direction. I change and shape my experience in the very act of thinking about it, often (but not always) in a way that improves the match between my experience and my judgment about it. I have memories (short- and long-term), associations, things that it seems more natural and less natural to say, views sometimes important to my self-image about what types of experience I tend to have, either in general or under certain conditions, emotional reactions that color or guide my response, spontaneous speech impulses that I can inhibit or disinhibit. Etc. And any combination of these processes, and others besides, can swirl together to precipitate a judgment about my ongoing stream of experience.

Now the functional set-up of the mind is such that some processes' outputs are contingent upon the outputs of other processes. Pieces of the mind stay in sync with what is going on in other pieces, keeping a running bead on each other with varying degrees of directness and accuracy. And so introspective judgments will also be causally linked to a wide variety of other cognitive processes, including, normally, both relatively short and relatively circuitous links from the processes that give rise to the conscious experiences that the introspective judgments are judgments about. But these kinds of contingencies imply no distinctive introspective self-scanning faculty; it's just how the mind must work if it is to be a single coherent mind, and it happens preconsciously in systems no one thinks of as introspective, e.g., in the early visual system, as well as farther downstream.

[For further exposition of this view, with detailed examples, see my essay Introspection, What?]

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "introspection, self-knowledge"
Date: Friday, 28 Mar 2014 08:26
Wow. Pete Mandik and Richard Brown on idealist metaphysics, computer simulation, self-knowledge of consciousness, free will and moral realism.... They plunge right to the core.
Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "announcements"
Date: Friday, 21 Mar 2014 10:57
Chutes and Ladders, if you didn't know, isn't just a game of chance. It's a game of virtue. At the bottom of each ladder a virtuous action is depicted, and at the top we see its reward. Above each chute is a vice, and at the bottom its natural punishment. The world of Chutes and Ladders is the world of perfect immanent justice! Virtue always pays and vice is always punished, and always through natural mechanisms rather than by the action of any outside authority, much less divine authority.

Here's a picture of my board at home:

One striking thing: What 21st-century Anglophone philosophers would normally call "prudential" virtues and what 21st-century Anglophone philosophers would normally call "moral" virtues are treated exactly on par, as though they were entirely the same sort of thing.

In square 1, Elmo plants seeds. (Prudential!) Laddering to square 38, he reaps his bouquet. In square 9, Ernie helps Bert carry Bert's books. (Moral!) Laddering to square 31, we see Ernie and Bert enjoying soccer together. In square 64 Bert is running without looking (prudential) and he slips on a banana peel, chuting down to square 60. In square 16, Zoe teasingly hides Elmo's alphabet block from him (moral), and she chutes down to square 6, losing the pleasure of Elmo's company.

It's my first-grade daughter's favorite game right now (though she seems to like it even more when we play it upside down, celebrating vice).

Consider the Boy Scout code: trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, and reverent. Wait, "clean"? Or the seven deadly sins: lust, gluttony, greed, sloth, wrath, envy, and pride. My sense is that, cross-culturally and historically, long-term prudential self-interest and short- and long-term moral duty tend to be lumped together into the category of virtues, not sharply distinguished, all primarily opposed to short-term self-interest.

It's a nice fantasy, the fantasy of mainstream moral educators across history -- that we live in a Chutes-and-Ladders world. And in tales and games you don't even need to do the long-term waiting bit: just ladder right up! I see why my daughter enjoys it. But does it make for a good moral education? Maybe so. One would hope there's wisdom embodied in the Chutes-and-Ladders moral tradition.

One possibility is that it's a bait-and-switch. That's how I'm inclined to read the early Confucian tradition (Mencius, Xunzi, though if so it's below the surface of the texts). The Chutes-and-Ladders world is offered as a kind of hopeful lie, to lure people onto the path of valuing morality as a means to attain long-term self-interest. But once one goes far enough down this path, the lie becomes obvious; simultaneously, though, the means starts to become an end valued for its own sake, even overriding the long-term selfish goals that originally motivated it. We come, eventually, to help Bert with his books even when it chutes us down rather than ladders us up.

After all, we see the same thing with pursuit of money, don't we?

Update 6:38: As my 14-year-old son points out, one other feature of Chutes and Ladders is that there's no free will. It's all chance whether you end up being virtuous. So in that sense, justice is absent (note added March 21: though maybe, while it's chance relative to the player, that chance represents the free will of the pawn).

Update March 21: As several people at New APPS have pointed out, Chutes and Ladders originated in ancient India as Snakes and Ladders. According to this site, the original virtues were faith, reliability, generosity, knowledge, and asceticism; the original vices were disobedience, vanity, vulgarity, theft, lying, drunkenness, debt, rage, greed, pride, murder, and lust. (A lot more snakes than ladders in India than on Sesame Street!)

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "moral development"
Date: Friday, 14 Mar 2014 18:34
I've been arguing that if materialism is true, the United States is probably conscious. My argument is essentially this: Materialists should accept that all kinds of weirdly-formed aliens would be conscious, if they act intelligently enough; and the U.S. is basically a weirdly-formed alien.

One objection is that if an alien is weirdly enough constructed, we should deny that it's conscious, or at least withhold judgment, regardless of how similar it is to us in outward behavior. On this view, consciousness requires not only intelligent behavior but also an internal organization similar to our own.

Now I grant that a certain amount of complex structural organization is necessary if a being is to exhibit sophisticated outward behavior; a hunk of balsa wood won't do it. But it's plausible that a vast array of wildly different structural organizations could give rise to complex human-like behavior -- parallel processing or fast serial processing, carbon or silicon, spatially compact entities or spatially distributed entities, mostly subsystem driven or mostly centrally driven, and at all sorts of time scales. I've explored a few weird examples in previous blog posts: Betelgeusian beeheads, Martian smartspiders, and group minds on Ringworld.

Suppose we grant, then, that there's a vast array of possible -- indeed in a large enough universe probably actual -- beings with behavior of human-like sophistication, emitting complex seemingly communicative structures, seeming to flexibly protect themselves and flexibly exploit resources to enhance their longevity and power, seeming to track their own interior states in complex ways, and seeming to produce long philosophical and psychological treatises about their mental lives, including their streams of conscious experience. Would it be reasonable to think that although we have phenomenal consciousness (qualia, subjective experience, what-it's-like-ness), they don't, if they're not enough like us on the inside?

Consider the Copernican Principle of cosmological method. According to the Copernican Principle, we should tend to assume that we are not in a specially favored position in the universe (such as the exact center). Our position in the universe is mediocre, not privileged, not especially lucky. My central thought for this post is: Denying consciousness to weirdly structured but behaviorally sophisticated aliens would be a violation of the Copernican Principle.

Suppose we thought human biological neurons were necessary for conscious experience and that no being made of silicon or magnets or beer cans and wire, and lacking human-like neurons, could be conscious, regardless of how sophisticated its patterns of outward behavior. Then suppose we met these magnetic aliens and we learned to communicate in each other's languages (or seeming-languages). Perhaps they come to Earth, begin to utter sounds that we naturally interpret as English, interact with us, and -- because they are so delightfully similar in outward behavior -- become our friends, spouses, and business partners. On the un-Copernican view I reject, we human beings could justifiably say: Nyah, nyah, we're conscious and you aren't! We got neurons, you didn't. We're awesomely special in a way you're not! (Fortunately, the magnetic aliens' feelings won't be hurt, since they will have no real feelings -- though they sure might behave as though insulted.) Our functional organization would be importantly different from all other functional organizations of similar sophistication in that it alone would have phenomenal consciousness attached. This would seem to be a violation of mediocrity, a claim of special favor, weird humanocentric parochialism.

Similarly, of course, for distinctions based on parallel vs. fast serial processing or spatially compact vs. spatially distributed processing, or whatever.

Even if we confess substantial doubt, we might be guilty of anti-Copernican bias. Here's a possible argument: I know that creatures with neurons can be conscious because I am one and I know through introspection that I'm conscious; but I don't know that magnetic beings behaviorally indistinguishable from me can be genuinely phenomenally conscious, because I have no direct introspective access to their mentality, and the structural differences are large enough that there's room for considerable doubt in inferring from my own case to theirs. In my more skeptical moods I'm quite tempted by this argument.

But I think the argument is probably un-Copernican. It's tantamount to thinking that we neuron-owners might be specially privileged. Maybe we are at the center of the universe! -- not physically, of course, but consciously. A map of the distribution of mentality in the universe might put dots for behavioral sophistication all over the place, but the big red dot for true phenomenal consciousness might go only on us!

Now the Copernican Principle isn't inviolable. It could have turned out that we were at the geometric center of the universe. So maybe it could turn out Earth indeed is just the lucky spot where sophisticated behavioral responsiveness, self-monitoring, and linguistic-seeming communication is grounded in consciousness-supporting neurons rather than mere zombie-magnets (or zombie-hydraulics, or zombie-silicon, or whatever). But entertaining that view other than as a radically skeptical possibility is a parochialism that I doubt would justifiably survive real contact with an alien species -- or even a good, long immersion in well-constructed science fiction thought experiments.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "cosmology, metaphysics, science fiction,..."
Date: Thursday, 06 Mar 2014 17:47
You might think that empirically grounded radical skepticism is self-defeating.

Consider dream skepticism. Suppose I have, or think I have, empirical grounds for believing that dreams and waking life are difficult to tell apart. On those grounds, I think that my experience now, which I'd taken to be waking experience, might actually be dream experience. But if I might now be dreaming, then my current opinions (or seeming opinions) about the past all become suspect. I no longer have good grounds for thinking that dreams and waking life are difficult to tell apart. Boom!

(That was supposed to be the sound of a skeptical argument imploding.)

Stephen Maitzen has recently been advancing an argument of roughly that sort: that the skeptic "must attribute to us justified empirical beliefs of the very kind the argument must deny us" (p. 30). Similarly, G.E. Moore, in "Certainty", argues that dream skeptics assume that they know that dreams have occurred, and that if one is dreaming one does not know that dreams have occurred. (Boom.)

One problem with this self-defeat objection to dream skepticism is that it assumes that the skeptic is committed to saying she is justified in thinking (or knows) that this might well be a dream. The most radical skeptics (e.g., Sextus) might not be committed to this.

A more moderate skeptic (like my 1% skeptic) can't escape the argument that way, but another way is available. And that is to concede that whatever degree of credence she was initially inclined to assign to the possibility that she is dreaming, on the basis of her assumed empirical evidence and memories of the past, she probably should tweak that credence somewhat to take into account the fact that she can no longer be highly confident about the provenance of that seeming empirical evidence. But unless she somehow discovers new grounds for thinking that it's impossible or hugely unlikely that she is dreaming, this is only partial undercutting -- not grounds for 100% confidence that she is not dreaming. She can still maintain reasonable doubt: Previously she was very confident that she knew that dreams and waking life were hard to tell apart; now she could see going either way on that question.

Consider this case as an analogy. I have a very vivid and realistic seeming-memory of having been told ten minutes ago, by a powerful demon, that in five minutes this demon would flip a coin. If it comes up heads, she will give me a 50% mix of true and false memories about the half hour before and after the coin flip, including about that very conversation; if tails, she won't tamper with my memory. Then she'll walk away and leave me in my office.

Should I trust my seeming-memories of the past half hour, including of that conversation? If I trust those memories, that gives me reason not to trust them. If I don't trust those memories, well that seems hardly less skeptical. Either way, I'm left with substantial doubt. The doubt undercuts its own grounds to some extent, yes, but it doesn't seem epistemically justified to react to that self-undercutting by purging all doubt and resting in perfect confidence that my memories of that conversation are entirely veridical.

This is the heart of the empirical skeptic's dilemma: Either I confidently take my experience at face value or I don't. If I don't confidently take my experience at face value, I am already a skeptic. If I do confidently take my experience at face value, then I discover empirical reasons not to take it confidently at face value after all. Those reasons partly undercut themselves, but that partial undercutting does not then justify shifting back to high confidence as though there were no such grounds for doubt.

(image source)

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "dreams, epistemology, skepticism"
Date: Thursday, 27 Feb 2014 20:09
Recently, I've been arguing (e.g., here and here) that we should be skeptical about any general theory of consciousness, whether philosophical or scientific. Here's one way of putting the case.

When we advance a general theory of consciousness, we must do so on some combination of empirical grounds (especially the study of actual conscious systems here on Earth) and armchair reflection (especially thought experiments).

Our empirical grounds are very limited: We have only seen conscious systems as they exist on Earth. To draw, on these grounds, universal conclusions about how any conscious system must be structured would be a reckless leap, if our theories are really supposed to be driven by empirical tests on conscious animals -- tests that could have come out either way. Who knows how those crucial theory-driving experiments would have come out on very differently constructed beings from the Andromeda Galaxy?

A truly universal theory of consciousness seems more likely to succeed if it draws broadly from a range of hypothetical cases, abstracting away from empirical details of implementation on Earth. So we must sit in our armchairs. However, armchair reflection about the consciousness of hypothetical beings has two huge shortcomings:

  1. Experts in the field reach very different conclusions when asked to reflect on what sorts of hypothetical beings would be conscious (all the way from panpsychism to views that require highly sophisticated cognitive abilities).
  2. Our judgments about such cases must be grounded in some sort of prior knowledge, such as our experience of beings here on Earth and our developmentally and socially and evolutionarily favored beliefs. And there seems little reason to trust such judgments outside the run of normal cases, for example, about the consciousness or not of large group entities under various conditions.

If you are moved by these concerns, you might think that the appropriate response is to restrict our theory to consciousness as it appears on Earth. But even just thinking about consciousness on Earth drops us into a huge methodological dilemma. If we treat introspective reportability as something close to a necessary condition for consciousness, then we end up with a very sparse view of the distribution of consciousness on Earth. And maybe that's right! But it also seems reasonable to think that consciousness might be possible without introspective reportability, e.g., in dogs and babies. And then it becomes extremely unclear how we determine whether it is present without begging big theoretical questions. How could we possibly determine whether an ant is conscious without begging the question against people with very different views than our own?

Could we forget about non-human animals and babies and restrict our (increasingly less general) theory of consciousness just to adult humans? Even here, I incline toward pessimism, at least for the medium-term future.

One reason is this: I see no near-term way to resolve the question of whether consciousness abundantly outruns attention. I think I can imagine two very different possibilities here. One possibility is that I have constant tactile experience of my feet in my shoes, constant auditory experience of the hum of the refrigerator, etc., but when I'm not attending to such matters, that experience drops out of memory so quickly and is so lightly processed that it is unreportable. Another possibility is that I usually have no tactile experience whatsoever of my feet in my shoes or auditory experience of the hum of the fridge unless these things capture my attention for some reason. These possibilities seem substantively distinct, and it's easy to see how a proponent of one can create a methodological error theory to explain away the judgments of a proponent of the other.

Now maybe there's a way around these problems. Scientists have often found ingenious ways to embarrass earlier naysayers! But still, there's such a huge spread between the best neuroscientific approaches (e.g., Tononi and Dehaene) and such a huge spread between the best philosophical approaches (e.g., Chalmers and Dennett), that it's hard for me to envision a well-justified consensus emerging in my philosophical lifetime.

[HT Scott Bakker, who has been pushing me on these issues.][image source]

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "introspection, metaphilosophy, metaphysi..."
Date: Thursday, 27 Feb 2014 12:53
(by Amie L. Thomasson)

There once was a fictional character. And her name was May. May was four years old; or, at least that's what the story said. In fact, she had just been written quite recently "and so in that sense, I am a newborn fictional character" she liked to say. Especially when she was looking for a good excuse to curl up on the rug like a teeny tiny baby, suck her thumb, and bat at things in a homemade play gym.

Well May was a lovely little fictional character, bouncy, fun, very clever, and with an uncanny ability to eat olives. Put as many as you like in front of her, and they'd always be gone by the next page.

But still May was not happy, for although she appreciated the olives, she was lonely. "Why don't I have a mommy or daddy to take care of me?" she asked.

"But you do have a mother. I created you from words and pictures" her author said.

"That's NOT what I mean" the little girl sulked, and she slumped down right in the corner of the page, folding one edge over to hide herself.

"Oh alright then," the author said. And she made her a mother. A mother who loved her more than anything in the world, who taught her to paint and to laugh at herself, who sat on the floor for hours making zoos and block houses and earthquakes to destroy them. And she made her a father. A father with constant love and gentle patience, who taught her to bake banana bread and to play piano and to name every bird in the garden. And they were happy together.

They came to live in a book. A real hardcover book, with full color pictures and shiny pages. And the book came to be on the shelves of a little girl—a four year old girl, as it happens—by the name of Natalie. The various dragons and bears who lived in tatty second hand paperbacks on lower shelves really quite envied them.

Till one day, just before bedtime, Natalie spotted the book, sticking out slightly between a board book about ducklings and something involving a circus. "What's this book?" she asked. She had never seen it before. "I want to read it now! Can't we read it pleeeaaase?" she asked. "Well, it's a bit late," her mommy said, "but I guess we could read just this one." And they all plumped down on Natalie's fluffy red comforter, and her daddy began to read.

As they closed the book for the night, Natalie's mommy said, "well I'm glad little May got some parents and isn't lonely anymore." "But mooommmy," Natalie protested, "she's not REAL!" Oh yeah, admitted mommy, closing the book gently and turning out the light.

"I'm glad I'm real and not just in a book." said Natalie quietly as she curled up with her blanket.

"So am I, sweetheart," her mommy agreed as she kissed her soft cheek goodnight.

Well, once the book was closed, little May began to cry. "What does she MEAN I'm not real?" asked May, who, like most children, had forgotten those muddled early days after she was first made, those days when she was lonely. Well, her mother explained, we are just characters in a book. We do what our author writes, there’s no more to us than she's given us, and we stay in the world of these pages.

But I want to get out of here! May protested. I want to be really REAL. I want to have toes (for these had never been seen in the pages). I want to know what happened when I was just two (for this had never been spoken of). And I want to go wherever I want to go, not where some author puts me! She railed. And she wept and she struggled and she stewed. Her mother cried a bit too, to see her daughter realizing these sad truths, but her daddy just held her hand.

You know, he said, since we're not real, we'll never get sick (see: no sickness is ever mentioned). We'll never bump too hard off a slide. Or get bitten by mosquitoes.

And will no one ever steal the olives out of my lunch box? May wanted to know.

Nope, no one will ever steal the olives out of your lunchbox. Or your vanilla cookies either. And best of all, none of us will ever die—we can stay here together for always, loving each other in this book.

I'm glad we're not real, May decided. And she curled up in a corner of the page, sucking her thumb quietly, and went to sleep.

-----------------------------------------

Extract from "I'm glad I'm not real" by Amie L. Thomasson, from The Philosophy Shop: Ideas, activities and questions to get people, young and old, thinking philosophically. Edited by Peter Worley (c)Peter Worley 2012. ISBN 9781781350492

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "metaphysics, speculative fiction"
Date: Friday, 14 Feb 2014 16:16
A "freak observer" or "Boltzmann brain" is a conscious being who did not arise in the normal way on a large, stable planet, but who instead congealed by freak chance out of chaos, due to a low-probability quantum or thermodynamic fluctuation -- a conscious being with rich seemingly sensory experience, rich seeming-memories, and capable of sophisticated thoughts or seeming-thoughts about itself and its position in the universe. By hypothesis, such a being is massively deluded about its past. And since random fluctuations are much likelier to create a relatively small system than a relatively large system, and since a relatively small system (such as a bare brain) amid chaos is doomed to a short existence, most freak observers will swiftly perish.

If certain cosmological theories are true, then almost all conscious systems are freak observers of this sort. Here's one such theory: There is exactly one universe which began with a unique Bang, which contains a finite number of ordinary non-freak observers, and which will eventually become thin chaos, enduring infinitely thereafter in a disorganized state. In any spacetime region there is a minuscule but finite chance of the spontaneous freak formation of any finite organized system, with smaller and less organized systems vastly more likely than larger and more organized systems. Given infinite time, the number of spontaneously formed freak observers will eventually vastly outnumber the normal observers. Whatever specific experiences and evidence I take myself now to have, according to this theory, to any finite degree of precision, there will be an infinite number of randomly generated Eric Schwitzgebel clones who have the same experiences and apparent evidence.

Can I prove that I am not a freak observer by counting "1, 2, 3, still here"? Seemingly no, for two reasons: (1.) By the time I reach "still here" I am relying on my memory of the "1, 2, 3", and the theory says that there will be an infinite number of freak observers with exactly that false memory. (2.) Even if I assume knowledge of my continued existence for three seconds, there will be an infinite number of somewhat larger freak observers who congealed simultaneously with a large enough hunk of environment to exist for three seconds, doing that apparent count. If I am such a one, I will very likely perish soon, but it is not guaranteed that I will perish, and if I don't perish and thus conclude that I am not a freak, I have ignored the overwhelming base rate of freaks to normal observers.

Suppose that given the physical evidence such a cosmology seems plausible, or some other cosmology in which freak observers vastly outnumber normal observers. Should I conclude I am probably a freak observer? It would be a strange conclusion to draw!

One interesting argument against this conclusion is the cognitive instability argument (Carroll 2010; Davenport & Olum 2010; Crawford 2013): Suppose that my grounds for believing that I am a freak observer are Physical Theory X, which I accept only conditionally upon believing that I have good empirical evidence for Physical Theory X. If I am a freak observer, then, contrary to the initial assumption, I do not have good empirical evidence for Physical Theory X. I have not, for example, despite my contrary impression, actually read any articles about X. If I seem to have good empirical evidence for Physical Theory X, I know already that that evidence is almost certainly misleading or wrongly interpreted -- either I do have the properly-caused body of evidence that I think I have, that is, I am not a freak, and that evidence is misleadingly pointing me to the wrong conclusion about my situation; or I am a freak and I don't have such a body of properly-caused evidence at all.

For this reason, I think it would be irrational to accept a cosmological theory that implies that almost all observers are freak observers and then conclude that therefore I am also a freak observer.

But a lower-confidence conclusion seems to be more cognitively stable. Suppose our best cosmological theory implies that 1% of observers are freaks. I might then accept that there is a non-trivial chance that I am one of the freaks. After all, my best understanding of the universe implies that there are such freaks, and I see no compelling reason to suppose that I couldn't be one of them.

Alternatively, maybe my best evidence should leave me undecided among lots of cosmologies, in some of which I'm a freak and in others of which I'm not. The possibility that I'm a freak undercuts my confidence in the evidence I seem to have for any specific cosmology, but that only adds to my indecision among the possibilities; it doesn't seem to compel elimination of the possibility that I am a freak.

Here's another way to think about it: As I sit here in my office, or seem to, and think about the scope of the cosmos, I find myself inclined to ascribe a non-trivial credence to some sort of very large or infinite cosmology, and also a non-trivial credence to the hypothesis that given enough time freak observers will spontaneously form, and also a non-trivial credence to the possibility that the freaks aren't vastly outnumbered by the normal observers. If I accept this conjunction of views, then it seems to me that I should also assign a bit of credence to the possibility that I am one of the freaks. To do otherwise would seem to commit me to near certainty on some proposition, such as about the relative nucleation rates of freaks vs. environments containing normal observers, that I wouldn't normally think of as something I know with near certainty.

Or maybe I should just take it as an absolutely certain "framework" assumption that I do have the kind of past I think I have, regardless of how many Eric-Schwitzgebelesque freaks the cosmos may contain? I can see how that might be a reasonable stance. But that approach has a dogmatic air that I find foreign.

If I allow that I'm not absolutely 100.0000000000000000000000000000% certain that I'm not a spontaneously formed freak observer, what sort of credence should I assign to the possibility that I am a freak? One in million? One in ten trillion? One in 10^100? I would like to go low! But I'm not sure that it's reasonable for me to go so low, once the possibility occurs to me and I start to consider my reasons pro and con. I'm inclined to think it is vastly less likely that I am a freak observer than that this ticket will win the one-in-ten-million Lotto jackpot -- but given the dubiety of cosmological theories and my inability to really assess them, should I perhaps be considerably less confident than that about my non-freakish position in the cosmos?

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "cosmology, skepticism"
Date: Thursday, 06 Feb 2014 15:13
Philosophers sometimes say that knowledge is a norm of assertion -- that a person should assert only what she knows. Since knowing some proposition P is usually taken to imply believing that same proposition P, commitment to a knowledge norm of assertion is generally thought to imply commitment to a belief norm of assertion: A person should assert only what she believes.

What happens, however, if one accepts, as I do, that knowledge that P does not imply belief that P? Can the belief norm be violated as long as the knowledge norm is satisfied? Bracketing, if we can, pragmatic and contextualist concerns (which I normally take quite seriously), is it acceptable to assert something one knows but does not believe?

I'm inclined to think it is.

Consider my favorite case of knowledge without belief (or at least without full, determinate belief), the prejudiced professor case:

Juliet, let’s suppose, is a philosophy professor who racially identifies as white. She has critically examined the literature on racial differences in intelligence, and she finds the case for racial equality compelling. She is prepared to argue coherently, sincerely, and vehemently for equality of intelligence and has argued the point repeatedly in the past. And yet Juliet is systematically racist in most of her spontaneous reactions, her unguarded behavior, and her judgments about particular cases. When she gazes out on class the first day of each term, she can’t help but think that some students look brighter than others – and to her, the black students never look bright. When a black student makes an insightful comment or submits an excellent essay, she feels more surprise than she would were a white or Asian student to do so, even though her black students make insightful comments and submit excellent essays at the same rate as do the others. And so on.

I am inclined to say, in such a case (assuming the details are fleshed out in plausible ways), that Juliet knows that all the races are equally intelligent, but her belief state is muddy and in-betweenish; it's not determinately correct to say that she believes it. Such in-between cases of belief require a more nuanced treatment than is permitted by straightforward ascription or denial of the belief. (I often analogize here to in-betweenish cases of having a personality trait like courage or extraversion.) Juliet determinately knows but does not determinately believe.

You might not accept this description of the case. My view about it is distinctly in the philosophical minority. However, suppose you grant my description. Is Juliet justified in asserting that all the races are equally intelligent, despite her not determinately believing that to be the case?

I'm inclined to think so. She has the evidence, she's taken her stand, she does not err when she asserts the proposition in debate, even if she cannot bring herself quite to live in a way consistent with determinately believing it to be so. However she is inclined spontaneously to respond to the world, the egalitarian proposition reflects her best, most informed judgment. Assertability in this sense tracks knowledge better than it tracks belief. She can properly assert despite not determinately believing.

*************** Objection: "Moore's paradox" is the strangeness of saying things like "It's raining but I don't believe that it's raining". One might object to the above that I now seem to be committed to the assertability of Moore-paradoxical sentences like

(1.) All the races are intellectually equal but I don't (determinately) believe that they are.

Reply: I grant that (1) is not properly assertable in most contexts. Rather, what is properly assertable on my view is something like:

(2.) All the races are intellectually equal, but I accept Schwitzgebel's dispositional approach to belief and it is true in the terms of that theory that I do not determinately believe that all the races are intellectually equal.

The non-assertability of (1) flows from the fact that my dispositional approach to belief is not the standard conception of belief. If my view about belief were to become the standard view, then (1) would become assertable.
Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "belief"
Date: Wednesday, 05 Feb 2014 09:02
One reason I'm a fan of MIT Press (the publisher of both of my books) is that for an academic press their prices are very low (my 2011 book is currently $14.21 at Amazon) which means that a broader range of people can afford the book than if it were at another press. Another reason I'm a fan is that MIT has tended to be a leader in exploring new electronic media.

So it's very cool that they've chosen a chapter of my Perplexities of Consciousness for their BITS project, a new enterprise which allows people to electronically buy a portion of an MIT Press book for a low price ($2.99 in this case) and then later, if the reader wants, the whole book for 40% off list price. The chapter they've chosen is "When Your Eyes Are Closed, What Do You See?", which although it is the eighth and final chapter of my book, does not require that the reader know material from the previous chapters -- thus, a reasonable choice for a BIT.

What I'd really love to see down the road is a model where you can buy any selection of pages from a book for a nickel per page.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "announcements, eyes closed, visual exper..."
Date: Thursday, 30 Jan 2014 15:35
For a couple of years now, I have been arguing that if materialism is true the United States probably has a stream of conscious experience over and above the conscious experiences of its citizens and residents. As it happens, very few materialist philosophers have taken the possibility seriously enough to discuss it in writing, so part of my strategy in approaching the question has been to email various prominent materialist philosophers to get a sense of whether they thought the U.S. might literally be phenomenally conscious, and if not why not.

To my surprise, about half of my respondents said they did not rule out the possibility. Two of the more interesting objections came from Fred Dretske (my undergrad advisor, now deceased) and Dan Dennett. I detail their objections and my replies in the essay in draft linked above. Although I didn't target him because he is not a materialist, [update 3:33 pm: Dave points out that I actually did target him, though it wasn't in my main batch] David Chalmers also raised an objection about a year ago in a series of emails. The objection has been niggling at me ever since (Dave's objections often have that feature), and I now address it in my updated draft.

The objection is this: The United States might lack consciousness because the complex cognitive capacities of the United States (e.g., to war and spy on its neighbors, to consume and output goods, to monitor space for threatening asteroids, to assimilate new territories, to represent itself as being in a state of economic expansion, etc.) arise largely in virtue of the complex cognitive capacities of the people composing it and only to a small extent in virtue of the functional relationships between the people composing it. Chalmers has emphasized to me that he isn't committed to this view, but I find it worth considering nonetheless, and others have pressed similar concerns.

This objection is not the objection that no conscious being could have conscious subparts (which I discuss in Section 2 of the essay and also here); nor is it the objection that the United States is the wrong type of thing to have conscious states (which I address in Sections 1 and 4). Rather, it's that what's doing the cognitive-functional heavy lifting in guiding the behavior of the U.S. are processes within people rather than the group-level organization.

To see the pull of this objection, consider an extreme example -- a two-seater homunculus. A two-seater homunculus is a being who behaves outwardly like a single intelligent entity but who, instead of having a brain, has two small people inside who jointly control the being's behavior, communicating with each other through very fast linguistic exchange. Plausibly, such a being has two streams of conscious experience, one for each homunculus, but no additional group-level stream for the system as a whole (unless the conditions for group-level consciousness are weak indeed). Perhaps the United States is somewhat like a two-seater homunculus?

Chalmers's objection seems to depend on something like the following principle: The complex cognitive capacities of a conscious organism (or at least the capacities in virtue of which the organism is conscious) must arise largely in virtue of the functional relationships between the subsystems composing it rather than in virtue of the capacities of its subsystems. If such a principle is to defeat U.S. consciousness, it must be the case both that

(a.) the United States has no such complex capacities that arise largely in virtue of the functional relationships between people, and

(b.) no conscious organism could have the requisite sort of complex capacities largely in virtue of the capacities of its subsystems.

Contra (a): This claim is difficult to assess, but being a strong, empirical negative existential (the U.S. has not even one such capacity), it seems a risky bet unless we can find solid empirical grounds for it.

Contra (b): This claim is even bolder. Consider a rabbit's ability to swiftly visually detect a snake. This complex cognitive capacity, presumably an important contributor to rabbit visual consciousness, might exist largely in virtue of the functional organization of the rabbit's visual subsystems, with the results of that processing then communicated to the organism as a whole, precipitating further reactions. Indeed, turning (b) almost on its head, some models of human consciousness treat subsystem-driven processing as the normal case: The bulk of our cognitive work is done by subsystems, which cooperate by feeding their results into a global workspace or which compete for fame or control. So grant (a) for the sake of argument: The relevant cognitive work of the United States is done largely within individual subsystems (people or groups of people) who then communicate their results across the entity as a whole, competing for fame and control via complex patterns of looping feedback. At the very abstract level of description relevant to Chalmers's expressed (but, let me re-emphasize, not definitively endorsed) objection, such an organization might not be so different from the actual organization of the human mind. And it is of course much bolder still to commit to the further view implied by (b), that no conscious system could possibly be organized in such a subsystem-driven way. It's hard to see what would justify such a claim.
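
To make the subsystem-driven organization vivid, here is a minimal toy sketch in Python (purely illustrative -- mine, not drawn from the post or from any particular global workspace theory; all names and numbers are invented). Subsystems compute privately, then compete to broadcast a result to the system as a whole:

    import random

    # Toy illustration of the subsystem-driven organization sketched above
    # (invented for illustration; not from any specific theory): each
    # subsystem computes privately, then the most salient result wins the
    # competition and is broadcast to the system as a whole.

    class Subsystem:
        def __init__(self, name):
            self.name = name

        def process(self, stimulus):
            # The cognitive heavy lifting happens here, inside the subsystem.
            salience = random.random()
            return salience, f"{self.name}: analysis of {stimulus!r}"

    def global_workspace_step(subsystems, stimulus):
        # Subsystems work independently; the winner's result is broadcast,
        # becoming available to every other subsystem.
        results = [s.process(stimulus) for s in subsystems]
        _, broadcast = max(results, key=lambda r: r[0])
        return broadcast

    parts = [Subsystem(n) for n in ("vision", "audition", "memory")]
    print(global_workspace_step(parts, "snake in the grass"))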

The two-seater homunculus is strikingly different from a rabbit or human system (or even a Betelgeusian beehead) because its communication runs between only two sub-entities, at a low information rate; the U.S., by contrast, is composed of about 300,000,000 sub-entities whose informational exchange is massive. The case is not similar enough to justify transferring intuitions from the one to the other.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "metaphysics, self-knowledge, USA conscio..."
Date: Thursday, 23 Jan 2014 15:24
... which is a recurrent topic of my research, as regular readers of this blog will know.

This new paper, co-authored with Joshua Rust, summarizes our work on the topic to date and offers a quantitative meta-analysis that supports our overall finding that professors of ethics behave neither morally better nor morally worse overall than do philosophers not specializing in ethics.

You might find it entirely unsurprising that ethicists should behave no differently than other professors. If you do find it unsurprising (Josh and I don't), you might still be interested in looking at another of Josh's and my papers, in which we think through some of the theoretical implications of this finding.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "ethics professors, moral psychology"
Date: Tuesday, 21 Jan 2014 15:20
Slowly catching up on science fiction classics, reading Lem's Solaris, I'm struck by how the narrator, Kris, escapes a skeptical quandary. Worried that his sensory experiences might be completely delusional, Kris concocts the following empirical test:

I instructed the satellite to give me the figure of the galactic meridians it was traversing at 22-second intervals while orbiting Solaris, and I specified an answer to five decimal points.

Then I sat and waited for the reply. Ten minutes later, it arrived. I tore off the strip of freshly printed paper and hid it in a drawer, taking care not to look at it.... Then I sat down to work out for myself the answer to the question I had posed. For an hour or more, I integrated the equations....

If the figures obtained from the satellite were simply the product of my deranged mind, they could not possibly coincide with [my hand calculations]. My brain might be unhinged, but it could not conceivably compete with the Station's giant computer and secretly perform calculations requiring several months' work. Therefore if the figures corresponded, it would follow that the Station's computer really existed, that I had really used it, and that I was not delirious (1961/1970, pp. 50-51).

In all but its details, Kris's test closely resembles an experiment Alan Moore and I have used in our attempt to empirically establish the existence of the external world (full paper in draft here).

Kris is hasty in concluding from this experiment that he must have used an actually existing computer. He might, for example, have been the victim of a deceiver with great computational powers, who could feed him the meridians within ten minutes of his asking. And Kris would have done better, I think, to have looked at the readout before doing his own calculations. By not looking until the end, he leaves open the possibility that he delusively creates the figures supposedly from the satellite only after he has derived the correct answers himself. Assuming he can trust his memory and arithmetical abilities for at least a short duration (and if not, he's really screwed), Kris should look at the satellite's figures first, holding them steady before his mind, while he confirms by hand that the numbers make mathematical sense.
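
In code, the improved protocol might look like this minimal sketch (a toy rendering; query_satellite and calculate_by_hand are hypothetical stand-ins for the Station's computer and the hour of hand integration):

    # Toy sketch of the improved protocol: record the satellite's figures
    # FIRST, then check them by independent calculation, with agreement
    # to five decimal points as in Lem's setup.

    def query_satellite():
        # Hypothetical stand-in for the Station's computer's reply.
        return [12.34568, 98.76543, 55.55556]

    def calculate_by_hand():
        # Hypothetical stand-in for Kris's independent integrations.
        return [12.345681, 98.765427, 55.555562]

    def figures_agree(reported, computed, decimals=5):
        # Agreement to five decimal points counts as a match.
        tol = 10 ** -decimals
        return all(abs(r - c) < tol for r, c in zip(reported, computed))

    # Look at the satellite's answer first and hold it fixed...
    reported = query_satellite()
    # ...then confirm by independent calculation.
    computed = calculate_by_hand()
    print("External computation (apparently) real:", figures_agree(reported, computed))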

Increasingly, I think the greatest science fiction writers are also philosophers. Exploring the limits of technological possibility inevitably involves confronting the central issues of metaphysics, epistemology, and human value.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "science fiction, skepticism, speculative..."
Date: Wednesday, 15 Jan 2014 13:28
I'm thinking (again) about beeping people during aesthetic experiences. The idea is this. Someone is reading a story, or watching a play, or listening to music. She has been told in advance that a beep will sound at some unexpected time, and when the beep sounds, she is to immediately stop attending to the book, play, or whatever, and note what was in her stream of experience at the last undisturbed moment before the beep, as best she can tell. (See Hurlburt 2011 for extensive discussion of such "experience sampling" methods.)

I've posted about this issue before; and although professional philosophy talks aren't paradigmatic examples of aesthetic performances, I have beeped people during some of my talks. One striking result: People spend lots of time thinking about things other than the explicit content of the performance -- for example, thinking instead about needing to go to the bathroom, or a sports bet they just won, or the weird color of an advertising flyer. And I'd bet Nutcracker audiences are similarly scatterbrained. (See also Schooler, Reichle, and Halpern 2004; Schubert, Vincs, and Stevens 2013.)

But I also get the sense that if I pause, I can gather the audience up. A brief pause is commanding -- in music (e.g., Roxanne) and in film, but especially in a live performance like a talk. Partly, I suspect, this is due to contrast with previous noise levels; but a pause also seems to raise curiosity about what's next -- a topic change, a point of emphasis, some unplanned piece of human behavior. (How interesting it is when the speaker drops his cup! -- much more interesting, usually, in a sad and wonderfully primate way, than the talk itself.)

I picture people's conscious attention coming in waves. We launch out together reasonably well focused, but soon people start drifting off in their various directions. The speaker pauses or does something else that draws attention, and that gathers everyone briefly back together. Soon the audience is off drifting again.

We could study this with beepers. We could see if I'm right about pauses. We could see which parts of a performance tend to draw people back from their wanderings and which tend to escape conscious attention. We could see how immersive a performance is (in one sense of "immersive") by seeing how frequently people report being off topic vs. on a tangentially related topic vs. focused on the immediate content of the performance. We could vastly improve our understanding of the audience experience. New avenues for criticism could open up. Knowing how to capture and manipulate the waves could help writers and performers create performances more in line with their aesthetic goals. Maybe artists could even learn to want waves and gatherings of a certain sort, adding a new dimension to those goals.
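
Here is a minimal sketch of how such a study's scheduling and tallying might go (the category labels and simulated reports are invented placeholders for real audience data):

    import random
    from collections import Counter

    # Toy sketch of the proposed beeper study: beep at unexpected times,
    # collect self-reports, and tally how often attention was on the
    # content, on a tangent, or elsewhere entirely.

    CATEGORIES = ("on-content", "tangential", "off-topic")  # invented labels

    def schedule_beeps(duration_min, n_beeps):
        # Beep times uniformly random over the performance, hence unexpected.
        return sorted(random.uniform(0, duration_min) for _ in range(n_beeps))

    def simulated_report():
        # Stand-in for a real audience member's self-report at the beep.
        return random.choices(CATEGORIES, weights=(0.4, 0.3, 0.3))[0]

    beeps = schedule_beeps(duration_min=90, n_beeps=6)
    tally = Counter(simulated_report() for _ in beeps)
    immersion = tally["on-content"] / len(beeps)  # one crude "immersiveness" index
    print("Beep times (min):", [round(t, 1) for t in beeps])
    print("Reports:", dict(tally), "| immersion:", round(immersion, 2))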

As far as I can tell, no one has ever done a systematic experience sampling study during aesthetic experience that explores these issues. It's time.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "aesthetics, psychological methods, strea..."
Date: Friday, 10 Jan 2014 10:03
I am a passenger in a jumbo jet that is descending through turbulent night fog into New York City. I'm not usually nervous about flying, but the turbulence is getting to me. I know that the odds of dying in a jet crash with a major US airline are well below one in a million, but descent in difficult weather conditions is among the most dangerous parts of flight -- so maybe I should estimate my odds of death in the next few minutes as about one in a million or one in ten million? I can't say those are odds I'm entirely happy about.

But then I think: Maybe some radically skeptical scenario is true. Maybe, for example, I'm a short-term sim -- an artificial, computerized being in a small world, doomed soon to be shut down or deleted. I don't think that is at all likely, but I don't entirely rule it out. I have about a 1% credence that some radically skeptical scenario or other is true, and about 0.1% credence, specifically, that I'm in a short-term sim. In a substantial portion of these radically skeptical scenarios, my life will be over soon. So my credence that my life will soon end for some skeptical-scenario type reason is maybe about one in a thousand or one in ten thousand -- orders of magnitude higher than my credence that my life will soon end for the ordinary-plane-crash type of reason.
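
In explicit numbers, the comparison runs roughly as follows (a back-of-the-envelope sketch; the fraction of skeptical scenarios involving imminent death is an assumed stand-in for the "substantial portion" above):

    import math

    # Back-of-the-envelope version of the comparison above; frac_imminent
    # is an assumed stand-in for the unquantified "substantial portion".

    p_crash = 1e-6        # dying in this descent: roughly one in a million
    p_skeptical = 1e-2    # credence that some radically skeptical scenario is true
    frac_imminent = 0.1   # assumed fraction of those scenarios in which life ends soon

    p_skeptical_death = p_skeptical * frac_imminent  # = 1e-3, "one in a thousand"

    gap = math.log10(p_skeptical_death / p_crash)
    print(f"Skeptical-death credence: {p_skeptical_death:.0e}")
    print(f"About {gap:.0f} orders of magnitude above the plane-crash credence.")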

Still, the plane-crash possibility worries me more than the skeptical possibility.

Does the fact that these skeptical reflections leave me emotionally cold show that I don't really, "deep down", have even a one-in-a-million credence in at least the imminent-death versions of the skeptical scenarios? Now, maybe I shouldn't worry about those scenarios even if I truly assign a non-trivial credence to them. After all, there's nothing I can do about them, no action I can reasonably take in light of them. I can't, for example, buy sim-insurance. But if that's why the scenarios leave me unmoved, the same should be true of the descending plane. There's nothing I can do about the fog; I need to just sit tight. As a general matter, helplessness doesn't eliminate anxiety.

Here my interest in radical skepticism intersects another of my interests, the nature of belief. What would be involved in really believing that there is a non-trivial chance that one will soon die because some radically skeptical scenario is true? Does genuine belief only require saying these things to oneself, with apparent sincerity, and thinking that one accepts them? Or do they need to get into your gut?

My view is that it's an in-between case. To believe, on my account, is to have a certain dispositional profile -- to be disposed to reason, and to act and react, both inwardly and outwardly, as ordinary people would expect someone with that belief to do, given their other related attitudes. So, for example, to believe that something carries a 1/10,000 risk of death is in part to be disposed to sincerely say it does and to draw conclusions from that fact (e.g., that it's riskier than something with a 1/1,000,000 risk of death); but it is also to have certain emotional reactions, to spontaneously draw upon it in one's everyday thinking, and to guide one's actions in light of it. I match the dispositional profile, to some extent, for believing there's a small but non-trivial chance I might soon die for skeptical-scenario-type reasons -- for example, I will sincerely say so when reflecting in my armchair -- but in other important ways I seem not to match it.

It is not at all uncommon for people intellectually to accept certain propositions -- for example, that their marriage is one of the most valuable things in their lives, or that it's more important for their children to be happy than to get good grades, or that custodians deserve as much respect as professors -- while in their emotional reactions and spontaneous thinking, they do not very closely match the dispositional profile constitutive of believing such things. I have argued that this is one important way in which we can occupy the messy middle space between being accurately describable as believing something and being accurately describable as failing to believe it. My own low-but-not-negligible credence in radically skeptical scenarios is something like this, I suppose.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "belief, skepticism"