Date: Wednesday, 23 Jul 2014 16:08
Might there be excellent reasons to embrace radical skepticism, of which we are entirely unaware?

You know brain-in-a-vat skepticism -- the view that maybe last night while I was sleeping, alien superscientists removed my brain, envatted it, and are now stimulating it to create the false impression that I'm still living a normal life. I see no reason to regard that scenario as at all likely. Somewhat more likely, I argue -- not very likely, but I think reasonably drawing a wee smidgen of doubt -- are dream skepticism (might I now be asleep and dreaming?), simulation skepticism (might I be an artificial intelligence living in a small, simulated world?), and cosmological skepticism (might the cosmos in general, or my position in it, be radically different than I think, e.g., might I be a Boltzmann brain?).

"1% skepticism", as I define it, is the view that it's reasonable for me to assign about a 1% credence to the possibility that I am actually now enduring some radically skeptical scenario of this sort (and thus about a 99% credence in non-skeptical realism, the view that the world is more or less how I think it is).

Now, how do I arrive at this "about 1%" skeptical credence? Although the only skeptical possibilities to which I am inclined to assign non-trivial credence are the three just mentioned (dream, simulation, and cosmological), it also seems reasonable for me to reserve a bit of my credence space, a bit of room for doubt, for the possibility that there is some skeptical scenario that I haven't yet considered, or that I've considered but dismissed and should take more seriously than I do. I'll call this wildcard skepticism. It's a kind of meta-level doubt. It's a recognition of the possibility that I might be underappreciating the skeptical possibilities. This recognition, this wildcard skepticism, should slightly increase my credence that I am currently in a radically skeptical scenario.

You might object that I could equally well be over-estimating the skeptical possibilities, and that in recognition of that possibility, I should slightly decrease my credence that I am currently in a radically skeptical scenario; and thus the possibilities of over- and underestimation should cancel out. I do grant that I might as easily be overestimating as underestimating the skeptical possibilities. But over- and underestimation do not normally cancel out in the way this objection supposes. Near confidence ceilings (my 99% credence in non-skeptical realism), meta-level doubt should tend overall to shift one's credence down.

To see this, consider a cartoon case. Suppose I would ordinarily have a 99% credence that it won't rain tomorrow afternoon (hey, it's July in southern California), but I also know one further thing about my situation: There's a 50% chance that God has set things up so that from now on the weather will always be whatever I think is most likely, and there's a 50% chance that God has set things up so that whenever I have an opinion about the weather he'll flip a coin to make it only 50% likely that I'm right. In other words, there's a meta-level reason to think that my 99% credence might be an underestimation of the conformity of my opinions to reality or equally well might be an overestimation. What should my final credence in sunshine tomorrow be? Well, 50% times 100% (God will make it sunny for me) plus 50% times 50% (God will flip the coin) = 75%. In meta-level doubt, the down weighs more than the up.
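
To put that asymmetry in bare numbers -- this is just a toy restatement of the cartoon above; the 0.05-perturbation cases are made up for illustration and aren't part of the original argument:

```python
# Toy illustration: near a confidence ceiling, an even chance that my credence is
# too low or too high shifts the expected credence downward, because the upward
# correction gets capped at 1.0 while the downward correction does not.

def expected_credence(c, delta):
    up = min(1.0, c + delta)    # 50% chance I'm underestimating by delta (capped at 1.0)
    down = max(0.0, c - delta)  # 50% chance I'm overestimating by delta
    return 0.5 * up + 0.5 * down

print(expected_credence(0.99, 0.49))  # 0.75 -- the God/coin-flip cartoon: 0.5*1.0 + 0.5*0.50
print(expected_credence(0.99, 0.05))  # 0.97 < 0.99 -- even mild meta-level doubt drags the credence down
print(expected_credence(0.50, 0.05))  # 0.50 -- away from the ceiling, the up and the down cancel
```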

Consider the history of skepticism. In Descartes's day, a red-blooded skeptic might have reasonably invested a smidgen more doubt in the possibility that she was being deceived by a demon than it would be reasonable to invest in that possibility today, given the advance of a science that leaves little room for demons. On the other hand, a skeptic in that era could not even have conceived of the possibility that she might be an artificial intelligence inside a computer simulation. It would be epistemically unfair to such a skeptic to call her irrational for not considering specific scenarios beyond her society's conceptual ken, but it would not be epistemically unfair to think she should recognize that given her limited conceptual resources and limited understanding of the universe, she might be underestimating the range of possible skeptical scenarios.

So now us too. That's wildcard skepticism.

[image source]

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "epistemology"
Date: Wednesday, 23 Jul 2014 11:15
Eric Kaplan, who overlapped with me in grad school at Berkeley but who is now much more famous as a comedy writer for Big Bang Theory, Futurama, and several other shows, has been cooking up weird philosophical-comical blog posts since March at his Wordpress blog here.

Check it out!

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "announcements"
Date: Thursday, 17 Jul 2014 08:54
One of the most prominent theories of consciousness is Giulio Tononi's Integrated Information Theory. The theory is elegant and interesting, if a bit strange. Strangeness is not necessarily a defeater if, as I argue, something strange must be true about consciousness. One of its stranger features is what Tononi calls the Exclusion Postulate. The Exclusion Postulate appears to render the presence or absence of consciousness almost irrelevant to a system's behavior.

Here's one statement of the Exclusion Postulate:

The conceptual structure specified by the system must be singular: the one that is maximally irreducible (Φ max). That is, there can be no superposition of conceptual structures over elements and spatio-temporal grain. The system of mechanisms that generates a maximally irreducible conceptual structure is called a complex... complexes cannot overlap (Tononi & Koch 2014, p. 5).
The basic idea here is that conscious systems cannot nest or overlap. Whenever two information-integrating systems share any parts, consciousness attaches to the one that is the most informationally integrated, and the other system is not conscious -- and this applies regardless of temporal grain.

The principle is appealing in a certain way. There seem to be lots of information-integrating subsystems in the human brain; if we deny exclusion, we face the possibility that the human mind contains many different nesting and overlapping conscious streams. (And we can tell by introspection that this is not so -- or can we?) Also, groups of people integrate information in social networks, and it seems bizarre to suppose that groups of people might have conscious experience over and above the individual conscious experiences of the members of the groups (though see my recent work on the possibility that the United States is conscious). So the Exclusion Postulate allows Integrated Information Theory to dodge what might otherwise be some strange-seeming implications. But I'd suggest that there is a major price to pay: the near epiphenomenality of consciousness.

Consider an electoral system that works like this: On Day 0, ten million people vote yes/no on 20 different ballot measures. On Day 1, each of those ten million people gets the breakdown of exactly how many people voted yes on each measure. If we want to keep the system running, we can have a new election every day and individual voters can be influenced in their Day N+1 votes by the Day N results (via their own internal information integrating systems, which are subparts of the larger social system). Surely this is society-level information integration if anything is.

Now according to the Exclusion Postulate, whether the individual people are conscious or instead the societal system is conscious will depend on how much information is integrated at the person level vs. the societal level. Since "greater than" is sharply dichotomous, there must be an exact point at which societal-level information integration exceeds person-level information integration. (Tononi and Koch appear to accept a version of this idea in their 2014 paper, endnote xii [draft of 26 May 2014].)

As soon as this crucial point is reached, all the individual people in the system will suddenly lose consciousness. However, there is no reason to think that this sudden loss of consciousness would have any appreciable effect on their behavior. All their interior networks and local outputs might continue to operate in virtually the same way, locally inputting and outputting very much as before. The only difference might be that individual people hear back about X+1 votes on the Y ballot measures instead of X votes. (X and Y here can be arbitrarily large, to ensure sufficient informational flow between individuals and the system as a whole. We can also allow individuals to share opinions via widely-read social networks, if that increases information integration.) Tononi offers no reason to think that a small threshold-crossing increase in the amount of integrated information (Φ) at the societal level would profoundly influence the lower-level behavior of individuals. Φ is just a summary number that falls out mathematically from the behavioral interactions of the individual nodes in the network; it is not some additional thing with direct causal power to affect the behavior of those nodes.

I can make the point more vivid. Suppose that the highest-level Φ in the system belongs to Jamie. Jamie has a Φ of X. The societal system as a whole has a Φ of X-1. The highest-Φ individual person other than Jamie has a Φ of X-2. Because Jamie's Φ is higher than the societal system's, the societal system is not a conscious complex. Because the societal system is not a conscious complex, all those other individual people with Φ of X-2 or less can be conscious without violating the Exclusion Postulate. But Tononi holds that a person's Φ can vary over the course of the day -- declining in sleep, for example. So suppose Jamie goes to sleep. Now the societal system has the highest Φ and no individual human being in the system is conscious. Now Jamie wakes and suddenly everyone is conscious again! This might happen even if most or all of the people in the society have no knowledge of whether Jamie is asleep or awake and exhibit no changes in their behavior, including in their self-reports of consciousness.
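
The Jamie case can be put in toy form. Here is a minimal sketch, with stipulated Φ values and a cartoon max-Φ exclusion rule standing in for Tononi's actual formalism (nothing below is the real IIT calculus):

```python
# Cartoon of the Exclusion Postulate as glossed above: among overlapping candidates,
# only the one with maximal phi counts as a conscious complex. Phi values are stipulated.

def conscious_complexes(society_phi, person_phis):
    if society_phi > max(person_phis.values()):
        return {"society"}       # the societal system excludes all the people it overlaps
    return set(person_phis)      # otherwise the societal system is excluded; the people can be conscious

X = 10
society_phi = X - 1
awake  = {"Jamie": X,     "highest other person": X - 2}
asleep = {"Jamie": X - 3, "highest other person": X - 2}   # Jamie's phi declines in sleep

print(conscious_complexes(society_phi, awake))   # {'Jamie', 'highest other person'}
print(conscious_complexes(society_phi, asleep))  # {'society'} -- no individual person is conscious
```

The only thing that changes between the two calls is Jamie's stipulated Φ; everyone else's inputs and outputs are held fixed, which is the point.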

More abstractly, if you are familiar with Tononi's node-network pictures, imagine two very similar largish systems, both containing a largish subsystem. In one of the two systems, the Φ of the whole system is slightly less than that of the subsystem. In the other, the Φ of the whole system is slightly more. The node-by-node input-output functioning of the subsystem might be virtually identical in the two cases, but in the first case, it would have consciousness -- maybe even a huge amount of consciousness if it's large and well-integrated enough! -- and in the other case it would have none at all. So its consciousness or lack thereof would be virtually irrelevant to its functioning.

It doesn't seem to me that this is a result that Tononi would or should want. If Tononi wants consciousness to matter, given the Exclusion Postulate, he needs to show why slight changes of Φ, up or down at the higher level, would reliably cause major changes in the behavior of the subsystems whenever the Φ(max) threshold is crossed at the higher level. There seems to be no mechanism that ensures this.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "stream of experience"
Date: Thursday, 10 Jul 2014 12:21
I'm in Florida with glitchy internet and a 102-degree fever, so now seems like a good day to fall back on the old blogger's privilege of a repost from the past (Sept 15, 2009).

--------------------------------

Usually, philosophy is advocacy. Sometimes it's disruption without a positive thesis in mind. More rarely, it's confession.

The aim of the confessional philosopher is not the same as that of someone who confesses to a spouse or priest, nor quite the same (though perhaps closer) as that of a confessional poet. It is rather this: to display oneself as a model of a certain sort of thinking, while not necessarily endorsing that style of thinking or the conclusions that flow from it. Confessional philosophy tends to center on skepticism and sin.

Consider, in Augustine's Confessions, the famous discussion of stealing pears, wherein Augustine displays the sinful pattern of his youthful mind. Augustine's aim is not so much, it seems to me, to advocate a certain position (such as that sinful thoughts tend to take such-and-such a form) as to offer the episode for contemplation by others, with no pre-packaged conclusion, and perhaps also to induce humility in both the reader and himself. He offers an analysis of his motives -- that he was trying to simulate freedom by getting away with something forbidden (which would fit with his general theory of sin, that it involves trying to possess something that can only be given by god) -- but then he undercuts that analysis by noting that he would definitely not have stolen the pears alone. Was it then that he valued the camaraderie of his sinful friends? He rejects that explanation also -- "that gang-mentality too was a nothing" -- and after waffling over various possibilities he concludes "It was a seduction of the mind hard to understand.... Who can unravel this most snarled, knotty tangle?" (4th c. CE/1997, p. 72-73)

Descartes's Meditations, especially the first two, are presented as confessional -- perhaps partly to display an actual pattern in his past thinking, but perhaps also partly as a pose. Here we see or seem to see the struggles and confusions of a man bent on finding a secure foundation for his thought. Hume's skeptical conclusion to Book One of his Treatise seems to me more genuinely confessional, when he asks how he can dare to "venture upon such bold enterprizes when beside those numberless infirmities peculiar to myself, I find so many which are common to human nature" (1739/1978, p. 265). "The intense view of these manifold contradictions and imperfections in human reason has so wrought upon me, and heated my brain, that I am ready to reject all belief and reasoning.... I dine, I play a game of back-gammon, I converse, and am merry with my friends; and when after three or four hours' amusement I wou'd return to these speculations, they appear so cold, and strain'd, and ridiculous, that I cannot find in my heart to enter into them any farther" (p. 268-269). We see how the skeptic writhes. Hume displays his pattern of skeptical thought, but offers no way out, nor chooses between embracing his skeptical arguments and rejecting them. Nonetheless, in books two and three he's back in the business of philosophical argumentation.

Generally, it's better to offer a tight, polished exposition or argument than to display one's thoughts, errors, and uncertainties. That partly explains the rarity of confessional philosophy. But sometimes, no model of error or uncertainty will serve better than oneself.

[for some discussion, see the comments section of the original post]

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "metaphilosophy"
Date: Monday, 30 Jun 2014 22:25
A couple of months ago, I had some great fun chatting with Richard Brown and Pete Mandik at SpaceTimeMind. Pete has now edited our conversation into two podcasts in their engaging, energetic style:

Part One: Death and Logic

Part Two: Alien and Machine Minds

The episodes are free-standing, so if the topic of Part Two interests you more, feel free to skip straight to it. There will be a few quick references back to our Part One discussion of modality and hypotheticals, but nothing essential.

Although I think Part Two is a very interesting conversation, I do have one regret about it: It took me so long to gather Richard's view about alien consciousness that I didn't manage to articulate very well my reasons for disagreeing. Something early in the conversation led me to think that Richard was allowing that probably there are (somewhere in the wide, wide universe) aliens constructed very differently from us, without brains, who have highly sophisticated behavior -- behavior as sophisticated as our own -- and that his view is that such beings have no conscious experience. By the end of the episode, it became clear to me that his view, instead, is that there probably aren't such beings (but if there were, we would have good reason to regard them as conscious). He offered empirical evidence for this conclusion: that all beings on Earth that are capable of highly sophisticated behavior have brains like ours.

If I had understood his view earlier in the conversation, I might have offered him something like this reply:

(1.) Another possible explanation for the fact that all (or most?) highly intelligent Earthlings have brains structured like ours is that we share ancestry. It remains open that in a very different evolutionary context, drawing upon different phylogenetic resources, a very different set of structures might be able to ground highly intelligent (e.g. sophisticated linguistic, technology-building) behavior.

(2.) Empirical evidence on Earth suggests that at least moderately complicated systems can be designed with very different material structures (e.g., gas vs. battery cars, magnetic tape drives vs. laser drives; insect locomotion vs. human locomotion). I see no reason not to extrapolate such potential diversity to more complex cognitive systems.

(3.) If the universe is vast enough -- maybe even infinite, as many cosmologists now think -- then even extremely low probability events and systems will be actualized somewhere.

Anyhow, Richard and Pete's podcasts have a great energy and humor, and they dive fearlessly into big-picture issues in philosophy of mind. I highly recommend their podcasts.

(For Splintered Mind readers more interested in moral psychology, I recommend the similarly fun and fearless Very Bad Wizards podcast with David Pizarro and (former Splintered Mind guest blogger) Tamler Sommers.)

Author: "Eric Schwitzgebel (noreply@blogger.com)"
Date: Monday, 23 Jun 2014 19:17
Oh, when the saints go marching in
Oh, when the saints go marching in
Lord, I want to be in that number
When the saints go marching in.
No. No you don't, Louis. Not really.

If you want to be a saint, dear reader, or the secular equivalent, then you know what to do: Abandon those selfish pleasures, give your life over to the best cause you know (or if not a single great cause then a multitude of small ones) -- all your money, all your time. Maybe you'll misfire, but at least we'll see you trying. But I don't think we see you trying.

Closer to what you really want, I suspect, is this: Grab whatever pleasures you can here on Earth consistent with just squeaking through the pearly gates. More secularly: Be good enough to meet some threshold, but not better, not a full-on saint, not at the cost of your cappuccino and car and easy Sundays. Aim to be just a little bit better, maybe, in your own estimation, than your neighbor.

Here's where philosophical moral reflection can come in very handy!

As regular readers will know, Joshua Rust and I have done a number of studies -- eighteen different measures in all -- consistently finding that professors of ethics behave morally no better than do socially similar comparison groups. These findings create a challenge for what we call the booster view of philosophical moral reflection. On the booster view, philosophical moral reflection reveals moral truths, which the person is then motivated to act on, thereby becoming a better person. Versions of the booster view were common in both the Eastern and the Western philosophical traditions until the 19th century, at least as a normative aim for the discipline: From Confucius and Socrates through at least Wang Yangming and Kant, philosophy done right was held to be morally improving.

Now, there are a variety of ways to duck this conclusion: Maybe philosophical ethics neither does nor should have any practical relevance to the philosophers expert in it; or maybe most ethics professors are actually philosophizing badly; or.... But what I'll call the calibration view is, I think, among the more interesting possibilities. On the calibration view, the proper role of philosophical moral theorizing is not moral self-improvement but rather more precisely targeting the (possibly quite mediocre) moral level you're aiming for. This could often involve consciously deciding to act morally worse.

Consider moral licensing in social psychology and behavioral economics. When people do a good deed, they then seem to behave worse in follow-up measures than people who had no opportunity to do a good deed first. One possible explanation is something like calibration: You want to be only so good and not more. An unusually good deed inflates you past your moral target; you can adjust back down by acting a bit jerkishly later.

Why engage in philosophical moral reflection, then? To see if you're on target. Are you acting more jerkishly than you'd like? Seems worth figuring out. Or maybe, instead, are you really behaving too much like a sweetheart/sucker/do-gooder and really you would feel okay taking more goodies for yourself? That could be worth figuring out, too. Do I really need to give X amount to charity to be the not-too-bad person I'd like to think I am? Could I maybe even give less? Do I really need to serve again on such-and-such worthwhile-but-boring committee, or to be a vegetarian, or do such-and-such chore rather than pushing it off on my wife? Sometimes yes, sometimes no. When the answer is no, my applied philosophical moral insight will lead me to behave morally worse than I otherwise would have, in full knowledge that this is what I'm doing -- not because I'm a skeptic about morality but because I have a clear-eyed vision of how to achieve exactly my own low moral standards and nothing more.

If this is right, then two further things might follow.

First, if calibration is relative to peers rather than absolute, then embracing more stringent moral norms might not lead to improvements in moral behavior in line with those more stringent norms. If one's peers aren't living up to those standards, one is no worse relative to them if one also declines to do so. This could explain the cheeseburger ethicist phenomenon -- the phenomenon of ethicists tending to embrace stringent moral norms (such as that eating meat is morally bad) while not being especially prone to act in accord with those stringent norms.

Second, if one is skilled at self-serving rationalization, then attempts at calibration might tend to misfire toward the low side, leading one on average away from morality. The motivated, toxic rationalizer can deploy her philosophical tools to falsely convince herself that although X would be morally good (e.g., not blowing off responsibilities, lending a helping hand) it's really not required to meet the mediocre standards she sets herself and the mediocre behavior she sees in her peers. But in fact, she's fooling herself and going even lower than she thinks. When professional ethicists behave in crappy ways, such mis-aimed low-calibration rationalizing is, I suspect, often exactly what's going on.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "ethics professors, moral psychology, psy..."
Date: Tuesday, 17 Jun 2014 11:05
Every year's end at UC Riverside, the philosophy faculty meet for three hours "to discuss the graduate students". Back in the 1990s when I was a grad student, I seem to recall the Berkeley faculty doing the same thing. The practice appears to be fairly widespread. After years of feeling somewhat uncomfortable with it, I've tentatively decided I'm opposed. I'd be interested to hear from others with positive or negative views about it.

Now, there are some good things about these year-end meetings. Let's start with those.

At UCR, the formal purpose of the meeting is to give general faculty input to the graduate advisor, who can use that input to help her advising. The idea is that if the faculty as a whole think that a student is doing well and on track, the graduate advisor can communicate that encouraging news to the student; and also, when there are opportunities for awards and fellowships, the graduate advisor can consider those highly regarded students as candidates. And if the faculty as a whole think that a student is struggling, the faculty can diagnose the student's weaknesses and help the graduate advisor give the student advice that might help the student improve. Hypothetical examples (not direct quotes): "Some faculty were concerned about your inconsistent attendance at seminar meetings." "The sense of the faculty is that while you have considerable promise, your writing would be improved if you were more charitable toward the views of philosophers you disagree with."

Other benefits are these: It helps the faculty gain a sense of the various graduate students and how they are doing, presumably a good thing. If a student has struggled in one of your classes but seems to be well regarded by other faculty, that can help you see the student in a better light. It's an opportunity to correct misapprehensions. In the rare case of a student with very serious problems (e.g., mental health issues), it can sometimes be useful for the faculty as a whole to be aware of those issues.

But in my mind, all of those advantages are outweighed by the tendency of these discussions to create a culture in which there's a generally accepted consensus opinion about which students are doing well and which students are not doing so well. I would prefer, and I think for good reason, to look at the graduate students in my seminar the first day, or to look at a graduate student who asks me to be on her dissertation committee, without the burden of knowing what the other faculty think about her. It's widely accepted in educational psychology that teachers' initial impressions about which students are likely to succeed and fail have a substantial influence on student performance (the Pygmalion Effect). I want each student to meet each professor with a chance to make a new first impression. Sometimes students struggle early but then end up doing a terrific job. Within reason, we should do what we can to give students the chance to leave early poor performance behind them, rather than reiterate and generally communicate a negative perception (especially if that negative perception might partly be grounded in implicit bias or in vague impressions about who "seems smart"). Also, some students will have conflicts with some of their professors, either due to personality differences or due to differences in philosophical style or interests, and it's somewhat unfair to such students for a professor to have a platform to communicate a negative opinion without the student's having a similar platform.

I don't want to give the impression that these faculty meetings are about bad-mouthing students. At UCR, the opposite is closer to the truth. Faculty are eager to pipe in with praise for the students who have done well in their courses, and negative remarks are usually couched very carefully and moderately. We like our students and we want them to do well! The UCR Philosophy Department has a reputation for being good to its graduate students -- a reputation which is, in my biased view, well deserved. (This makes me somewhat hesitant to express my concerns about these year-end meetings, out of fear that my remarks will be misinterpreted.) But despite the faculty's evident well-meaning concern for, and praise of, and only muted criticism of, our graduate students in these year-end meetings, I retain my concerns. I imagine the situation is considerably worse, and maybe even seriously morally problematic, at departments with toxic faculty-student relations.

What's to be done instead?

One possibility is that the graduate advisor get input privately from the other faculty (either face to face or by email), in light of which she can give feedback to her advisees. In fact, private communication might be epistemically better, since communicating opinions independently, rather than in a group context, will presumably reduce the problematic human tendency toward groupthink -- though there's also the disadvantage that private input is less subject to correction, and perhaps (depending on the interpersonal dynamics) less likely to be thoughtfully restrained, than comments made in a faculty meeting.

Another possibility is to drop the goal of having the faculty attempt an overall summary assessment of the quality of the students. For awards and fellowships, early-career students can be assessed based on grades and timely completion of requirements. And advanced students can be nominated for awards and fellowships directly by their supervising faculty without the filter of impressions that other faculty might have of that student based on the student's coursework from years ago. And students can, and presumably do, hear feedback from individual faculty separately, a practice that can be further encouraged.

As I mentioned, my opinion is only tentative and I'd be interested to hear others' impressions. Please, however, no comments that reveal the identity of particular people.

[image source]

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "professional issues in philosophy"
Date: Thursday, 12 Jun 2014 10:12
The Philosophers' Carnival, as you probably know, posts links to selected posts from around the blogosphere, chosen and hosted by a different blogger every month. Since philosophers are basically just children in grown-up bodies (as a Gopnik student, I intend this as flattery), I use a playground theme.

The Philosophy of Mind Sandpit: I charge into the sandpit. There's David Papineau with his cricket bat staring at me, incredibly focused -- but why does a batter need to be focused if batting is just reflex responsiveness? There must be something more. But we don't know what it is, says R. Scott Bakker, most of whose Three Pound Brain is, he admits, a mystery to him. We're all blind (to the machinery of our cognitive activity) but we're blind to this blindness, and so invent dualist ontologies. Why am I digging here, then? I don't know. Why do I believe he might be wrong? I don't know that either! Scott agrees: I have no idea why I believe he might be wrong. But at least, says Wolfgang Schwarz, my disbelief is very fine-grained, you know, like this sand right here.

The Curving Tunnel of Logic and Language: Into the darkness we go, with Jason Zarri's fuzzy argument for crisp negation. I seem to be turned around, in a half-true circle! Worse still, I seem to be stuck with a correspondence theory of truth, since Tristan Haze is telling me that my projective-based skepticism about facts is itself a projective-fallacy. Oy, this is dizzier than a whirligig! I try to get out of the tunnel, but here comes Eli Sennesh with two boxes and a nearly-omniscient demon and he's trying to get me the million dollars instead of the thousand I thought I knew I was rationally doomed to.

The Epistemic Slide: Hi, Richard Chappell! Would you like to play this little non-normative game with me, called "seeking the truth"? No? You say that my continued attachment to such a game is arbitrary by my own lights? Wah! Good thing I don't believe that your criticism has any objective normative merit. La-la-la. Meanwhile, Ralph Wedgwood from Certain Doubts is trying to get things -- pieces of knowledge, or is it gum? -- to adhere to me, as long as they adhere in the sense that if and only if the case were sufficiently similar with respect to what makes it rational for me to believe P1 in C1 would I also believe P2 in C2. Good thing Richard Pettigrew has given me a metric for determining how inaccurate my total doxastic state is!

The Moral Teeter-Totter: Look over there! Jonny Pugh is bouncing up and down, tip, don't tip, tip, don't tip -- I think he might tip right over on the question of whether new technologies that make the option to tip more salient will and should change the culture of tipping. Stacey Goguen at Feminist Philosophers has a nice compilation of recent reflections on the ups and downs of "trigger warnings" in the classroom. And now here's Alexander Pruss telling me that intentionally making babies is morally wrong because I can't have any specific baby's good in mind and I shouldn't make a baby for reasons that don't include the specific baby's own good. Fine with me! Making babies is gross. And if some of us kids do it anyway, it was only by accident, when we were playing doctor.

The Philosophy of Science Picnic Table: Ah, there's Scott Aaronson, looking skeptically at the consciousness sandwich Giulio Tononi gave him. Evidently, Giulio told him the moon is made of peanut butter. But Dick Dorkins at Genotopia isn't worried. In fact, he's pleased that he finally has really scientifically solid evidence, that the scientists themselves (but not the Wall Street Journal) are too wimpy to embrace, that his British marmite-and-lard is superior to the tawnier sandwiches of people from more southerly continents or subcontinents or whatever they are. (Do I see his tongue in his cheek?)

The Historical Jungle Gym: Lunch is over. Time to climb around and get sick! Barry Stocker at NewAPPS is on top of the jungle gym, wondering why more people aren't thinking about the ancient skeptic Sextus Empiricus as a virtue ethicist. I don't know! A dubious proposition. But I'm at peace with that.

Fingerpainting Aesthetics on the Playground Walls: See that familiar avian aesthetician over there, drawing pictures of Christopher Nolan's cinematic femmes fatales? They might not be what they seem! Wait, does that woman have two faces?

Metaphilosophical/Issues-in-the-Profession Party Poopers: Look, I just want to pick a side, say something that makes sense, and stop, okay John Holbo. All this thinking is too hard. So don't try to diagnose why all the kids around here are such bad philosophical writers. It's because Aunt Flo can't buy me enough electric blue Gogurt on minimum wage. And here is Eric Schliesser, criticizing poor Slavoj Zizek just for telling his students "if you don’t give me any of your shitty papers, you get an A". Slavoj wants to be nice. He really does. Really, really, he does. And he would be nice if he weren't always surrounded by stupid, incompetent jerks unlike himself.

The next carnival will be hosted in a month at Siris. You may submit suggestions for inclusion in the next carnival (from your own blog or favorite posts from others' blogs) at the Philosophers' Carnival homepage.

[Revised 2:22 pm.]

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "announcements"
Date: Monday, 09 Jun 2014 15:11
There isn't a lot of philosophical (or even psychological) work on schadenfreude -- the pleasure people sometimes feel at witnessing or hearing about (but not personally causing) the suffering of others. But the most prominent analyses treat it as a type of pleasure one feels seeing someone get their comeuppance. John Portmann calls schadenfreude "an emotional corollary of justice" (2000, p. 197*). Aaron Ben-Ze'ev suggests that a typical feature is that the sufferer deserves the misfortune (1992, p. 41). Frans de Waal suggests that schadenfreude "derives from a sense of fairness" (1996, p. 85).

We could define schadenfreude as involving just deserts, for the sake of philosophical analysis. But doing so misses, we think, central cases that should be within the term's scope and which give it its uncomfortable moral coloring.

Consider that staple of "America's Funniest Videos", the groin shot:

And the trampoline accident:

It doesn't seem that these are instances of justice delivered. We are laughing at -- seemingly enjoying -- pain, indifferent to whether it is deserved. If we stipulate that schadenfreude requires desert, we would need a different name for this interesting phenomenon. But rather than do that, let's acknowledge that there are at least two different types of schadenfreude: just-deserts schadenfreude, when the bad guy finally gets what's coming to him, and the comic schadenfreude of America's Funniest Videos and FailBlog. Comic schadenfreude seems to require not justice but rather a kind of absurdity involving pain as an integral component. And unlike the schadenfreude of just deserts, where pleasure can sometimes be found when inexpiable wrongdoing is met with severe pain, comic schadenfreude might require that the injury (or pain) not be too serious.

Still another species of the genus seems to involve neither comic absurdity nor justice: the schadenfreude of grace.

Here's Lucretius:

Sweet it is, when on the great sea the winds are buffeting the waters, to gaze from the land on another's great struggles; not because it is pleasure or joy that any one should be distressed, but because it is sweet to perceive from what misfortune you yourself are free. Sweet is it too, to behold great contests of war in full array over the plains, when you have no part in the danger (On the Nature of Things, Book II.1ff., Bailey trans.).
And Hobbes:
from what passion proceedeth it, that men take pleasure to behold from the shore the danger of them that are at sea in a tempest, or in fight, or from a safe castle to behold two armies charge one another in the field? It is certainly in the whole sum joy, else men would never flock to such a spectacle. Nevertheless there is in it both joy and grief. For as there is a novelty and remembrance of own security present, which is delight; so is there also pity, which is grief. But the delight is so far predominant, that men usually are content in such a case to be spectators of the misery of their friends (Human Nature, IX.19).

Evidently, people throughout the ages have found great pleasure standing atop the bluff in a storm, watching sailors below die on the rocks. Lucretius and Hobbes suggest, plausibly we think, that for many viewers an important part of the pleasure derives from how salient another’s suffering makes your own safety by comparison. Similarly, perhaps, reading a history of war and genocide can put into perspective one's own complaints about the erroneous telephone bill and the journal rejections.

Indeed, the very fact that the suffering of the others is undeserved lends the schadenfreude of grace its particular bittersweet flavor. If the sailors or soldiers were fools or villains then it's maybe just harsh justice to see them die from their bad choices, and we have something closer to the schadenfreude of just deserts; but if they did nothing wrong or foolish and it could just as easily have been you, then it's both more a shame for them (the bitter) and also more vividly pleasing how lucky you yourself are (the sweet): There but for undeserved grace go I.

The schadenfreude of just deserts, comic schadenfreude, and the schadenfreude of grace do not exhaust the list of schadenfreudes, we think. There are at least two more: the schadenfreude of envy, and pathological forms of erotic schadenfreude (not to be confused with consensual play-acting sadism). We also suspect that these different types of schadenfreude can sometimes merge into a single complex emotion.

Probably no unified analysis of the psychological mechanisms suffices to cover all types, and they differ substantially in what they reveal about the moral character of the person who is moved by them. Comeuppance is only the start of it.

---------------------------------------------------

* Though comeuppance seems to be Portmann's take-home message, his overall view is nuanced and anticipates some of the points of this post.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "moral psychology"
Date: Wednesday, 04 Jun 2014 10:21
My theory of the jerk is out in Aeon.

From the intro:

Picture the world through the eyes of the jerk. The line of people in the post office is a mass of unimportant fools; it’s a felt injustice that you must wait while they bumble with their requests. The flight attendant is not a potentially interesting person with her own cares and struggles but instead the most available face of a corporation that stupidly insists you shut your phone. Custodians and secretaries are lazy complainers who rightly get the scut work. The person who disagrees with you at the staff meeting is a dunce* to be shot down. Entering a subway is an exercise in nudging past the dumb schmoes.

We need a theory of jerks. We need such a theory because, first, it can help us achieve a calm, clinical understanding when confronting such a creature in the wild. Imagine the nature-documentary voice-over: ‘Here we see the jerk in his natural environment. Notice how he subtly adjusts his dominance display to the Italian restaurant situation…’ And second – well, I don’t want to say what the second reason is quite yet.

[continue]

----------------------------------------------------

* Instead of "dunce" the original piece uses "idiot". In light of Shelley Tremain's remarks to me about the history of that word, I'm wondering whether I should have avoided it. In my mind, it is exactly the sort of word the jerk is prone to use, and how he is prone to think of people, so there's a conflict here between my desire to capture the worldview of the jerk with phenomenological accuracy and my newly heightened sensitivity to the historical associations of that particular word.

[illustration by Paul Blow]

Author: "Eric Schwitzgebel (noreply@blogger.com)"
Date: Tuesday, 03 Jun 2014 11:07
It might help convince the audience of my seriousness if I wear it at my next public lecture.
Order here.
Author: "Eric Schwitzgebel (noreply@blogger.com)"
Date: Tuesday, 03 Jun 2014 10:55
The stats aren't totally straightforward, since I switched counters a few years ago, but it looks like The Splintered Mind has had more than two million pageviews since I launched it in 2006.

That's about how many views this video gets in 16 hours:

Which is, you know, actually way better than I would have guessed.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "announcements"
Date: Sunday, 01 Jun 2014 11:20
Here. A sample:
Personally, I give Giulio enormous credit for having the intellectual courage to follow his theory wherever it leads. When the critics point out, “if your theory were true, then the Moon would be made of peanut butter,” he doesn’t try to wiggle out of the prediction, but proudly replies, “yes, chunky peanut butter—and you forgot to add that the Earth is made of Nutella!”
(And I thought I played rough.)
Author: "Eric Schwitzgebel (noreply@blogger.com)"
Date: Friday, 30 May 2014 10:51
Must an infinitely continued life inevitably become boring? Bernard Williams famously answers yes; John Fischer no. Fischer's case is perhaps even more easily made than he suggests -- but its very ease opens up new issues.

Consider Neil Gaiman's story "The Goldfish Pool and Other Stories" (yes, that's the name of one story):

He nodded and grinned. "Ornamental carp. Brought here all the way from China."

We watched them swim around the little pool.

"I wonder if they get bored."

He shook his head. "My grandson, he's an ichthyologist, you know what that is?"

"Studies fishes."

"Uh-huh. He says they only got a memory that's like thirty seconds long. So they swim around the pool, it's always a surprise to them, going 'I've never been here before.' They meet another fish they known for a hundred years, they say, 'Who are you, stranger?'"

The problem of immortal boredom solved: Just have a bad memory! Then even seemingly un-repeatable pleasures (meeting someone for the first time) become repeatable.

Now you might say, wait, when I was thinking about immortality I wasn't thinking about forgetting everything and doing it again like a stupid goldfish. To this I answer: Weren't you? If you were imagining that you were continuing life as a human, you were imagining, presumably, that you had a finite brain capacity. And there's only so much memory you can fit into eighty billion neurons. So of course you're going to forget things, at some point almost everything, and things sufficiently well forgotten could presumably be experienced as fresh again. This is always what is going on with us anyway, to some extent. And this forgetting needn't involve any loss of personal identity, it seems: one's personality and some core memories could always stay the same. Immortality as an angel or transhuman super-intellect raises the same issues, as long as one's memory is finite.

A new question arises perhaps more vividly now: Is repeating and forgetting the same types of experiences over and over again, infinitely, preferable to doing them once, or twenty times, or a googolplex times? The answer to that question isn't, I think, entirely clear (and maybe even faces metaphysical problems concerning the identity of indiscernibles). My guess, though, is that if you stopped one of the goldfish and said, "Do you want to keep going?", the fish would say, "Yes, this is totally cool, I wonder what's around the corner? Oh, hi, glad to meet you!" Maybe that's a consideration in favor.

Alternatively, you might imagine an infinite memory. But how would that work? What would that be like? Would one become overwhelmed like Funes the Memorious? Would there be a workable search algorithm? Would there be some tagging system to distinguish each memory from infinitely many qualitatively identical other memories? Or maybe you were imagining retaining your humanity but somehow existing non-temporally? I find that even harder to conceive. To evaluate such possibilities, we need a better sense of the cognitive architecture of the immortal mind.

Supposing goldfish-pool immortality would be desirable, would it be better to have, as it were, a large pool -- a wide diversity of experiences before forgetting -- or a small, more selective pool, perhaps one peak experience, repeated infinitely? Would it be better to have small, unremembered variations each time, or would detail-by-detail qualitative identity be just as good? I've started to lose my grip on what might ground such judgments. However, it's possible that technology will someday make this a matter of practical urgency. Suppose it turns out, someday, that people can "upload" into artificial environments in which our longevity vastly outruns our memorial capacity. What should be the size and shape of our pool?

[image source]

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "cosmology, crazyism, personal identity, ..."
Ergo
Date: Tuesday, 27 May 2014 17:15
Ergo, a new online philosophy journal, has just released its first issue. Open access, triple anonymous, fast turnaround times (hopefully continuing into the future), transparent process, aiming at a balanced representation of all the subdisciplines. What's not to like?

I hope and expect that this journal will soon count among the most prestigious venues in philosophy.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "announcements"
Date: Friday, 23 May 2014 13:18
Why should a philosopher care about the nature of belief? Back in the 1980s and 1990s, when I was a student, there were two main animating reasons in the Anglophone philosophical community. Recently, though, the literature has changed.

One of the old-school reasons was to articulate the materialistic picture of the world. The late 1950s through the early 1990s -- roughly from Smart's "Sensations and Brain Processes" through Dennett's Consciousness Explained -- was (I now think) the golden age of materialism in the philosophy of mind, when the main alternatives and implications were being seriously explored by the philosophical community for the first time. We needed to know how belief fit into the materialist world-picture. How could a purely material being, a mere machine fundamentally constituted of tiny bits of physical stuff bumping against each other, have mental states about the world, with real representational or intentional content? The functionalism and evolutionary representationalism of Putnam, Armstrong, Dennett, Millikan, Fodor, and Dretske seemed to give an answer.

The other, related, motivating reason was the theory of reference in philosophy of language. How is it possible to believe that Superman is strong but that Clark Kent is not strong, if Superman really is Clark Kent (Frege's Puzzle)? And does the reference of a thought or utterance depend only on what is in the head (internalism), or could two molecule-for-molecule identical people have different thought contents simply because they're in different environments (externalism)? Putnam's Twin Earth was amazingly central to the field. (In 2000, Joe Cruz and I sketched out a "map of the analytic philosopher's brain". Evidence seemed to suggest a major lobe dedicated entirely to Twin Earth, but only a small nodule for the meaning of life.)

These inquiries had two things in common: their grand metaphysical character -- defending materialism, discovering the fundamental nature of thought and language -- and their armchair methodology. Some of the contributors such as Fodor and Dennett were very empirically engaged in general, but when it came to their central claims about belief, they seemed to be mainly driven by thought experiments and a metaphysical world vision.

Literature on the nature of belief has been re-energized in the 2010s, I think, by issues less grand but more practical -- especially the issue of implicit bias, but more generally the question of how to think about cases of seeming mismatch between explicit thought or speech and spontaneous behavior. Tamar Gendler's treatment of (implicit) alief vs. (explicit) belief, especially, has spawned a whole subliterature of its own, which is intimately connected with the recent psychological literature on dual process theory or "thinking fast and slow". Does the person who says, in all attempted sincerity, "women are just as smart as men", but who (as anyone else could see) consistently treats women as stupid, believe what he's saying? Delusions present seemingly similar cases, such as the Cotard delusion which involves seemingly sincerely claiming that one is dead. What are we to make of that? There's a suddenly burgeoning philosophical subliterature on delusion, much of it responding to Lisa Bortolotti's recent APA prizewinning book on the topic.

By most standards, the issues are still grand and impractical and the approach armchairish -- this is philosophy after all! -- but I believe their metaphilosophical spirit is very different. What animates Gendler, Bortolotti, and the others, I think, is a hard look at particularly puzzling empirical issues, where it seems that a good philosophical theory of the nature of the phenomena might help clear things up, and then a pragmatic approach to evaluating the results. Given the empirical phenomena, are our interests best served by theorizing belief in this way, or are they better served by theorizing in this other way?

This is music to my ears, both metaphilosophically and regarding the positive theory of belief. Metaphilosophically, because it is exactly my own approach: I entered the literature on belief as a philosopher of science interested in puzzles in developmental psychology that I thought could be dissolved through application of a good theory of belief. And at the level of the positive theory of belief, because my own theory of belief is designed exactly to shine as a means of organizing our thoughts about such splintering (The Splintered Mind!), seemingly messed-up cases.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "attitudes, belief, metaphilosophy"
Date: Friday, 16 May 2014 11:15
I've been thinking recently about group organisms and group minds. And I've been thinking, too, about the Fermi Paradox -- about why we haven't yet discovered alien civilizations, given the vast number of star systems that could presumably host them. Here's a thought on how these two ideas might meet.

Species that contain relatively few member organisms, in a small habitat, are much more vulnerable to extinction than are species that contain many member organisms distributed widely. A single shock can easily wipe them out. So my thought is this: If technological civilizations tend to merge into a single planetwide superorganism, then they become essentially species constituted by a single organism in one small habitat (small relative to the size of the organism) -- and thus highly vulnerable to extinction.

This is, of course, a version of the self-destruction solution to Fermi's paradox: Technological civilizations might frequently arise in the galaxy, but they always destroy themselves quickly, so none happen to be detectable right now. Self-destruction answers to Fermi's paradox tend to focus on the likelihood of an immensely destructive war (e.g., nuclear or biological), environmental catastrophe, or the accidental release of destructive technology (e.g., nanobots). My hypothesis is compatible with all of those, but it's also, I think, a bit different: A single superorganism might die simply of disease (e.g., a self-replicating flaw) or malnutrition (e.g., a risky bet about next year's harvest) or suicide.

For this "solution" -- or really, at best I think, partial solution -- to work, at least three things would have to be true:

(1.) Technological civilizations would have to (almost) inevitably merge into a single superorganism. I think this is at least somewhat plausible. As technological capacities develop, societies grow more intricately dependent on the functioning of all their parts. Few Californians could make it, now, as subsistence farmers. Our lives are entirely dependent upon a well-functioning system of mass agriculture and food delivery. Maybe this doesn't make California, or the United States, or the world as a whole, a full-on superorganism yet (though the case could be made). But if an organism is a tightly integrated system each of whose parts (a.) contributes in a structured way to the well-being of the system as a whole and (b.) cannot effectively survive or reproduce outside the organismic context, then it's easy to see how increasing technology might lead a civilization ever more in that direction -- as the individual parts (individual human beings or their alien equivalents) gain efficiency through increasing specialization and increased reliance upon the specializations of others. Also, if we imagine competition among nation-level societies, the most-integrated, most-organismic societies might tend to outcompete the others and take over the planet.

(2.) The collapse of the superorganism would have to result in the near-permanent collapse of technological capacity. The individual human beings or aliens would have to go entirely extinct, or at least be so technologically reduced that the overwhelming majority of the planet's history is technologically primitive. One way this might go -- though not the only way -- is for something like a Maynard Smith & Szathmary major transition to occur. Just as individual cells invested their reproductive success into a germline when they merged into multicellular organisms (so that the only way for a human liver cell to continue into the next generation is for it to participate in the reproductive success of the human being as a whole), so also human reproduction might become germline-dependent at the superorganism level. Maybe our descendants will be generated from government-controlled genetic templates rather than in what we now think of as the normal way. If these descendants are individually sterile, either because that's more efficient (and thus either consciously chosen by the society or evolutionarily selected for) or because the powers-that-be want to keep tight control on reproduction, then there will be only a limited number of germlines, and the superorganism will be more susceptible to shocks to the germline.

(3.) The habitat would have to be small relative to the superorganism, with the result that there would be only one or a few superorganisms. For example, the superorganism and the habitat might both be planet sized. Or there might be a few nation-sized superorganisms on one planet or across several planets -- but not millions of them distributed across multiple star systems. In other words, space colonization would have to be relatively slow compared to the life expectancy of the merged superorganisms. Again, this seems at least somewhat plausible.

To repeat: I don't think this could serve as a full solution to the Fermi paradox. If high-tech civilizations evolve easily and abundantly and visibly, we probably shouldn't expect all of them to collapse swiftly for these reasons. But perhaps it can combine with some other approaches, toward a multi-pronged solution.

It's also something to worry about, in its own right, if you're concerned about existential risks to humanity.

[image source]

Author: "Eric Schwitzgebel (noreply@blogger.com)"
Date: Monday, 12 May 2014 13:10
My latest in crazy, disjunctive metaphysics:

Abstract:

A 1% skeptic is someone who has about a 99% credence in non-skeptical realism and about a 1% credence in the disjunction of all radically skeptical scenarios combined. The first half of this essay defends the epistemic rationality of 1% skepticism, appealing to dream skepticism, simulation skepticism, cosmological skepticism, and wildcard skepticism. The second half of the essay explores the practical behavioral consequences of 1% skepticism, arguing that 1% skepticism need not be behaviorally inert.
Full version here.

(What I mean by crazy metaphysics.) (What I mean by disjunctive metaphysics.)

As always, comments/reactions/discussion welcome, either as comments on this post or by direct email to me.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "metaphilosophy, metaphysics, skepticism"
Date: Friday, 09 May 2014 10:13
Ruth Millikan's Dewey Lecture has been getting a lot of favorable attention in the blogosphere recently. Rightly so. But I want to challenge one part of it that most philosophers seem to like. I'll quote almost all of the relevant two paragraphs, since when I try to trim I think I do a disservice to her argument.

Millikan writes:

Philosophy is not a field in which piles of small findings later help to secure fundamental advances. Little philosophical puzzles do not usually need to be solved but rather dissolved by examining the wider framework within which they occur. This often involves determinedly seeking out and exposing deeply entrenched underlying assumptions, working out what their diverse and far-ranging effects have been, constructing and evaluating alternatives, trying to foresee distant implications. It often involves trying to view quite large areas in new ways, ways that may cut across usual distinctions both within philosophy and outside and that may require a broad knowledge across disciplines. Add that to acquire the flexibility of mind and the feel for the possibility of fundamental change in outlook that may be needed, a serious immersion for a considerable time in the history of philosophy is a near necessity. This kind of work takes a great deal of patience and it takes time. Nor can it be done in small pieces, first this little puzzle then that. Kant published the Critique of Pure Reason at age fifty-seven and the other critiques came later. Closer to our time, Wilfrid Sellars published his first paper at thirty-five, having lived and worked with philosophy all his life up to then. I have never tried to research the matter but I have no reason to think these cases unique.... Further, because a serious understanding of the historical tradition is both essential and quite difficult to acquire by oneself, helping to pass on this tradition with care and respect should always be the first obligation of a professional philosopher. Given all this, it has always struck me as a no-brainer that forcing early and continuous publication in philosophy is, simply, genocidal. Forcing publication at all is not necessarily good.

In philosophy there are no hard data. And there are no proofs. Both in the writing and in the reviewing, deep intellectual honesty and integrity are the only checks on quality. This cannot be hurried. Authors who discover their errors must be free sometimes just to start over. They need time to be sure that their use of sources is accurate. Reviewers need time to digest and to check sources themselves when not already familiar with them, nor should they feel under pressure to pass on essays out of sympathy for the impossible position of young people seeking jobs or tenure. Unread journals should not be proliferating to accommodate, mainly, the perceived needs of administrators to keep their institutions competitive. What we philosophers are after is not something one needs to compete for, nor will more philosophical publications result in more jobs for philosophers. Necessarily, carrots and sticks produce cheapened philosophy (p. 10-11).

Related concerns about the "hyperprofessionalization" of philosophy were expressed by Rebecca Kukla in a recent blog post at Leiter Reports and endorsed in many of the comments on that post.

Much of what Millikan says resonates with philosophical readers for good reason. But I'm not sure that the discipline is harmed by expecting philosophy professors at research universities to publish at the rates typically required for tenure.

Imagine a first-year graduate student complaining that submitting papers for evaluation by her professors interferes with her ability to construct radical, paradigm-overturning ideas. I would respond that putting one's words to the page for evaluation by others is philosophizing. And even if not all your ideas are ready for critical scrutiny, it's a problem if none of them are. Philosophical ideas are generally developed by shaping them into words and arguments, either orally or in writing, and exposing those words to critique.

Of course assistant professors are not graduate students and journal referees are not professors grading seminar papers, but it's not clear to me that it harms philosophy to insist that research-oriented philosophy professors shape their ideas into articles or books that experts in their area judge to be of respectable quality. This seems to me no different from simply insisting that they do what their peers and elders judge to be good philosophy. (Maybe one could do purely oral philosophy? Or purely blogosphere philosophy? Sure, if it's good enough! But keep a record of it.)

It would be bad to force people to publish at a rate that substantially impairs quality. But I don't think that a good, substantial article or two per year, on average, is too much to ask of professors at universities with substantial research expectations and moderate teaching loads. It leaves sufficient time for creative thought, for delving into the history, for considering revolutionary new ideas that aren't yet ready for print. Often, working on such articles is the very occasion for such thought and delving and potential revolutionizing, even if not all of that work is manifest in the article itself. And since we are human, it often helps to have carrots and sticks, as long as they are not abusively or narrow-mindedly employed.

(Caveat: I'm not arguing that these expectations should be combined with the up-or-out tenure system of most U.S. research universities, which is arguably inhumane. May I save that issue for another occasion?)

Is the field producing less "great" philosophy because people are publishing too early? That's hard to judge without historical distance, but I don't see why we should think so. For one thing, it's not clear that we're now producing great philosophy at a rate any different from what we would otherwise expect. Even if we are producing less great philosophy now than we did in the latter half of the 20th century, possibly the latter half of the 20th century was a golden era for philosophy, in which case we ought to expect some regression toward the historical mean. And even if we're in more serious decline than that, I doubt the cause is that we expect philosophy professors to publish their work. Looking both at the historical record and cross-culturally at non-Anglophone countries, I suspect that advancement based on publication record tends to produce more great philosophy overall than do cozier arrangements of advancement based on... what? Whether you seem smart?

What I elided in the first quoted paragraph from Millikan was a parenthetical citation of two of my blog posts on the ages at which philosophers do their best work. In those posts I present evidence that philosophers do their best work at a broad range of ages (mostly late 20s through early 60s) -- a broader range of ages, apparently, than do scientists. This partly supports Millikan's point that philosophers often peak late; but it also partly undercuts her point. Berkeley, Ayer, Hume, Moore, Marx, Russell, and Ramsey all did their most-discussed work when they were in their 20s. Although Kant and Sellars were not highly unusual, they were also not typical. Great philosophy is done across the life span, and revolutionary ideas come from the newcomers and the young as often as they come from those steeped in decades of deep study.

Millikan raises good points. There's much that I agree with in her reservations about the professionalization of our field. But let me add the thoughts above as a bit of a counterweight. I don't think our field's publication expectations have begun to approach genocidal levels.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "professional issues in philosophy"
Date: Tuesday, 29 Apr 2014 10:38
When I was in Berlin in 2010, I spent some time in the Humboldt University library, looking through philosophy journals from the Nazi era, in connection with my interest in the extent to which German philosophers either embraced or resisted Nazism. (Summary version: about 1/3 embraced Nazism, about 1/3 rejected Nazism, and about 1/3 ducked their heads and kept quiet.)

The journals differed in their degree of Nazification. Perhaps the most Nazified was Kant-Studien, which at the time was one of the leading German-language journals of general philosophy (not just a journal for Kant scholarship). The old issues of Kant-Studien aren't available online, but I took some photos. Here, Sascha Fink and I have translated the preface to Kant-Studien Vol. 40 (1935), p. 3-4 (emphasis added):

***********************************

Kant-Studien, now under its new leadership that begins with this first issue of the 40th volume, sets itself a new task: to bring the new will, in which the deeper essence of the German life and the German mind is powerfully realized, to a breakthrough in the fundamental questions as well as the individual questions of philosophy and science.

Guiding us is the conviction that the German Revolution is a unified metaphysical act of German life, which expresses itself in all areas of German existence, and which will therefore – with irresistible necessity – put philosophy and science under its spell.

But is this not – as is so often said – to snatch away the autonomy of philosophy and science and give it over to a law alien to them?

Against all such questions and concerns, we offer the insight that moves our innermost being: That the reality of our life, that shapes itself and will shape itself, is deeper, more fundamental, and more true than that of our modern era as a whole – that philosophy and science, which compete for it, will in a radical sense become liberated to their own essence, to their own truth. Precisely for the sake of truth, the struggle with modernity – maybe with the basic norms and basic forms of the time in which we live – is necessary. It is – in a sense that is alien and outrageous to modern thinking – to recapture the form in which the untrue and fundamentally destroyed life can win back its innermost truth – its rescue and salvation. This connection of the German life to fundamental forces and to the original truth of Being and its order – as has never been attempted in the same depth in our entire history – is what we think of when we hear that word of destiny: a new Reich.

If on the basis of German life German philosophy struggles for this truly Platonic unity of truth with historical-political life, then it takes up a European duty. Because it poses the problem that each European people must solve, as a necessity of life, from its own individual powers and freedoms.

Again, one must – and now in a new and unexpected sense, in the spirit of Kant’s term, “bracket knowledge” [das Wissen aufzuheben]. Not for the sake of negation: but to gain space for a more fundamental form of philosophy and science, for the new form of spirit and life [für die neue Form ... des Lebens Raum zu gewinnen]. In this living and creative sense is Kant-Studien connected to the true spirit of Kantian philosophy.

So we call on the productive forces of German philosophy and science to collaborate in these new tasks. We also turn especially to foreign friends, confident that in this joint struggle with the fundamental questions of philosophy and science, concerning the truth of Being and life, we will gain not only a deeper understanding of each other, but also develop an awareness of our joint responsibility for the cultural community of peoples.

-- H. Heyse, Professor of Philosophy, University of Königsberg

***********************************

In the 1910s through 1930s, especially in Germany, philosophers tended to occupy the political right (including cheering on World War I and ostracizing Bertrand Russell for not doing so) -- deploying, as here, the tools of their discipline in the service of what we can now recognize as hideous views. Heidegger was by no means alone in doing so, nor the worst offender.

The political views of the mainstream 21st-century philosophical community are very different and, I'd like to think, much better grounded. It would be nice, though, if we had a more trustworthy method for distinguishing tissues of noxious rationalization from real philosophical insight.

***********************************

For a transcription of the original German, see the Underblog.

For a fuller historical discussion of the role of Kant-Studien in the Third Reich, see this article (in German).

If you zoom in on the title-page image above, you will see that it promises two pictures of Elisabeth Foerster-Nietzsche, Nietzsche's famously antisemitic sister. The volume does include two full-page photos of her (though one appears to be merely a close-up of the other), alongside a fawning obituary of the "wise, gracious" Elisabeth.

Author: "Eric Schwitzgebel (noreply@blogger.com)" Tags: "ethics professors"