Maja provides examples of two such introspection-reliant abilities: choosing the right amount of food to order at a restaurant (which relies on knowledge of how hungry you feel) and adjusting binoculars (which relies on knowing how blurry the target objects look).
Let's consider, then, Maja's binoculars. What, exactly, are you noticing when you adjust your binoculars?
Consider other cases of optical reflection and refraction. I see an oar half-submerged in water. In some sense, maybe, it "looks bent". Is this a judgment about my visual experience of the oar or instead a judgment about the optical structure of my environment? -- or a bit of both? How about when I view myself in the bathroom mirror? Normally during my morning routine, I seem to be reaching judgments primarily about my body and secondarily about the mirror, and hardly at all about my visual experience. (At least, not unless we accept some kind of introspective foundationalism on which all judgments about the outside world are grounded in prior judgments about experience.) Or suppose I'm just out of the shower and the mirror is foggy; in this case my primary judgment seems to concern neither my body nor my visual experience but rather the state of the mirror. Similarly, perhaps, if the mirror is warped -- or if I am looking through a warped window, or a fisheye lens, or using a maladjusted rearview mirror at night, or looking in a carnival mirror.
If I can reach such judgments directly, without antecedent judgments about my visual experience, perhaps I can do so analogously in the binoculars case? Or is there maybe instead some interaction between phenomenal judgments about experience and optical or objectual judgments, or some indeterminacy about what sort of judgment it really is? We can create, I'm inclined to think, a garden path between the normal bathroom mirror case (seemingly not an introspective judgment about visual experience) and the blurry binoculars case (seemingly an introspective judgment about visual experience), going in either direction, and thus into self-contradiction. (For related cases feeding related worries, see my essay in draft, The Problem of Known Illusion and the Resemblance of Experience to Reality.)
Another avenue of resistance to the binoculars case might be this. Suppose I'm looking at a picture on a rather dodgy computer monitor and it looks too blue, so I blame the monitor and change the settings. Arguably, I could have done this without introspection: I reach a normal, non-introspective visual judgment that the picture of the baby seal is tinted blue. But I know that baby seals are white. So I blame the monitor and adjust. Or maybe I have a real baby seal in a room with dubious, theatrical lighting. I reach a normal visual assessment of the seal as tinted blue, so I know the lighting must be off and I ask the lighting techs to adjust. Similarly perhaps, then, if I gaze at a real baby seal through blurry binoculars: I reach a normal, non-introspective judgment of the baby seal as blurry-edged. But I know that baby seals have sharp edges. So I blame the binoculars and adjust. Need this be introspective at all, need it concern visual experience at all?
In the same way, maybe, I see spears of light spiking out of streetlamps at night -- an illusion, some imperfection of my eyes or eyewear. When I know I am being betrayed by optics, am I necessarily introspecting, or might I just be correcting my normal non-introspective visual assessments?
This is a nest of issues I am struggling with, making less progress than I would like. Maybe Maja is right, then. But if so, it will take further work to show.
Her son followed but did not understand. After a while, he walked toward school.
That afternoon, the phone rang in her car. That afternoon, she received a parking ticket. That evening, her husband came and sat with her beneath the tree. He said some words that seemed like gentle pleading. He left, he came back, he fell asleep at her feet while she sat.
Dawn speared through the eucalyptus, painting patches on the perfect tree. The perfect tree had a thousand red elbows. The perfect tree offered the world its berries, its light, its air, its scent of apple, of dust, chocolate, rubber, marjoram, closet floors. Its leaves were a chaos on which it would be impossible to improve. She breathed the oxygen of its photosynthesis. She drew a finger across a branch, leaving an invisible trace of her skin’s oil. Her husband brought breakfast, cancelled her classes, defended her rights against the police. A friend drove her car away.
Other friends sat a while. They spoke to her, her husband spoke to her, some people spoke to her, she drank some water. The perfect tree formed subtly different shapes as air passed through it. The crumbling blacktop at the edge of the median was the ideal frame for it, the peeling paint of the fence, the thistle, the arms of other trees, the sky, the ants, together the perfect symbiosis. A person finally pulled her away. She resisted only through limpness. In the back of her husband’s car she dreamed the tree, in her bed for three days she dreamed the tree.
The house was finally empty but for her. She walked out the front door without closing it, she walked upon hot concrete under the sun’s radiant plasma, she walked along a swift boulevard where the cars sang harmonic waves, she walked onto the road with the tree. She arrived. She sat.
Nothing could be more important than this. It is the culmination of all things. It is the impossible beauty. It is the secret reason God made the world. Death spins circles, the Earth is a circle spinning circles around a bright circle spinning among a hundred billion bright circles, and this tree, though only she could know it, was the tangent they had touched for the perfect fading instant.
They danced three songs in this manner, then lay together upon the sand. Sarah touched the back of her neck with a twig and saw Abraham scratch his own neck. They made love in a new way.
Sarah gazed upon Abraham as he observed the sky. She called to Ishbak and Midian. Sarah and Abraham lay face down upon blankets while the sons of Keturah touched their backs, and Sarah learned to distinguish Abraham’s sensations from her own. She now had two backs, two bodies.
Through a harmony of retinas, Sarah came to see through Abraham’s eyes. At first, it was a faint tint upon her field of view – her own form, maybe, bent toward the fire pit, as Abraham watched her from the side, her figure like a wraith upon the fire that she more vividly saw. The wraith was jumpy, unpredictable; she could not fully guess Abraham’s saccades as his eyes gathered the scene. Over days, the visions livened and settled. Sarah did not know if she merely learned better to anticipate Abraham’s eye movements or if she instead also gained partial control.
Through a harmony of cochlear hair cells, Sarah came to hear through Abraham’s ears, including the deep-sea deafness on his left. Through a harmony of gustatory nerve fibers, Sarah tasted coconut through Abraham’s lips – more simple and sweet, somehow, than through her own. She began to feel his bodily needs.
One night, Sarah awoke from a dream, beneath the canopy of leaves and stars, and moved Abraham’s body rather than her own.
Through Abraham’s body, Sarah and Abraham together made love to Keturah. Together they split wood through Abraham and peeled mangoes through Sarah. Sarah gave little Esau a ride upon her back and remembered, as the child Abraham, having ridden upon Terah’s back while Meliah tossed stones into the waves. Abraham, leading a prayer, recalled the child Sarah watching her father kneel windward.
SarahAbraham could count the fish through Abraham’s eyes more swiftly than Abraham had been able to do alone. Sarah might count those on the left, Abraham those on the right, while Sarah’s body, elsewhere, paused in thought. SarahAbraham could bring their knowledge together in advising Isaac. SarahAbraham softened Abraham’s hard righteousness, steadied Sarah’s timid gaze.
Sarah could no longer give to Abraham; it was all already his. She could not surprise him. She kissed Abraham and was kissing only herself.
SarahAbraham walked four-legged to the falls and thought of Keturah. Keturah accepted new levels of intimacy, no longer merely concubine.
SarahAbraham waltzed with Keturah, Ishbak on the immutable piano, hand within hand within hand within hand within hand.
Sacks does not, I think, adequately distinguish two types of hallucination. I will call them doxastic and phenomenal. In a phenomenal hallucination of A, one has sensory experience as of A. In a doxastic hallucination, one thinks one has sensory experience as of A. The two can come apart.
Consider this description, from page 99 of Hallucinations (and attributed to Daniel Breslaw via David Ebin's book The Drug Experience).
The heavens above me, a night sky spangled with eyes of flame, dissolve into the most overpowering array of colors I have ever seen or imagined; many of the colors are entirely new -- areas of the spectrum which I seem to have hitherto overlooked. The colors do not stand still, but move and flow in every direction; my field of vision is a mosaic of unbelievable complexity. To reproduce an instant of it would involve years of labor, that is, if one were able to reproduce colors of equivalent brilliance and intensity.

Here are two ways in which you might come to believe the above about your experience: (1.) You might actually have visual experiences of the sort described, including of colors entirely new and previously unimagined and of a complexity that would require years of labor to describe. Or (2.) you might shortcut all that and simply arrive straightaway at the belief that you are undergoing or have undergone such an experience -- perhaps with the aid of some unusual visual experiences, but not really of the novelty and complexity described. If the former, you have phenomenally hallucinated wholly novel colors. If the latter, you have merely doxastically hallucinated them.
The difference seems important -- crucial even, if we are going to understand the boundaries of experience as revealed by hallucination. And yet phenomenal vs. merely doxastic hallucinations might be hard to distinguish on the basis of verbal report alone, and almost by definition subjects will be apt to confuse the two. I can recall no point in the book where Sacks displays sensitivity to this issue.
Once I was attuned to it, the issue nagged at me again and again in reading:
Time was immensely distended. The elevator descended, "passing a floor every hundred years" (p. 100).

The possibility of merely doxastic hallucination might arise especially acutely when subjects report highly unusual, almost inconceivable, experiences or incredible detail beyond normal perception and imagery; but of course the possibility is present in more mundane hallucination reports too.
Then my whole life flashed in my mind from birth to the present, with every detail that ever happened, every feeling and thought, visual and emotional was there in an instant (p. 102).
I have had musical hallucinations (when taking chloral hydrate as a sleeping aid) which were continuations of dream music into the waking state -- once with a Mozart quintet. My normal musical memory and imagery is not that strong -- I am quite incapable of hearing every instrument in a quintet, let alone an orchestra -- so the experience of hearing the Mozart, hearing every instrument, was a startling (and beautiful) one (p. 213).
(A fan of Dennett might suggest that there's no difference between the phenomenal and doxastic hallucinations; but I don't know what Dennett himself would say -- probably something more complex than that.)
The philosophers sent forth the great David K. Lewis in magician’s robes....
[See the remainder of this story here. Hint: It doesn't have a happy ending.]
Although at first it didn’t feel that way, I wonder if I have been harmed by philosophy. I gazed at the waterfall, thinking about Boltzmann brains – thinking, that is, about the possibility that I had no past, or a past radically unlike what I usually suppose it to be, instead having just then randomly congealed by freak chance from disorganized matter. On some ways of thinking about cosmology, there are many more randomly congealed brains, or randomly congealed brains-plus-local-pieces-of-environment, than there are intelligent beings who have arisen in what we think of as the normal way, from billions of years of evolution in a large, stable environment. If such a cosmology is true, then it might be much more likely that I have randomly congealed than that I have lived a full forty-five years in human form. The thought troubled me, but also the spark of doubt felt comfortable in a way. I am accustomed to skeptical roads.
Of course, most cosmologists flee from the Boltzmann brain hypothesis. If a cosmology implies that you are very likely a Boltzmann brain, that's normally taken to be a reductio ad absurdum of that cosmology. But as I sat there thinking, I wondered if such dismissals arose more from fear of skepticism than from sound reasoning. I am no expert cosmologist, with a view very likely to be true about the origin and nature of the universe or multiverse and thus about the number of Boltzmann brains vs. evolved consciousnesses in existence -- but neither are any professional cosmologists sufficiently expert to claim secure knowledge of these matters, so uncertain and changing is the field. As I gazed around Paradise Falls, the Boltzmann brain hypothesis started to seem impossible to assess. This seemed especially so to me given the limited tools at hand -- not even an internet connection! -- though I wondered whether having such tools would really help after all. Still, the world did not dissolve around me, as I suppose it must around most spontaneously congealed brains. So as I endured, I came to feel more justified in my preferred opinion that I am not a Boltzmann brain. However, I also had to admit the possibility that my seeming to have endured over the sixty seconds of contemplating these issues was itself the false memory of a just-congealed Boltzmann brain. My skepticism renewed itself, though somehow this second time only as a shadow, without the force of genuine doubt.
I considered the possibility that I was a computer program in a simulated environment. If consciousness can arise in programmed silicon chips, then presumably there’s something it’s like to be such a computerized consciousness. Maybe such computer consciousnesses sometimes seem to dwell in natural environments, fed with simulated visual inputs (for example of waterfalls), simulated tactile inputs (for example of sitting on a stone), and false memories (for example of having hiked to the waterfall that morning). If Nick Bostrom is right, there might be many more such simulated beings than naturally evolved human beings.
I considered Dan Dennett’s argument against skepticism: Throw a skeptic a coin, Dennett says, and “in a second or two of hefting, scratching, ringing, tasting, and just plain looking at how the sun glints on its surface, the skeptic will consume more information than a Cray supercomputer can organize in a year” (1991, p. 6). Our experience, he says, has an informational wealth that cannot realistically be achieved by computational imitation. In graduate school, I had found this argument tempting. But it seemed to me yesterday that my vision of the waterfall was not as high fidelity as that, and could easily be reproduced on a computer. I fingered the mud at my feet. The complexity of tactile sensation did not seem to me the sort of thing beyond the capacity of a computer artificially to supply, if we suppose a future of computers advanced enough to host consciousness. We are so eager to reject skepticism that we satisfy ourselves too quickly with weak arguments against it.
Now maybe John Searle is right, and no computer could ever host consciousness. Or maybe, though computer consciousness is possible, it is never actually achieved, or achieved only so rarely that the vast majority of conscious beings are organically evolved beings of the sort I usually consider myself to be. But I hardly felt sure of these possibilities.
The philosophers who most prominently acknowledge the possibility that they are simulated beings instantiated by computer programs don’t seem very worried by it. They don’t push it in skeptical directions. Nick Bostrom seems to think it likely that if we are in a simulation, it is a large stable one. David Chalmers emphasizes that if we are in a simulation scenario like that depicted in the movie The Matrix, skepticism needn’t follow. And maybe it is the case that the easiest and most common way to create an artificial consciousness is to evolve it up through a billion or a million years in a stable environment; and maybe the easiest, cheapest way to create seeming conversation partners is to give those seeming conversation partners real consciousness themselves, rather than making them Eliza-like shells of simple response patterns. But on the other hand, if I take the simulation possibility seriously, then I feel compelled to take seriously also the possibility that my memories are mostly false, that I am instantiated within a smallish environment of short duration, perhaps inside a child’s game. I am the citizen to be surprised when Godzilla comes through; I am the victim to be rescued by the child’s swashbuckling hero; I am the hero himself, born new and not yet apprised of my magic. Nor did I have, at that moment, a clever conversation partner to convince me of her existence. I might be Adam entirely alone.
Fred Dretske and Alvin Goldman say that as long as my beliefs have been reliably enough caused by my environment, by virtue of well-functioning perceptual and memory systems, then I know that there’s a real waterfall there, I know that I have hiked the two kilometers from my parents’ house. But this seems to be a merely conditional comfort. If my beliefs have been reliably enough caused…. But have they? And I was no longer sure I believed, in any case. What is it, to believe? I still would have bet on the existence of my parents’ house – what else could I do, since skepticism offers no advice? – but my feeling of doubtless confidence had evaporated. Had everything dissolved around me at that moment, though I would have been surprised, I would not have felt utter shock. I was not seamlessly sure that the world as I knew it existed beyond the ridge.
I turned to hike back, and as I began to mount the slope, I considered Descartes’s madman. In his Meditations on First Philosophy, Descartes seems to say that it would be madness seriously to consider the possibility that one is merely a madman, like those who believe they are kings when they are paupers or who believe their heads are made of glass. But why is it madness to consider this? Or maybe it is madness, but then, since I am now in fact considering it, should that count as evidence that I am mad? Am I a philosopher who works at U.C. Riverside, whom some readers take seriously, or am I indeed just a madman lost in weird skepticism, with merely confused and stupid thoughts? Somehow, this skepticism felt less pleasantly meditative than my earlier reflections.
I returned home. That afternoon, in philosophical conversation I told my father that I thought he did probably exist other than as a figment of my mind. It seemed the wrong thing to say. I wanted to jettison my remnants of skepticism and fully join the human community. I felt isolated and ridiculous. Fortunately, my wife then called me in for a round of living-room theater, and playing the fox to my daughter's gingerbread girl cured me of my mood.
I thought about writing up this confession of my thoughts. I thought about whether readers would relate to it or see me only as possessed for a day by foolish, laughable doubts. Sextus Empiricus was wrong; I have not found that skepticism leads to equanimity.
The truth is, there are actually two homunculi in there, they've been squabbling, and this is part of a divorce settlement.
[Revised April 23]
When he was four years old, my Oligarch wandered away from his caretakers to gaze into an oval fountain. At sixteen, he blushingly refused the kiss he had so desperately longed for. A week before his death, he made plans (which must now be postponed) to visit an old friend in Lak-Blilin. I, his mnemonist, have internalized all this. I remember it just as he does, see the same images, feel the same emotions as he does in remembering those things. I have adopted his attitudes, absorbed his personality. My whole life is arranged to know him as perfectly as one person can know another. My first twenty years I learned the required arts. Since then, I have concentrated on nothing but the Oligarch.
My Oligarch knows that to hide from me is to obliterate part of himself. He whispers to me his most shameful thoughts. I memorize the strain on his face as he defecates; I lay my hands on his tensing stomach. When my Oligarch forces himself on his friend’s daughter, I press against him in the dark. I feel the girl’s breasts as he does. I forget my sex and hallucinate his ejaculation.
At my fiftieth birthday, my Oligarch toasts me, raising and then drinking down his fine crystal cup of hemlock. As he dies, I study his face. I mimic his last breath. A newborn baby boy is brought and my second task begins.
By age three, the boy has absorbed enough of the Oligarch’s identity to know that he is the Oligarch now again, in a new body. A new apprentice mnemonist joins us now, waiting in the shadows. At age four, the Oligarch finally visits his friend in Lak-Blilin, apologizing for the long delay. He begins to berate his advisors as he always had, at first clumsily, in a young child’s vocal register. He comes to take the same political stands, comes to dispense the same advice. I am ever at his side helping in all this, the apprentice mnemonist behind me; his trust in us is instant and absolute. At age eight, the Oligarch understands enough to try to apologize to his friend’s daughter – though he also notices her hair again in the same way, so good am I.
My Oligarch boy does not intentionally memorize his old life. He recalls it with assistance. Just as I might suggest to you a memory image, wholly fake, of a certain view of the sea with ragged mountains and gulls, which you then later mistake for a real memory image from your own direct experience, so also are my suggestions adopted by the Oligarch, but uncritically and with absolutely rigorous sincerity on both sides. The most crucial memory images I paint and voice and verbally elaborate. Sometimes I brush my fingers or body against him to better convey the feel, or flex his limbs, or ride atop him, narrating. I give him again the oval fountain. I give him again the refused kiss.
A madman’s dream of being Napoleon is no continuation of Napoleon. But here there is no madness. My Oligarch’s memories have continuous properly-caused traces back to the original events, his whole psychology continued by a stable network of processes, as he well knows. His plans and passions, commitments and obligations, legal contracts, attitudes and resolutions, vengeances, thank-yous and regrets – all are continued without fail, if temporarily set aside through infancy as though through sleep.
The boy, now eleven, is only middling bold, though in previous form, my Oligarch had been among the boldest in the land. I renew my stories of bold heroes, remind him of his long habit of boldness, subtly condition and reinforce him. I push the boundaries of acceptable technique. Though I feel the dissonance sharply, the boy does not. He knows who he is. He feels he has only changed his mind.
My plan requires the truth of a psychological theory of personal identity, a "vehicle externalist" account of memory, and some radical social changes. But it requires no magic or computer technology, and arguably we could actually implement it.
Psychological theory of personal identity. Most philosophers think that personal identity over time is grounded by something psychological. Twenty-year-old you and forty-year-old you are (or will be) the same person because of some psychological linkage over time -- maybe continuity of memory, maybe some other sort of counterfactual-supporting causal connectedness between psychological states over time. Maybe traits, values, plans, and projects come into the picture, too. In practice, people don't have the right kind of psychological connectedness without having biological bodily continuity. But that, perhaps, is merely a contingent fact about us.
Vehicle externalism about memory. What is memory? If a madman thinks he is Napoleon remembering Waterloo, he does not remember Waterloo, even if by chance he happens upon exactly the same memory images as Napoleon himself had later in life. Memory requires, it seems, the right kind of causal connectedness to the original event. But need the relevant causal connectedness be entirely interior to the skull? Vehicle externalists about memory say no, there is nothing sacred about the brain-environment boundary. External objects can hold, or partly hold, our memories, if they are hooked up to us with the right kind of reliable causal chains. Consider Andy Clark's and David Chalmers's delightful short paper on Otto, whose ever-available notebook serves as part of his mind; or consider a science-fiction case in which part of one's memory is temporarily transferred onto a computer chip and then later recovered.
Implementation. Could one's temporary memory reservoir be another person? I don't see why not, on a vehicle externalist account. And could the memories -- and the values and projects and whatever else is essential to personal identity -- then be transferred into another human body, for example, over the course of a decade or two into the body of a newborn baby as she grows up? I don't see why not, if we accept at least somewhat liberal versions of all the premises so far, and if we assume the most excellent possible shaping of that child.
By formatting a new child with your memories, your personality, your values, your projects, your loves, your hopes and regrets, you could thus continue into a new body. Presumably, you could continue this process indefinitely, taking a new body every fifty years or so.
As I said, a madman's dream of being Napoleon is no continuation of Napoleon. But the situation would be very different from that. There would be no madness. The memories would have well-preserved causal traces back to the original events; the crucial functional role of memory, to save those traces, would be preserved; everything would be held steadily in place by the person or people implementing this plan on your behalf, as a stable network of correctly functioning cognitive processes. And the result would be not just something on paper or in a memory chip but a consciously experienced memory image, felt by its owner to be a real, authentic memory of the original event.
This could, it seems, be done with existing technology, using clever mnemonic and psychological techniques. One would need mnemonists who knew everything possible about you, who observed the same events and shared your same memories, and who were exceptionally skilled at preserving this information and transferring it to the child. The question then would be whether it would be true that the child, when she grew up, would really be you, with your authentic memories, instead of a mad Napoleon. And the answer to that question depends on whether certain theories of personal identity and memory are true. If the right theories are indeed true, then immortality -- or rather, longevity potentially in perpetuity -- would be possible for sufficiently wealthy and powerful people now, if they only chose to implement it.
I have written a story about this: The Mnemonists.
I'm dubious about the model and mechanism and curious about whether it will prove replicable by researchers with different theoretical perspectives. But still. How cool is that study?
Here's why: I'm working on a theory of jerks. This theory is aimed largely at the question of how you can know if you are, in fact, a jerk. (Do you know?) Toward this end, I've worked a bit on the phenomenology of being a jerk and on the "jerk-sucker ratio". Soon, I plan to propose a "jerk-sweetie spectrum". But before I get too deep into this, I'd appreciate some thoughts from people not much influenced by my theorizing. I want to check my theory against proposed cases. Also, I'd like to draw a "portrait of a jerk", and I need things to include in the portrait.
Favorite examples I will pull up into the body of this post as updates. (And I'll keep my ear out for examples via comments on this post as long as I actively maintain this blog, since comments filter into my email.) Also, readers who provide any examples that I incorporate in my portrait of a jerk will receive due name credit in the final published version of my planned paper on this topic.
But please: no names of individuals. And nothing that will clearly single out a particular individual. And if you sign your true name, please be careful to be sufficiently vague that you risk no reprisal from the perpetrator!
The anti-hero of my portrait will probably be an academic jerk, so academic examples are especially welcome. However, this jerk lives outside of academia too, and my theory of jerks is meant to apply broadly, so I need a good range of non-academic examples, too.
I've Googled "What a Jerk" as a source of examples to kick the thing off. Below are a few. No obligation to read them before diving in with your own.
I turned to see a tall bald man looking down at me as the train pulled in to the platform. I let two people in before me, and that's when I felt the push. As we turned toward the seats I felt another push on my back, and again looked at the man, who now released an annoyed huff of breath. What a jerk! I thought. Does he think that he's the only one who deserves a seat? Then I felt a poke on my shoulder, and in a loud angry voice the tall bald man said, "What are you looking at? You got a problem, buddy?"

From Sarah Cliff (2001):
My AA, Maureen, flubbed a meeting time -- scheduled over something else -- and he really lit into her. Not the end of the world -- she had made a mistake, and he had to rearrange an appointment -- but he could have gotten the point across more tactfully. And she is *my* AA. (And I am *his* boss, and he did it in front of me.)

From Richard Norquist (1961):
I know a college president who can be described only as a jerk. He is not an unintelligent man, nor unlearned, nor even unschooled in the social amenities. Yet he is a jerk cum laude, because of a fatal flaw in his nature -- he is totally incapable of looking into the mirror of his soul and shuddering at what he sees there. A jerk, then, is a man (or woman) who is utterly unable to see himself as he appears to others. He has no grace, he is tactless without meaning to be, he is a bore even to his best friends, he is an egotist without charm.

From Florian Meuck:
He is such an unlikeable character. You never invited him; he sat down on your sofa and hasn’t left since. He never stops talking, which is quite annoying. But it’s getting worse: he doesn’t like to talk about energetic, positive, uplifting stuff. No – it’s the opposite! He’s a total bummer! He cheats, he betrays, he deceives, he fakes, he misleads, he tricks, and he swindles. He is negative, sometimes even malicious. He’s a black hole! He promotes fear – not joy. He persuades you to think small – not big. He convinces you to incarcerate your potential – not to unlock it.

Update, 4:43 p.m.:
Good comments so far! I'm finding this helpful. Thanks! I'm going to start pulling up some favorites into the body of the post, but that doesn't mean the others aren't helpful and interesting too.
* At the gym a few weeks ago. A man there (working out) had probably 10 weights of various sizes strewn in a wide radius around him, blocking other people's potential work-out space. I asked him if the weights were his, and he said "no - the person before me left them here, and I DON'T PICK UP OTHER PEOPLE'S WEIGHTS." [from anon 02:55 pm]
* the professor who has hard deadlines for their students, but then doesn't respond or reply promptly themselves, or expects perfection in writing but then has a syllabus and other written materials full of typos. [from Theresa]
* Anyone who blames low-level folk for problems that are obviously originating many levels higher up (or to the side). For example, berating a clerk for the store's return policy, the stewardess for the airline's cell phone rules, the waiter for the steak's doneness, etc. [from Jesse, 01:50 pm]
* If I'm descending the stairs towards the eastbound subway platform and I hear an approaching train, then I'll generally speed up if I see that the train is eastbound and I'll slow down if it's the westbound train. If there's no one in front of me on the stairs but there are several people following me, they'll use my change of pace as a signal re. whether the approaching train is eastbound or westbound. No one agreed on this tendency or explicitly recommended it. It's just a behaviour that arose spontaneously and became standard. So, if, on seeing that the train is indeed eastbound I deviate from the norm and slow my pace, thereby leading others behind me to slow down and miss the train, I'd say I've engaged in jerkish behaviour [from praymont, Apr 17]
But how to know if you're a jerk? It's not obvious. Some jerks seem aware of their jerkitude, but most seem to lack self-knowledge. So can you rule out the possibility that you're one of those self-ignorant jerks? Maybe a general theory of jerks will help!
I'm inclined to think of the jerk as someone who fails to appropriately respect the individual perspectives of the people around him, treating them as tools or objects to be manipulated, or idiots to be dealt with, rather than as moral and epistemic peers with a variety of potentially valuable perspectives. The characteristic phenomenology of the jerk is "I'm important, and I'm surrounded by idiots!" However, the jerk needn't explicitly think that way, as long as his behavior and reactions fit the mold. Also, the jerk might regard other high-status people as important and regard people with manifestly superior knowledge as non-idiots.
To the jerk, the line of people in the post office is a mass of unimportant fools; it's a felt injustice that he must wait while they bumble around with their requests. To the jerk, the flight attendant is not an individual doing her best in a difficult job, but the most available face of the corporation he berates for trying to force him to hang up his phone. To the jerk, the people waiting to board the train are not a latticework of equals with interesting lives and valuable projects but rather stupid schmoes to be nudged and edged out and cut off. Students and employees are lazy complainers. Low-level staff are people who failed to achieve meaningful careers through their own incompetence who ought to take the scut work and clean up the messes. (If he is in a low-level position, it's just a rung on the way up or a result of crimes against him.)
Inconveniencing others tends not to register in the jerk's mind. Some academic examples drawn from some of my friends' reports: a professor who schedules his office hours at 7 pm Friday evenings to ensure that students won't come (and who then doesn't always show up himself); a TA who tried to reschedule his section times (after all the undergrads had already signed up and presumably arranged their own schedules accordingly) because they interfered with his napping schedule, and who then, when the staffperson refused to implement this change, met with the department chair to have the staffer reprimanded (fortunately, the chair would have none of it); the professor who harshly penalizes students for typos in their essays but whose syllabus is full of typos.
These examples suggest two derivative features of the jerk: a tendency to direct jerkish behavior mostly down the social hierarchy, and ignorance of how one is perceived by others. The first feature follows from the tendency to treat people as objects to be manipulated: manipulating those with power requires at least surface-level respect. Since jerkitude is most often displayed down the social ladder, people of high social status often have no idea who the jerks are. It's the secretaries, the students, the waitresses who know, not the CEO. The second feature follows from the jerk's limited perspective-taking: if one does not value others' perspectives, one won't be much inclined to climb into their minds to imagine how one is perceived by them.
In considering whether you yourself are a jerk, you might take comfort in the fact that you have never scheduled your office hours for Friday night or asked 70 people to rearrange their schedules for your nap. But it would be a mistake to take comfort so easily. There are many manifestations of jerkitude, and even hard-core jerks exhibit only a sample. The most sophisticated, self-delusional jerks also employ a clever trick: find one domain in which their behavior is exemplary and dwell upon it as proof of their rectitude. Often, too, the jerk emits an aura of self-serving moral indignation -- partly, perhaps, as an anticipated defense against the potential criticisms of others, and partly due to his failure to think about how others' seemingly immoral actions might be justified from their own point of view.
The opposite of the jerk is the sweetheart or the sweetie. The sweetie is vividly aware of the perspectives of others around him -- seeing them as individual people who merit concern as equals, whose desires and interests and opinions and goals warrant attention and respect. The sweetie offers his place in line to the hurried shopper, spends extra time helping the student in need, calls up an acquaintance with an embarrassed apology after having been unintentionally rude.
Being reluctant to think of other people as jerks is one indicator of being a sweetie: The sweetie charitably sees things from the jerk's point of view! In contrast, the jerk will err toward seeing others as jerks.
We are all of us, no doubt, part jerk and part sweetie. The perfect jerk is a cardboard fiction. We occupy different points in the middle of the jerk-sweetie spectrum, and different contexts will call out the jerk and the sweetie in different people. No way do I think there's going to be a clean sorting.
------------------------------------------------------- I'm accumulating examples of jerkish behavior here. Please add your own! I'm interested both in cases that conform to the theory above and in cases that don't seem to.
With a few mouse clicks, I give her a mate -- a man who has woken on a nearby part of the island. The two meet. I have set the island for abundance and comfort: no predators, no extreme temperatures, a ready supply of seeming fruit that will meet all their biological (apparently biological) needs. The man and the woman talk -- Adam and Eve, their default names. They seem to remember no past, but they find themselves with island-appropriate skills and knowledge. They make plans to explore the island, which I can arbitrarily enlarge and populate.
Since Adam and Eve really are, by design, rational and conscious and capable of the full human range of feeling, the decision I just made to install them on my computer was as morally important as was my decision fifteen years ago to have children. Wasn't it? And arguably my moral obligations to Adam and Eve are no less important than my moral obligations to my children. It would be cruel -- not just pretend-cruel, like when I release Godzilla in SimCity or let a tamagotchi starve -- but really, genuinely cruel, were I to make them suffer. Their environment might not seem real to me, but their pains and pleasures are as real as my own. I should want them happy. I should seek, maybe, to maximize their happiness. Deleting their files would be murder.
They want children. They want the stimulation of social life. My computer has lots of spare capacity. Why not give them all that? I could create an archipelago of 100,000 happy people. If it's good to bring two happy children into the world, isn't it 50,000 times better to bring 100,000 happy island citizens into the world -- especially if they are no particular drain upon the world's (the "real world's") resources? It seems that bringing my Archipelago to life is by far the most significant thing I will ever do -- a momentous moral accomplishment, if also, in a way, a rather mundane and easy accomplishment. Click, click, click, an hour and it's done. A hundred thousand lives, brimming with joy and fulfillment, in a fist-sized pod! The coconuts might not be real (or are they? -- what is a "coconut", to them?), but their conversations and plans and loves have authentic Socratic depth.
By disposition, my people are friendly. There are no wars here. They will reproduce to the limit of my computer's resources, then they will find themselves infertile -- which they experience as somewhat frustrating but only one small disappointment in their enviably excellent lives.
If I was willing to spend thousands on fertility treatments to bring one child into the world, shouldn't I also be willing to spend thousands to bring a hundred thousand more Archipelagists (as I now call them) into the world? I buy a new computer and connect it to my old one. My archipelago is doubled. What a wealth of happiness and fulfillment I have just enabled! Shouldn't I do even more, then? I have tens of thousands of dollars saved up in my children's college funds. Surely a million lives brimming with joy and fulfillment are worth more than my two children's college education? I spend the money.
I devote my entire existence to maximizing the happiness, the fulfillment, the moral good character, and the triumphant achievements of as many of these people as I can make. This is no pretense. This is, for them, reality, and I treat it as earnestly as they do. I become a public speaker. I argue that there is nothing more important that Earthly society could do than to bring into existence a superabundance of maximally excellent Archipelagists. And as a society, we could easily create trillions of them -- trillions of trillions if we truly dedicated our energies to it -- many more Archipelagists than ordinary Earthlings.
Could there be any greater achievement? In comparison, the moon shot was nothing. The plays of Shakespeare, nothing. The Archipelagists might have a hundred trillion Shakespeares, if we do it right.
We face decisions: How much Earthling suffering is worth trading off for Archipelagist suffering? (One to one?) Is it better to give Archipelagists constant feelings of absolutely maximal bliss, even if doing so means reducing their intelligence and behavior to cow-like levels, or is it better to give them a broader range of emotions and behaviors? Should the Archipelagists know conflict, deprivation, and suffering or always only joy, abundance, and harmony? Should there be death and replacement or perpetual life as long as computer resources exist to sustain it? Is it better to build the Archipelagists so that they always by nature choose the moral good, or should they be morally more complex? Are there aesthetic values we should aim to achieve in their world and not just morality-and-happiness maximizing values? Should we let them know that they are "merely" sims? Should we allow them to rise to superintelligence, if that becomes possible? And if so, what should our subsequent relationship with them be? Might we ourselves be Archipelagists, without knowing it, in some morally dubious god's vision of a world it would be cool to create?
A virus invades my computer. It's a brutal one. I should have known; I should have protected my computer better with so much depending on it. I fight the virus with passion and urgency. I must spend the last of my money, the money I had set aside for my kidney treatments. I thus die to save the lives of my Archipelagists. You will, I know, carry on my work.
The Philosophers' Carnival, as you probably know, posts links to selected posts from around the blogosphere, chosen and hosted by a different blogger every month. Since philosophers are just a bunch of silly children in grown-up bodies, I use a playground theme.
The Metaphysical Whirligig: All the kids on the playground know who Thomas Nagel is. He's the one riding the Whirligig saying he has no idea what it's like to be a bat! Recently, he's been saying something about evolution and teleology that sounds suspiciously anti-Darwinian. But maybe most of us are too busy with our own toys to read it? Peter at Conscious Entities has a lucid and insightful review (part one and part two). Meanwhile, Michael McKenna at Flickers of Freedom is telling us that "free will" is just a term of art and so we can safely ignore, for example, what those experimental philosophy kids are doing, polling the other kids in the sandbox. Whoa, I'm getting dizzy!
The Philosophy of Mind Sandpit: Some of the kids here are paralyzed on one side of their body, and they don't even know it. How sad! They grab their toys only from one side and the toys tumble out of their hands. Glenn Carruthers at Brains muses about what these anosognosics' lack of self-knowledge really amounts to. I like the nuance of his description, compared with the black-or-white portrayals of anosognosias some of the philosophy kids offer.
The Curving Tunnel of Philosophy of Language: Wolfgang Schwarz is looking down the tunnel at a single red dot, viewed through two tubes, one for each eye -- but he doesn't know it's only one dot! What he really sees, Wo says, is just another Frege case, nothing requiring centered worlds, contra David Chalmers. In the comments, Chalmers responds. Meanwhile Rebecca Kukla and Cassie Herbert are listening to what the philosophy kids are whispering to each other on the side in "peripheral" forums, like blogs. Why are the boys getting all the attention?
The Epistemic Slide: Some children stand at the top of the slide, afraid to go down and ruining the fun for everyone. Me for instance! I remain unconvinced that Hans Reichenbach or Elliott Sober have satisfactorily proven that the external world exists.
The Moral Teeter-Totter: At Philosophy, Et Cetera, Richard Chappell is scolding those fun-loving up-and-down moral antirealists: Though they might think they can accept all the same first-order norms as do moral realists, they can't. Concern for others is, for antirealists, just too optional. Antirealists thus fail to see people as really mattering "in themselves". Are the antirealist children hearing this? No! They plug their ears, sing, and keep on endorsing whatever norms they feel like! At the Living Archives on Eugenics Blog, Moyralang discusses a fascinating case of parents trying to force a surrogate mother to abort her disabled baby. Some children just can't play nice with the special needs kids.
The Philosophy of Science Picnic Table: See that kid sitting at the table with a winning lottery ticket? Why is she crying? At Mind Hacks, Tom Stafford gives a primer on research that money won't buy happiness. Meanwhile, the kids at Machines Like Us are gossiping about a new study suggesting that a large proportion of neuroscience research is unreliable due to small sample size. And that girl at the picnic table with the iPad? She's just seeing what Google autocompletes when you enter "women can't", "women should", "women need", vs. "men can't", "men should", "men need", etc. Nothing interesting there for us boys, of course!
The Historical Jungle Gym: Steve Angle, at Warp, Weft, and Way -- what have you just put in your mouth?! Steve argues against PJ Ivanhoe's interpretation of the Confucian tradition as treating the moral skill as a kind of connoisseurship, like cultivation of taste in wine (or bugs). After all, even the poorly educated know that bugs taste bad!
The Metaphilosophy Spiderclimb: What have all these children learned, really? Not much, maybe! Empty boasting might be the order of the day. Joshua Knobe at Experimental Philosophy pulls together the existing empirical evidence on philosophical expertise.
The Issues in the Profession Nanny: Why aren't there more children on the equipment, you might wonder? So do I! It turns out they're wasting all their time applying for grants! Shame on them, says playground watcher Helen De Cruz at NewAPPS -- or rather, shame on the system. Children should be playing and jumping and throwing sand at each other, not forced to spend all their time hunting around in the grass for nickels. Meanwhile, Janet Stemwedel at Adventures in Ethics & Science tells a nice anecdote about the philosophy boys' cluelessness about the prevalence of sexual harassment -- still! But I know you philosophy blog readers won't be so ignorant, since you've been keeping up with the steady wave of shockers over at What Is It Like to Be a Woman in Philosophy?.
The next Philosophers' Carnival will be hosted by Camels with Hammers.
I got into philosophy to think and to read and to teach -- to write articles and blog posts, to meet with grad students, to put together engaging classes for undergrads, to argue with colleagues. That's what I want to do with my time. What I don't want to do is spend lots of time applying for grant money.
And society should feel the same way. I'm an employee of the University of California, my salary funded by taxpayers. Taxpayers want me to teach. Taxpayers should want me to do research too -- if for no other reason than to make me the kind of leading scholar who can teach cutting-edge classes. But taxpayers should not be paying me to spend large amounts of time writing down stuff to convince some committee that I deserve money more than Professors X, Y, and Z deserve money. The amount of time academics spend applying for grants is a giant, loathsome waste of energy of some of the most capable minds in the world.
Plus, why should we want to tie researchers down to what they thought they wanted to do two years ago, when they applied? Times change, ideas mature, opportunities arise!
But society has to fund research, right? So there need to be grants out there to support researchers.
The solution is simple: Give money to researchers for their research without their having to apply, and let them spend it on any reasonable research expenses. That should be the dominant model of grant funding. The committees that award grant money should spend their time finding out who, based on recent performance, is likely to put research money to best use, and they should simply hand those people the money. This will free up the time of those people to do more of their interesting research. Think MacArthur "genius" grants on a small scale.
A few strings should be attached, so that recipients don't just pocket the money as salary. Suppose you're a foundation with $1 million a year to fund philosophical research on Topic X. Here's how you might do it. Form a committee of leading scholars on Topic X. Have them find 40 people who are actively doing excellent work on Topic X -- from post-docs through distinguished professors, some at every level. And then send each of the potential recipients a letter offering them $25,000 over the course of five years, with the following two conditions: (1.) The money be spent only on documented research expenses (provide a list of allowable expenses), and (2.) In the last year they receive money, they come to your annual conference to present some of their research on Topic X. (Right, you now have to host an annual conference on Topic X, where your brilliant researchers can argue with each other. That seems like a good idea anyway, doesn't it?)
(Or make it $10,000 to 100, or $100,000 to 10, or whatever -- depending on the committee's vision.)
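The budget arithmetic behind these variants is easy to check. Here is a minimal Python sketch, entirely my own illustration (the function name and the assumption that a new cohort starts each year, with each award spent evenly over its five-year term, are mine, not the post's):

```python
# Hypothetical illustration of the proposal's budget arithmetic.
# Assumptions (mine, not the post's): the committee selects a new
# cohort of recipients every year, and each award is spent evenly
# over its multi-year term, so cohorts overlap at steady state.

def annual_outlay(award_per_recipient, cohort_size, term_years=5):
    """Steady-state yearly spending once term_years cohorts overlap."""
    per_recipient_per_year = award_per_recipient / term_years
    active_recipients = cohort_size * term_years  # overlapping cohorts
    return per_recipient_per_year * active_recipients

# Each variant mentioned above commits the same total per cohort:
for award, cohort in [(25_000, 40), (10_000, 100), (100_000, 10)]:
    print(f"${award:,} x {cohort} recipients -> "
          f"${annual_outlay(award, cohort):,.0f}/year")
```

Under these assumptions, all three variants settle at the foundation's $1 million per year, since the per-year cost per cohort and the number of overlapping cohorts cancel out.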
A good committee (maybe eventually composed of past recipients) can easily identify a good pool of leading active researchers on Topic X. Look at who is publishing; look at who is presenting at conferences; etc. Those are the people to fund. They want to travel; they want to buy sabbatical time to be able to focus better on their research; they want to spend money on books and equipment and graduate student research assistants. I predict that the committee would end up funding better research overall, with less waste -- at least in philosophy -- than if they wait for applicants to come to them, funding on the basis of shiny, tight-looking proposals.
If the committee is especially interested in encouraging certain sorts of activities, the committee could also offer some funding contingent on executing those activities: $15,000 of research money if the recipient is willing to use it to organize a mini-conference, $4,000 bonus research money if the recipient speaks at four universities in continental Europe, whatever.
To counteract some of the elitism inherent in the proposal, the committee should be especially directed to look for researchers with large teaching loads and non-elite appointments who are able to be research-productive nonetheless. Such people especially deserve funding, and they might be especially likely to put their funding to good use.
I don't think this is the only way research should be funded. The standard model can still play a role: People with no brilliant track record, or who have been overlooked by the committee, should be funded if they can put together excellent proposals. Some people might have especially ambitious visions requiring sums of money larger than the usual amounts. For these cases -- but, I would argue, only for these cases -- the standard granting model makes sense.
(Templeton, are you listening?)
Update March 19:
In the comments, Neil reminded me about people whose salary is paid for by grants. This can easily be added to my system. One way is contingent directed funding: Offer some recipients $N of funding for a post-doc if they are willing to do the search and supervision. Another way is to have people apply for either renewable or non-renewable salaried grants, much as they might apply for a job. For the renewable ones, continuation would be based on actual performance during the lifetime of the grant.
Update April 9:
See Helen De Cruz's excellent post on this at NewAPPS, too.
It has something to do with Zhuangzi's humor and Laozi's self-seriousness, when they say strange things.
Compare them on death. First Zhuangzi:
When Chuang-tzu was dying, his disciples wanted to give him a lavish funeral. Said Chuang-tzu:

'I have heaven and earth for my outer and inner coffin, the sun and moon for my pair of jade discs, the stars for my pearls, the myriad creatures for my farewell presents. Is anything missing from my funeral paraphernalia? What will you add to these?'
'Master, we are afraid that the crows and the kites will eat you.'
'Above ground, I'll be eaten by the crows and kites; below ground, I'll be eaten by the ants and molecrickets. You rob the one of them to give to the other; how come you like them so much better?' (Graham, trans.)

Of course Zhuangzi's disciples will bury him. They're not going to throw his corpse under a tree! He's razzing them, using the occasion of his death to make a joke -- a joke with a point, of course. In fact, the joke has at least three points: the surface point of challenging the burial traditions taken so seriously by most of his contemporaries, but also points conveyed by his mood and tone -- rejecting solemnity and negativity about death, and undercutting his disciples' attempts to revere him. Challenging tradition, refusing to be unsettled by death, and undermining his own authority are all central themes in Zhuangzi. They come together so nicely here in a crisp joke! Although this fragment isn't from the Inner Chapters, it's a perfect slice of Zhuangzi.
Now Laozi on death:
To be courageous in daring leads to death;
To be courageous in not daring leads to life.
These two bring benefit to some and loss to others.
Who knows why Heaven dislikes what it does?
Even sages regard this as a difficult question.
The Way does not contend but is good at victory;
Does not speak but is good at responding;
Does not call but things come of their own accord;
Is not anxious but is good at laying plans.
Heaven's net is vast;
Its mesh is loose but misses nothing. (Ivanhoe, trans.)

No jokes here! (Or anywhere in the Daodejing.) Laozi is dispensing some serious advice: To be courageous in daring leads to death but to be courageous in not daring leads to life. Wait, "to be courageous in not daring"? What does that mean? Hm, maybe Laozi is advising us to avoid battle even if it means facing scorn? Or at least he's saying that doing so is likelier to preserve your life? Well, no surprise there! Naw, the passage can't be that vapid, can it?
The text continues: "These two bring benefit to some and loss to others." Okay, and...? For a moment, there seems to be a bit of self-doubt. He can't say why. I can almost hear the relief in his voice, though, when he says that even the sages find these questions difficult; there's no real threat to his self-esteem, even if he can't figure it out! And within two lines all is better, with Laozi back to the usual profound paradoxicalizing he seems to find so comfortable: "The Way does not contend but is good at victory", etc.
Laozi sounds so deep! But it is exactly this seeming-profundity I mistrust. It's easy to invent profound-seeming inversions. "The voice that speaks loudest is the one that is most quiet. The Way is largest in its being tiny. The basketball that misses the hoop is the one that truly goes in." Try ten more as an exercise at home. See, anyone can do it! Almost reflexively, the reader responds with attempts to see the deep sense in such remarks: Is Schwitzgebel saying that one gains more from failure in basketball than from success? Or is he saying that, in life, we should most admire the nothing-but-net swish shot that doesn't even touch the hoop? Or...? Wow, it's so multi-dimensionally profound it can't fully be articulated!
So we come up against the limits of language, or at least seem to. Let's compare Laozi and Zhuangzi on that issue. Here's the famous opening passage of Laozi's Daodejing:
A way that can be followed is not a constant Way;
A name that can be named is not a constant name.
Nameless, it is the beginning of Heaven and Earth;
Named, it is the mother of the myriad creatures.
Always eliminate desires in order to observe its mysteries;
Always have desires in order to observe its manifestations.
These two come forth in unity but diverge in name.
Their unity is known as an enigma.
Within this enigma is yet a deeper enigma.
The gate of all mysteries! (Ivanhoe, trans.)

Just in case you couldn't tell from the profound-seeming reversals, you are also told explicitly: This is enigmatic! In fact, it's an enigma within an enigma! And this book is your gate to all that.
I admit it, Laozi makes me crabby. Probably I'm too uncharitable in reading him, but I think he's a poser. "Here, I've got secrets. Secrets within secrets, even! Too profound for words! If you're really in tune with the Dao, though, reader, you can start to fathom my depths. If anything I say seems silly or wrong, it's either your fault or the inherent limitations of language."
Contrast Zhuangzi on the limits of language:
Now I am going to make a statement here. I don't know if it fits into the category of other people's statements or not. But whether it fits into their category or whether it doesn't, it obviously fits into some category. So in that respect it is no different from their statements. However let me try making my statement.
There is a beginning. There is a not yet beginning to be a beginning. There is a not yet beginning to be a not yet beginning to be a beginning. There is being. There is nonbeing. There is a not yet beginning to be nonbeing. Suddenly there is being and nonbeing. But between this being and nonbeing, I don't really know which is being and which is nonbeing. Now I have just said something. But I don't know whether what I have said has really said something or whether it hasn't said something. (Watson, trans.)

Now, some interpreters (prominently, A.C. Graham) seem to think Zhuangzi is offering here a serious theory of not-yet-beginning-to-be-nonbeing. Really? It seems so clearly to me to be a parody! It wouldn't be the only parody in the Inner Chapters -- not by a long shot. Zhuangzi is gently mocking his friend the paradoxical logician Huizi and other philosophers advancing abstract general theories -- including maybe the folks who were putting together the Daodejing. But I don't see the humor here as mean-spirited or superior in tone; he brings his own language and theories within the umbrella of his mockery. He too finds his words collapsing around him, suspects his criticism applies to himself as much as it does to others. Once again, Zhuangzi undercuts himself where Laozi hypes himself. At least that's how I read it.
And to me, that difference in tone is all the difference in the world. A philosopher who says weird paradoxical things while undercutting those very things with humor and self-criticism is a very different philosopher from one who might say some superficially very similar-sounding weird paradoxical things while loudly insisting upon his own unfathomable profundity.
You hereby also further agree that reading The Splintered Mind is a risky activity that might result in false beliefs, dangerous lemmas, despair, loss of religion, adoption of a false religion, injury, death, insanity, in-between attitudes, ill-advised pragmatism, philosophical error, the sudden kindling of prurient desires, bizarre and uncontrollable thoughts, hatred of small cuddly kittens, up to and including condemnation to eternal torment; and that there are further risks, some known but intentionally held secret from you and some neither specified nor known. By reading this far (but even before reading this far), you have waived all of your rights in every jurisdiction, not only in the actual world but also in all possible and impossible worlds, whether distant, proximate, or entirely absurd, to any sort of action whatsoever and leave it entirely to the discretion of SM to treat you in any way they deem fit or unfit, without necessity of justification or defense.
If any portion of this contract is found void or unenforceable, the remaining portions shall remain in full force and effect.
This contract shall be binding upon all persons, non-human animals, aliens, group minds, and other entities and processes that have any causal contact or quantum-mechanical entanglement with SM, forward, backward, or sideways in time, mediated or unmediated, of any form whatsoever, whether they read this statement or not.
The homunculi reproduce as follows. At night, while a person is sleeping, a female homunculus lays one egg in each of the victim's tear ducts. The eggs hatch and minute worms wiggle into the victim's brain. As the worms grow, they consume the victim's neurons and draw resources from the victim's bloodstream. Although there are some outward changes in the victim's behavior and physiological regulation, these changes are not sufficiently serious to engender suspicion in the victim's friends; the homunculi are careful to support the victim's remaining neural structure, especially by sending out from themselves neural signals similar to what the person would have received had their brain tissue not been consumed. The victim reports no discomfort and suspects nothing amiss.
Each growing homunculus consumes one hemisphere of the brain. Shared neural structures they divide equally between themselves. They communicate by whispering in a language much like English, but twenty times as fast -- much less inter-hemispheric information transfer than in the normal human brain, of course, but as commissurotomy cases show, massive information transfer between the hemispheres is not essential to most normal human behavior. Any apparent deficits are masked by a quick stream of speech between the homunculi, and unlike hemispheric specialization in the human brain, both homunculi receive all inputs and have joint control over all outputs.
Two months later, the person is a two-seater vehicle for brother and sister homunculi. An internal screen of sorts displays the victim's visual input to both of the homunculi; through miniature speakers the homunculi hear the auditory input; tactile input is fed to them by dedicated sensors on their own limbs, etc. They control the victim's limbs and mouth by joint steering mechanisms. Each homunculus is as intelligent as a normal human being, though operating an order of magnitude more quickly due to their more efficient brains (carbon based, like ours, but operating on much different internal principles). When the homunculi disagree about what to do, they quickly negotiate compromises and deferences. When fast reactions are needed, behavior defaults to pre-negotiated compromises and deferences.
But what is it like for Bill? There is no Bill anymore, maybe, though he didn't notice his gradual disappearance. Or maybe there still is Bill, despite the radical change in his neurophysiology? How many streams of experience are there? Two? One for each homunculus but none for Bill? Or one stream only, for the two homunculi, if they are well enough integrated and fast-enough communicating? (How well and quickly integrated would they have to be to share a single stream?) Three streams? One for each homunculus plus one, still, for Bill? Two and a half? One for Bill and one and a half for the two partly-integrated homunculi? Or when counting streams of experience, must we always confine ourselves to whole numbers?
Bill always liked sushi. He never lost that preference, I think. Neither of the homunculi would want to put sushi in their own mouths, though. Bill loved his wife, the Lakers, and finding clever ways to save money on taxes. Bill can still recite Rime of the Ancient Mariner by heart, though neither of the homunculi could do it on their own. When he sees the swing in his backyard, Bill sometimes calls up a fond and vivid memory image of pushing his son on the swing, years ago. When the homunculi consumed his brain, they preserved the information in this memory image between them -- and in many others like it -- and they would draw it up on their visual imagery screen when appropriate. Characteristic remarks would then emerge from Bill, like "Maggie, do you remember how much Ethan loved to ride high on this swing? I can still picture his laughing face!" So spontaneous and natural does it seem to the well-entrenched homunculi to make such remarks, they told me, that they would lose all sense that they are acting. Maybe, indeed, they were no longer acting but really (jointly) became Bill.
I don't know why the homunculi thought I would not be alarmed upon learning all this. Maybe they thought that, as a crazy philosopher who takes literal group consciousness seriously, I'd think of the two-seater homunculus as merely an interesting implementation of good old Bill. But if so, their trust was misplaced. I snatched the homunculi, knocked them unconscious, and shoved them back inside Bill's head. I glued and stapled Bill's skull closed and consulted David Chalmers, my go-to source for bizarre scenarios involving consciousness.
None of what I said was news to Dave. He had been well aware of the homunculus infestation for some time. It had been a closely-held secret, to prevent general panic. But with the help of Christof Koch, a partial cure had just been devised, and news of the infestation was being disseminated where necessary to implement the cure.
The cure works by dissolving the homunculi's own skulls and slowly fusing their brains together. Their motor outputs controlling the victim's behavior are slowly replaced by efferent neurons. Simultaneously, the remains of the homuncular bodies are slowly reabsorbed by the victim. At the end of the process, the victim's physiology is very different from what it had been before, but there is a single stream in a single brain, with no homuncular viewing screens or output controls and more or less the victim's original preferences, memories, skills, and behavior patterns.
All of this happened two years ago. Bill never knew the difference and remains happily married, though I had to tell a few white lies about his skiing accident to explain the scars.
In a forthcoming paper, Blake Myers-Schulz and I pick up a mostly-cold torch from Colin Radford (whose seminal work on this topic was in the 1960s) and challenge the belief condition. Can one know that something is the case even if one doesn't believe that it's the case? We offer five plausible cases (one adapted from Radford) along with empirical evidence that our intuitions [note 1] about these cases are not idiosyncratic.
This paper has already drawn several follow-up studies, some critical and some supportive -- but interestingly, even the critical studies can be read as contributing to an emerging consensus that problematizes the belief condition. (I don't predict the consensus will last. They never do in philosophy. But still!)
First, to give you a feel for it, our cases:
1. An unconfident examinee who feels like she is guessing the answer but non-accidentally gets it right;

2. An absent-minded driver who momentarily forgets that a bridge he normally takes to work is closed and continues en route toward that bridge;

3. A prejudiced professor, who intellectually appreciates that her athletic students are just as capable as her non-athletic students but who nonetheless is persistently biased in her reactions to student-athletes;

4. A freaked-out movie-watcher who seems to have the momentary impression that the scenario depicted in a horror film is real;

5. A self-deceived husband who has lots of evidence that his wife is cheating and some emotional responses that seem to reveal that he knows this, but who refuses to admit the truth to himself;

6. A religious fundamentalist geocentrist who aces her astronomy class -- seeming to know that Earth revolves around the sun but not to believe it.

Now maybe not all the cases work, but we think in each case there's at least some plausibility to the thought that the person in question knows (that Queen Elizabeth died in 1603, that the bridge is closed, that athletic students are just as capable, that aliens won't come out of her faucet, that his wife is cheating) but does not believe -- at least not as fully and determinately as she knows. And lots of undergraduates seem to agree with us! So we think the B condition on knowledge should at least be open for discussion. It should not be regarded as uncontroversially nonproblematic.

Although some of these follow-up studies are pitched as in agreement with us and others as critique, we think there's actually a pretty clear thread of consensus through it all, from a bird's-eye view:
Knowledge requires some sort of psychological connection to the justified, true proposition -- something broadly like a belief condition; but it doesn't seem to require full-on act-it-and-live-it-and-breathe-it belief. However reasonable it might be to think the Earth goes round the sun, that fact has to register with me cognitively in some way if I am to qualify as knowing it; but the fact needn't play the full functional role of belief as envisioned in behaviorally-rich accounts of belief like my own. But how exactly should we conceptualize this somewhat weak but broadly beliefish psychological-connectedness condition? At this point, that's wide open.
Blake and I suggest that one must have the capacity to act on the stored information that P; Rose and Schaffer seem to suggest that what's crucial is that the information be "available to the mind"; Buckwalter and colleagues suggest that one must believe but only in some "thin" sense of belief; Murray and colleagues suggest that one needs to be disposed to "assent" to the content. None of these approaches is well specified (and I've simplified them somewhat; apologies). Figuring out what's going on with the B condition thus seems like a potentially fruitful task that brings together core issues in epistemology and philosophy of mind.
Here's Reichenbach's diagram:
The inhabitants of this world, Reichenbach says, will eventually come to infer that something exists beyond the cubical boundary that causes the shadows on the ceiling and wall. So likewise, he says, can we infer, from the patterns of relationship among our experiences, that something exists beyond those experiences, causing them.
It is crucial to Reichenbach's argument that the inhabitants of this world ("cubists", let's call them) infer the existence of something beyond the walls that is the common cause of the pairs of corresponding silhouettes. If the cubists could reasonably believe that only the shadows existed, with laws of relation among them, no external world would follow; and so correspondingly in the experiential case there might only be laws of relationship among our experiences with no external common cause beyond.
Unfortunately, it's obscure why Reichenbach thinks the cubists couldn't instead reach the conclusion that the shadows on the ceiling directly affect the shadows on the wall or vice versa, e.g., by the transmission of invisible and unblockable waves through the interior of the cube or simply by action at a distance. (In Reichenbach's mirror set-up, height has no influence on the bird's ceiling position but it does influence position on the wall; and the reverse holds for horizontal position; but direct-causers can posit hidden-variable explanations or similar.) Reichenbach addresses this worry with a single sentence: Within the confines of the cubical world, he says, the cubists will have found that "Whenever there were corresponding shadow-figures like spots on the screen, there was in addition, a third body with independent existence", so they'll reasonably regard it as likely that the same is true on their walls (p. 123).
There are two serious problems with this response. First, it cannot be straightforwardly adapted to the sensory-experience/external-world case, which is of course the real aim of Reichenbach's argument. Second, it is false anyway: We can readily construct cases where one spot on a screen causes another on a separate screen without a common cause behind them, e.g. by using a mirror to reflect light from one screen onto another or by making the first screen sufficiently translucent and staging the second screen directly behind it; this is no less natural than the friendly ghost's arrangement.
Sober imagines sitting on the beach, noticing the correlation between visual experiences of waves breaking on the beach and auditory experiences of crashing waves. The two types of experience cannot be related as cause and effect because he can stop one while the other continues: When he closes his eyes he still hears the crashing; when he stops his ears he still sees the breakers. Presumably, then, there's a common cause of both.
So far, so good. But to establish an external world beyond the realm of experience, we must establish that this common cause is something outside the realm of experience. Sober responds to this concern by considering one solipsistic alternative: the intention to go to the beach. He then argues that this intention cannot serve as an adequate common cause because the visual and auditory experiences are correlated beyond what would follow simply from taking the intention into account. So he challenges the solipsist to produce a more adequate common cause. He suggests that this challenge cannot be met.
But it can be met! Or so I think. The common cause could be my first beach-like experience. This experience, whether auditory or visual or both, then causes subsequent beach-like experiences. That takes care of the correlation. If I have an experience as of closing my eyes, the auditory experience at time 1 causes the auditory experience at time 2 and also the visual experience at time 2 conditionally upon my having an experience as of opening my eyes; analogously if I stop my ears. The solipsist can either play this out with the first experience causing all the subsequent ones until conditions change, or she can have each experience cause the next in a chain. On the chaining version, if I have my eyes and ears simultaneously closed, my opinion that I will soon have beach-like experiences then does the causal work. (There are imperfections in these regularities, of course, e.g., I might seem to myself to have booted up an audio recording of waves, but to take advantage of those imperfections is contrary to the spirit of the toy example and would cause trouble for Sober's model too.)
I agree in spirit with what Reichenbach and Sober are trying to do -- and Bertrand Russell, and Jonathan Vogel. The most reasonable explanation of the patterns in my experience is that there is an external world behind those experiences. But the argument isn't quite as easy as it looks! That's why you need to read Alan Moore's and my paper on the topic. (Or for short blog-post versions of our arguments, see here, here, and here.)
Revised 4:42 pm.