

Date: Monday, 14 Apr 2014 08:30
I used to play poker. I don't play much anymore, mostly because I don't have any time or money, but also because people playing poker at casinos are no longer unskilled enough for me to consistently make money.

Poker is a curious game. It is reasonably well-understood that in theory, "optimal" play in poker guarantees that (absent a rake) you will always break even in the long run, regardless of how well or poorly your opponent plays. Thus, to win at poker, you have to figure out how your opponent is playing non-optimally, and play in the corresponding non-optimal way that will give you positive odds. For example, many inexperienced players play too loosely and passively — they play too many hands that have negative odds, and they don't raise enough when they have positive odds. To take advantage of them, an experienced player will play tightly and aggressively, playing only hands with positive odds and almost always then raising.* Note that the "expert" is playing sub-optimally: "optimal" play requires playing marginally positive hands passively, and bluffing some hands with negative odds. But playing that way will simply break even in the long run, even against sub-optimal play. If the other players understand her tight-aggressive strategy, they will simply fold with mediocre hands whenever she raises.

*One fictional trope that I find amusingly unrealistic is the depiction of the "expert" player as one who can bluff his opponents into folding with better hands. In reality, the real expert will make a lot more money by convincing her opponents to call with worse hands; her bluffs are calculated to fail, to convince opponents to call more. Hence, she will usually show her opponents her (infrequent) successful bluffs and hide her winning hands (and hint they were bluffs).

Another thing that happens all too often in reality is experienced players getting upset with new players for playing poorly. Stupid! You want to encourage poor play and take advantage of it. Poker is not a game of skill in calculating odds; it is a game of psychological observation and manipulation. If you can't manipulate a newbie, you are not even an intermediate player, much less an expert.

There is one situation in poker when playing correctly has positive odds: when you are playing no-limit heads-up (you can bet any amount you want, up to your total, and you are playing against one opponent), and you have at least double the amount of money your opponent has. Then you just go more or less all in on every hand*, counting on the fact that your opponent has to beat you twice, whereas you have to beat him only once. If the blind (forced opening bet) is large, and it is usually very large near the end of a tournament when the last two players are fighting for first place, your opponent can't wait for a good hand to call you.

*IIRC, the exactly correct strategy is to bet so that if you do lose, you equalize your and your opponent's stacks after the next blind.
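The "beat you twice" argument can be checked with a quick simulation. This is a deliberately crude model (every all-in is treated as a fair coin flip, which real poker equities are not, and the function name is my own): a 2-unit stack faces a 1-unit stack, each hand is an all-in for the smaller stack, and whoever loses their last chip loses the match. Even with no card advantage at all, the big stack wins about two-thirds of the time, because the short stack must win two consecutive flips.

```python
import random

def heads_up_allin(trials=100_000, seed=1):
    """Crude model of heads-up all-in poker: player A starts with 2 units,
    player B with 1 unit, and every hand is a fair coin flip for the
    smaller stack. Returns the fraction of matches A wins."""
    rng = random.Random(seed)
    a_wins = 0
    for _ in range(trials):
        big_is_a = True  # A starts as the big stack (2 units vs 1)
        while True:
            if rng.random() < 0.5:       # the current big stack wins the flip
                if big_is_a:
                    a_wins += 1          # B is felted; A takes the match
                break
            big_is_a = not big_is_a      # B doubles up; roles swap exactly
    return a_wins / trials

print(heads_up_allin())  # ≈ 2/3
```

The exact answer follows from the symmetry: letting p be the big stack's chance of winning the match, p = 1/2 + (1/2)(1 - p), so p = 2/3.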

If you go to a poker tournament, and you play the optimal strategy, you will lose the tournament. On average, you will neither win nor lose, but there will be a player who figures out the sub-optimal play of other players and takes advantage of them (or who plays sub-optimally, with fatter tails*, and gets lucky), and then beats you by brute force at the end.

*i.e. they have a higher probability of either going broke quickly or getting rich quickly, with a lower probability of breaking even.

Finance is the same way. The "optimal" strategy is to create a portfolio such that no matter what happens in the economy, your investment earns, in the long run, the same rate as general economic growth (1-4% per year). You'll never* lose with this strategy, but neither can you win: you will never* become relatively wealthier than someone who plays a sub-optimal strategy and either outsmarts other investors, or who just gets lucky. And when any other player gets substantially more than you, he can beat you down by brute force simply by being irrational longer than you can stay solvent.

*Well, rarely; even an optimal strategy has a little room on the tails.

One charming bit of naivete I see in economists is the idea that economics is fundamentally about the optimal allocation of resources to maximize social production. In some abstract theoretical universe this might be true, but in reality, economics is about power; it is about winning. Hence, people, especially people with a lot of money, are not trying to not lose; they are trying to win. They are trying to defeat their opponents. Hence, they cannot play conservatively, i.e. not to lose; even if their strategy is just naively sub-optimal, there are enough other people that some of them will get lucky and win big. The "conservative" strategy cannot (or can only rarely) win big; the whole point is to balance big losers against big winners, and small losers against small winners. This asymmetry becomes even more pronounced in finance, because the big winners get to actually change the rules. Most notably, the rich socialize their own losses and privatize their gains. This tendency causes the financial market to crash (since the rich have no downside risk, they can make bets with negative long-term odds but the potential for short-term gains). Everyone, even the "conservative" investor, loses everything (or at least all his gains), and then the state makes up the losses of the rich. ("Sure got a nice economy there. Be a shame if it were to burn down.")

The typical capitalist apologetic for this system is that it promotes innovation. The apologetic is half right. Unlike poker, real life has not only risk, but uncertainty. We have to make wildly speculative bets to create fundamentally new things. Everyone laughs at Pets.com, but whoda thunk that a search engine and a discount bookstore would be the primary drivers of internet technology, productivity, and economic growth? The probability of any individual speculation paying off is so low, according to the apologetics, that the payoff must be correspondingly large, and that we cannot punish failure by economic "death." Otherwise, no one would ever take speculative bets, and we would have no (or very little) innovation.

However, there are two flaws in the capitalist apologia. First, the reward for winning speculative bets is not increased consumption (Bill and Melinda Gates and their family could not, and have no intention of, actually consuming $50 billion of goods and services themselves), but political power: the power to tell people (i.e. the workers) what to do and not to do.* There is no need to reward successful innovators with political power; there are plenty of alternatives, such as social status.

*If you think that power is actually held by our elected representatives in the official government, you are hopelessly naive.

Either people "naturally" (i.e. without special, artificially constructed incentives) want to be innovative, or they do not. (More generally, they might or might not want the fruits of innovation more than they dislike the process of innovation.) If we do not naturally want to be innovative, why should we as a society encourage such behavior? If we naturally do want to be innovative, then instead of creating powerful positive incentives, it is sufficient to only remove negative incentives: do not punish people for trying to innovate and failing.

(The other apologia for capitalism is that most people are stupid, lazy, and irrational; they must be ruled for their own good. "Democracy" is at worst a sham and at best just a check on the most egregious corruption of the elite. As a democrat, I reject this premise, for what I think are good reasons.)

That's why I am coming to believe that finance should be entirely public, run by the government. The government can afford to play conservatively, i.e. to play to not lose. Most of our economy, the economy of food, clothes, houses, electricity, water, cars, gasoline, etc., is a game we want to play to not lose. For the rest, encourage small-scale innovation by removing the negative incentive of losing a year's pay trying to innovate: give everyone a free year to try something innovative (a person would have to prove only that she is not going to sit around for a year watching TV); if they're successful, give them some publicity and another year or two to continue being innovative. If they're unsuccessful, they've lost nothing. For the "big bets," innovations that are beyond the scope of the individual or small group, we should vote; why should a private person individually decide how to innovate with the labor of thousands or millions?

Such a society might not be as innovative as full-throttle innovate-or-die capitalism, but I think it would still have substantial innovation and would definitely be a happier society for everyone.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "communism and socialism, democratic comm..."
Date: Friday, 11 Apr 2014 09:04
I saw "What Atheism Really Means" by Mike Dobbins when it came out last month. I chuckled and moved on because Dobbins makes a pointless and irrelevant distinction. But then 3quarksdaily picked it up, so I suppose the editors there are as ignorant as Dobbins about basic philosophy. In his article, Dobbins argues that the definition of atheism as lack of a [positive] belief in God is insufficient, and argues that the stronger definition as disbelief in God is more appropriate. However, Dobbins' objection is irrelevant, because it ignores or conflates different social contexts where various definitions of atheism operate: prosaic, philosophical, and political.

In a prosaic social context, I am happy to use Dobbins' stronger definition: I definitely say that I disbelieve in the existence of god. In this context, I am using the social definition of "god": the sort of being that characters such as Yahweh, Jesus*, Allah, Krishna, the Buddha*, Ngai, etc. purportedly represent. None of these entities actually exist; I believe that these characters are fictional on the basis of evidence and reason. I might be mistaken, of course, but I definitely do believe, and I would rationally defend that belief, that they do not actually exist. In a prosaic context, I agree with Dobbins: the facts warrant a statement of definite disbelief.

*To the extent that explicitly deistic attributes are essential to these characters. In a similar sense, the character of Abraham Lincoln in Benjamin P. Thomas's Abraham Lincoln: A Biography represents a real person, whereas the character of Abraham Lincoln (Benjamin Walker) in the film, Abraham Lincoln: Vampire Hunter is fictional.

However, things get a lot more complicated when philosophers consider an idea. Many atheists, myself included, have studied a considerable amount of philosophy, and there are many philosophers who have examined and defended atheism at the highest professional academic level. In a philosophical context, the precise meaning of words becomes critically important; the unqualified word, "god," becomes unacceptably ambiguous. The sense noted above, beings like Yahweh, etc., i.e. beings with personality, desires, preferences, and who intervene in the physical world to effect their will, is only one sense. There is also the deistic god, a god who sets the world in motion with a set of physical laws and then does not intervene further. This sort of god is not so much disbelieved as dismissed. While it would be nice to know, even if such a god existed, it would have so little impact on my daily life that in the absence of any evidence (even if such evidence could be adduced) deciding one way or the other is a waste of time. Finally, there are the "gods" of Sophisticated Theology™. For example, Jerry Coyne (who reads Sophisticated Theology™ so I don't have to), quotes David Bentley Hart's book, The Experience of God: Being, Consciousness, Bliss:
To speak of “God” properly, then . . . is to speak of the one infinite source of all that is: eternal, omniscient, omnipotent, uncreated, uncaused, perfectly transcendent of all things and for that very reason absolutely immanent to all things. God so understood is not something poised over against the universe, in addition to it, nor is he the universe itself. He is not a “being,” at least not in the way that a tree, a shoemaker, or a god is a being; he is not one more object in the inventory of things that are, or any sort of discrete object at all.
It seems clear that Hart's definition of god is not the sort of... concept?... that I can have any belief one way or another regarding existence. To be philosophically rigorous, the stronger, definite statement of disbelief is too narrow to encompass all these different definitions of "god"; the broader, and admittedly weaker, definition of "atheism" as a lack of positive belief succinctly covers all these cases.

In addition to ambiguities in the meaning of "god," there are also ambiguities and subtleties in the word "believe." In a philosophical sense, a person can believe or disbelieve only propositions, i.e. statements that can coherently be either true or false. (Philosopher Theodore Drange explores this concept in some depth in his 1998 article, "Atheism, Agnosticism, Noncognitivism.") If "God exists" is a proposition, then I can definitely disbelieve it. However, if "God exists" is not a proposition, as Hart seems to claim, then I can neither believe nor disbelieve it. In a similar sense, I can neither believe nor disbelieve the statement, "Colorless green ideas sleep furiously," nor can I believe or disbelieve emotive sentences such as "Yay!" or "Boo hoo!" Again, confronted with a vast range of ways that theists present the propositional status of "God exists," I can be both precise and compact only by asserting that I lack a positive belief about the existence of God.

In addition to senses of god that are not propositions, there are senses that are propositional but cannot be known. To illustrate this principle, consider the statement, "There is [present tense] a ninja hiding in the room." First, this statement is hard to prove: ninjas are, by definition, far more skilled at hiding than I am at detecting them. More importantly, though, even if I discover a ninja in the room, he or she is ipso facto no longer hiding. Neither discovering nor failing to discover a ninja, therefore, is evidence for or against the proposition. While the statement is propositionally, semantically, and even scientifically unproblematic, it is fundamentally unknowable by definition. While I might be able to come to a definite belief on indirect evidence (it seems unlikely that any ninja would want to hide in my office), if I am going to be rigorous (or if I am considering a statement where indirect evidence is unavailable), I have to simply deny any belief.

Another philosophical subtlety comes from the way that scientific naturalists such as myself view knowledge. First, in the scientific naturalist account, without exception, all knowledge — i.e. all propositional statements about reality — is always provisional. All knowledge is conditioned on evidence; any individual human, as well as all human society, has at any time only a small, finite subset of the very large and possibly infinite body of available evidence, and all knowledge, therefore, is subject to revision given new evidence. Because all knowledge is provisional, it's unnecessary to explicitly condition knowledge statements with provisionality. The sentence, "I believe (or know) that two bodies experience an attraction described by general relativity, which can be closely approximated at low densities as a force proportional to the product of the masses divided by the square of the distance," does not gain any additional meaning by adding provisos noting that further evidence might change my opinion. Because there are no statements about reality that are believed non-provisionally, we don't need to distinguish between provisional and non-provisional beliefs, and the linguistic distinction is dropped as redundant. In my own writing, I try to avoid the word "certainly," replacing it with "definitely," but my vocabulary was shaped by convention, not scientific rigor, so I occasionally err. To the obtuse or unaware, unconditioned statements about knowledge sometimes appear to be stating facts with certainty rather than definiteness. Thus, even towards conceptions of gods that I disbelieve, I definitely disbelieve, i.e. I have made a decision, but I do not certainly disbelieve.

A more important consideration, however, requires looking a little more deeply into how scientific naturalism works. Because all knowledge is provisional, it is always statistical, at least conceptually. (I have to egregiously simplify here, but I hope to capture an essential feature about scientific knowledge.) In a statistical model, we create a "null hypothesis," which represents a default belief about the world, and an "alternate hypothesis," which represents the negation of the null hypothesis. For example, I might say that the null hypothesis is that the average height of men in the United States is equal to 179 cm, and the alternate hypothesis is that the average height is not equal to 179 cm; on average, they are either shorter than or taller than 179 cm. Note that the null hypothesis is probably not precisely correct; even if the average height is very near 179 cm, it is probably not exactly 179.000000000 .. 000 cm (we can measure length very precisely). (This imprecision is not really problematic; close enough is close enough, and if I'm designing a car or a house, for instance, I don't need to know the average height to nanometer precision.) In addition to being not precisely correct, the null hypothesis is usually not directly provable; it is only disprovable. If I measure the height of 300* men, and find that their average height (sample mean) is 180 cm, with a standard deviation of 10 cm, then I know with about 95.8% confidence that the average height of all men is not 179 cm. Note that I do not know that the average height of all men is 180 cm; I have "proven" (provisionally) only the alternate hypothesis, which is that the average height is not 179 cm. The best I can say is that I have good evidence for now considering 180 cm to be the new null hypothesis when talking about the height of American men.

*At this sample size, the difference between the normal and t distributions is negligible.

This method impels a curious terminology that any competent professor of statistics will impress on her students: you say you reject the null hypothesis or you fail to reject the null hypothesis; you do not, on pain of durance vile, ever say you accept the null hypothesis. Similarly, you say you have sufficient evidence to conclude that the alternate hypothesis is true, or you have insufficient evidence to conclude the alternate hypothesis is true; you never say you conclude that the null hypothesis is true. Strictly (very strictly) speaking, therefore, a scientific naturalist never actually believes the currently specified description of the world, a systematic collection of null hypotheses; she believes, instead, that she has insufficient evidence to conclude that the world is different from this current specification.

Note that "insufficient evidence" applies equally to edge cases as well as to non-edge cases. In the above example, if I had measured the height of only 250 men, I would be only 94.3% confident that I can reject the null and conclude the alternate hypothesis (that the average height was not 179 cm) was true. Because by convention I will reject only if I am 95% confident, I will fail to reject the null and conclude that I have insufficient evidence to conclude that the average height is not 179 cm. Similarly, if I find the average height of my sample to be 179.1 cm, I will be only 56.2% confident that the null is false, but I will still just say that I have insufficient evidence. (If I measured 30,000 men, however, a sample average of 179.1 would give me 95.8% confidence to reject the null.)
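The confidence figures in these examples can be reproduced with a one-sided z-test on the sample mean. A minimal sketch (the post doesn't spell out its exact procedure, so the one-sided z-test is an assumption on my part, and the function name is my own; per the footnote, at these sample sizes the normal and t distributions are interchangeable):

```python
import math
from statistics import NormalDist

def one_sided_confidence(sample_mean, null_mean, sd, n):
    """Confidence level (as a percentage) with which a one-sided z-test
    rejects the null hypothesis that the population mean equals null_mean."""
    se = sd / math.sqrt(n)                 # standard error of the sample mean
    z = abs(sample_mean - null_mean) / se  # test statistic
    return 100 * NormalDist().cdf(z)       # one-sided confidence level

# The figures from the running example:
print(round(one_sided_confidence(180, 179, 10, 300), 1))      # 95.8
print(round(one_sided_confidence(180, 179, 10, 250), 1))      # 94.3
print(round(one_sided_confidence(179.1, 179, 10, 30000), 1))  # 95.8
```

With the conventional 95% threshold, the first and third cases reject the null; the second (n = 250) falls just short, which is exactly the "insufficient evidence" verdict described above.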

In practice when we repeatedly test and fail to reject some specification of the world, especially when our failure to reject is not borderline, we have good reason to believe the world really is at least very close to the specification. Still, when pressed, and in ambiguous or uncertain circumstances, scientific naturalists tend to retreat to "insufficient evidence" semantics.

I hope you'll forgive me, gentle reader, when I tell you we atheists really don't care that much anymore about the philosophical subtleties I have wasted so much of your time describing to you. As far as most atheists are concerned, the philosophical and scientific debate is over, decided. No matter what definition of "god" you choose (so long as it is not intentionally metaphorical and does not do unacceptable violence to the meaning of the word "god"), your definition is meaningless, non-propositional, unknowable, or rejected by the evidence. We make a nod to the philosophical subtleties by making the most general statement — we lack a positive belief about god, which includes disbelief in some definitions of "god" — when concision is more important than detail.

Atheism is not primarily a philosophical position; it is a political position. Our position is that all god talk (that is not intentionally metaphorical) is not just nonsense, but pernicious nonsense. Religion is not just a weird thing that some people do in private; it has profoundly negative effects on our societies, cultures, and nations (and what positive effects it might have would be at least as good, and usually better, if the god talk were eliminated). We define atheism broadly not just as a nod to the philosophical subtleties, but also to be as inclusive as possible to people who reject god talk for a variety of reasons, with various degrees of philosophical sophistication. We want to include as an "atheist" someone who is not particularly interested in philosophy, who just doesn't know whether or not Yahweh and Jesus are real, but who finds offensive and absurd, as we do, the notion that, for example, the leader of an organization of supposedly celibate men, an organization that has gone out of its way to protect and defend men they know have raped children, has anything whatsoever legitimate to say about how consenting adults employ their genitals or women employ, or refuse to employ, their uteruses. If you can say only that you lack a positive belief about god, and that people who say they do have any sort of positive belief thereby gain no moral or scientific authority whatsoever, you're one of us.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "bad philosophy, egregious stupidity, phi..."
Date: Tuesday, 01 Apr 2014 07:40
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "links"
Date: Monday, 24 Mar 2014 04:33
Although John Gray's review of The Age of Nothing by Peter Watson and Culture and the Death of God by Terry Eagleton, "The ghost at the atheist feast: was Nietzsche right about religion?", begins with an egregious nonsequitur (described below), Gray really doesn't have much to say about modern atheism in this review. Gray first summarizes Watson's book, which Gray claims centers the modern debate on secular ethics around Nietzsche's charge that secular ethics, such as Bentham's and Mill's utilitarianism, relied on "theistic concepts and values," and must thereby be rejected. Subsequent ethical philosophy, according to Gray's view of Watson's book, consists in substantial part of answering Nietzsche's challenge. In contrast, Gray reads Eagleton as placing religion outside culture, taking the role of a force to restore a sense of "tragedy" to modern society. Although Gray rejects Eagleton's thesis, objecting that both Christianity and secular revolutionaries such as Lenin are the complete opposite of the ancient Greek notion of tragedy — "a conflict of values that cannot be revoked by any act of will" — he finds Eagleton's book profound in many ways, especially praising Eagleton's description of a "mythologised" Enlightenment divorced from modern reality. While both books seem interesting, Gray's connection of these books to modern atheism seems strained and artificial.

As noted above, Gray begins with a nonsequitur: "There can be little doubt that Nietzsche is the most important figure in modern atheism, but you would never know it from reading the current crop of unbelievers, who rarely cite his arguments or even mention him." Perhaps Gray means here that Nietzsche should be the most important figure, but importance would seem to be defined by use; if Nietzsche is, as Gray asserts, widely ignored, then he is ipso facto unimportant. If Gray means something else by importance, however, the case must be made directly, not simply assumed. Furthermore, Gray cites Watson's argument that much of late 19th and early 20th century philosophy, politics, and culture forms a direct engagement with Nietzsche; to the extent that 21st century atheism has abandoned its engagement with Nietzsche, it calls for a more nuanced explanation than Gray's facile and dismissive denigration of modern atheists as "loud in their mawkish reverence for humanity, and stridently censorious of any criticism of liberal hopes." Although Gray simply fails to connect Watson's and Eagleton's books to modern atheist thought, he at least raises a point worthy of consideration.

As I have written many times before, modern atheism is primarily a social, cultural, and especially political movement. Our aim is to destroy the social privilege to claim any sort of moral authority on any "religious" basis. We oppose religious moral authority on methodological, not consequential, grounds (although obviously negative consequences do form an important critique); thus, we oppose religious moral authority even when that authority demands moral beliefs we find agreeable. Inexorably tied to this political stance is what Gray describes as the "Nietzschean imperative — the need to construct a system of values that does not rely on any form of transcendental belief." This imperative raises three specific challenges. First, is the Nietzschean imperative itself a transcendental belief? Second, does the pervasive liberalism of modern atheists rest on a transcendental belief? Third, is the Nietzschean imperative untenable? Must morality itself rely on a transcendental basis? Finally, do modern atheists successfully address these challenges?

Even though Gray does not raise the first challenge, and I include it here only for completeness, it is relatively easy to address. First, even if the Nietzschean imperative were transcendental, it is not itself a moral belief; it is a meta-moral belief. In just the same sense, the* definition of science as "conclusions about objective reality logically drawn from observation and experiment" is itself not a scientific statement; it is a meta-scientific statement. It is not a statement about objective reality, it is a statement about how we choose to draw conclusions about reality. Second, the Nietzschean imperative is easily repaired by restating it as a project rather than an imperative: we want to construct a system of values without transcendence. Stated so, it simply becomes a descriptive statement about preferences, without any need to invoke transcendence. The self-referential challenge is therefore not a compelling challenge to the atheist project.

*I use the definite article not to imply that there is only one definition, but simply to refer to the specific definition offered.

The second challenge is more pointed. To a certain extent, it is not terribly relevant; the atheist propensity towards "liberalism" (which term, it must be noted, is extremely vague) may just be an artifact of the general propensity of the population to be liberal, with perhaps a bias against "illiberal" (also a vague term) people superficially denying some sort of transcendence. But denigrating a group because it has only a popular political agenda would seem to grant only philosophers "legitimate" political opinions, a position that seems anti-democratic and in need of a more direct argument; furthermore, this criticism seems to be rarely applied to groups other than atheists. Atheists are political in the ordinary, prosaic sense that everyone is (or is expected to be) political in a (more-or-less) democratic republic. Big deal.

But Gray makes more direct assertions. First, atheists today "embody precisely the kind of pious freethinker that Nietzsche despised and mocked: loud in their mawkish reverence for humanity, and stridently censorious of any criticism of liberal hopes." It's difficult, however, to see this charge as anything but a gratuitous insult. I'm not a scholar of Nietzsche, but I've read enough to know that Nietzsche's aesthetic standards, while certainly refined, usually have considerably more subtlety. Nietzsche certainly criticizes sentimentality, i.e. misplaced emotion (as I recall, he uses the example of the young bourgeois woman shedding a tear over the plight of a theatrical heroine while her footman freezes waiting for her outside the theater). But "mawkish" is not "sentimental," at least not in the above sense, and "censorious" simply means strongly critical; if we sincerely believe liberal virtues and hopes to be of value, why should we not be censorious? (And calling atheists strident has become such a banal cliche that I object not as an atheist or philosopher but as a tutor of English composition.) Gray not only fails to hit the mark in his philosophical critique of atheism; he misses the target entirely.

Gray's second criticism is at least relevant. He asserts that atheists must believe (if his invocation of Nietzsche's argument is relevant) that "the world can be made fully intelligible" [emphasis added], presumably through the application of reason and observation, a belief that Nietzsche holds must be an "article of faith," and not "a premise of rational inquiry." Nietzsche (and Gray) might be correct: the hope for full intelligibility might require faith, but why should liberal rationality, or any other secular ethical philosophy, require full intelligibility? The position of modern atheists neither requires nor asserts full intelligibility; atheists claim only that rationalism provides some intelligibility, and that whatever "intelligibility" religion might provide is trivial, specious, or insupportable. We do not assert that we have all the answers; we assert merely that religion does not have any good answers we do not already have. So although not entirely off target here, Gray again misses the mark and demolishes a straw man.

However ineptly handled, Gray does raise a point worthy of consideration. Nietzsche is a subtle guy, and I'm no scholar of his work, but I've read enough to have picked out one theme: to be a "god," in Nietzsche's metaphor, is to create moral truth. Adam and Eve (the mythological characters) become human when they know God's moral truth; modern human beings, jointly and severally, become "gods" when we reject God's authority to set moral truth and create our own. And this is the "terrible" truth of atheism: there is no God to constrain, however indirectly, our individual and social moral choices. The only external constraint (other than physical law) on an individual's moral choice is what other people will compel or forbid. And there is no external constraint on our social moral choices; an uncaring and indifferent universe will not compel or guide us to create a "good" society nor forbid or hinder us from creating a "bad" society. What and who we are, in a moral sense, is entirely in our hands.

To a certain extent, I suspect modern atheists take this truth, that we have become Nietzschean "gods," for granted. We argue for liberalism (or, as in my case, various radicalisms) not because we see these visions of society as externally mandated, but simply because we want, and many people around us say they want, such a society. Unlike Nietzsche, we simply accept the responsibility of creating our own society, our own morality; we are not existentially or psychologically crushed or awed, as Nietzsche perhaps was, by the weight of this responsibility. Given that we know (or perhaps just subconsciously take for granted) our social morality is a choice, liberalism is an easy choice: who wouldn't choose a society that promoted the dignity and well-being of everyone? (And even radicals such as me are fundamentally liberals; we do not disagree on ends, only means.) If society is what we choose, let us choose and make it so.

The realization that there are no external constraints on our moral choices, neither divine nor natural, destroys the twenty-five-century-old fundamental project of philosophy: to discover the external constraints on our moral choices; in essence, to replace moral choice with moral truth. Without this project, philosophy becomes trivial: ontology becomes materialism (or physicalism, to the annoying purists), epistemology becomes the scientific method, aesthetics becomes fashion, and politics becomes pragmatism. It takes no philosophy at all to say we simply do as we please, and without the need to justify assertions of moral truth, we need no metaphysical, unscientific ontology to define moral realism and we need no rococo, unscientific epistemology to support it. All we need to do is face the truth that who we are, as individuals, as societies, and as a race, is no more or less than what we choose.

Liberal religious believers must oppose atheism because we undermine their worldview as thoroughly as we undermine the illiberal religious worldview: it is just as unfounded to ground a liberal ethic in a God as an illiberal one. Philosophers have to oppose atheism because we undermine their worldview as thoroughly as we undermine the religious worldview: it is just as unfounded to ground a liberal worldview in philosophy as it is in God. There is no moral ground. Full stop. There is only choice, which is, by definition, no ground at all.

We atheists are terrified neither by the responsibility nor the license. It is we who must make our world, so we want to make it. We "can" — nothing in the objective reality outside our minds prevents us — do anything; we choose to be kind.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "ethics, philosophy, religion"
Date: Wednesday, 19 Mar 2014 07:30
Persistent annoyance
Occasional commenter Major Nav pretty much illustrates the irritations I complain about in my recent post, On method:
From Major Nav:
You are fooling yourself to believe communism is a good thing or an achievable end.
To support your argument, you "study capitalism" to seek out examples of where you "believe" it is harmful while ignoring the harmful results of all previous attempts at communism. If anyone calls you on it, you just say "That was the old communism, I'm talking about neocommunism."
And you seek out authors and pick through their writings to select out of the context, an idea that is close to your concept and sprinkle them through your writings. Usually as a reference to the obscure article vs a direct quote. As if anyone else has read the article.

If all you have is a hammer, everything looks like a nail. Put down the hammer once in a while, take a step back and chose another viewpoint.

Let's break this down.

You are fooling yourself to believe communism is a good thing or an achievable end.

This is a "criticism" (actually just a complaint) about my conclusions, not my methodology.

To support your argument, you "study capitalism" . . .

This quotation is a blatant insult. I do not "study capitalism" with scare quotes; I actually do study capitalism, at an accredited university with a moderately prestigious economics department, I get excellent grades, and most every professor I have studied under or worked with — all committed capitalists — has offered to write me a letter of recommendation to any graduate school I wish to apply to. (And this ain't chopped liver: the reputation of an undergraduate program depends almost entirely on the performance of its students in graduate school; no professor will recommend a student he or she believes will fail.)

I'm not offended by the insult; the fact that ignorant tools like Major Nav have to depend on insult rather than reasoned argument shows the weakness, vacuity and dogmatism of their own position.

To support your argument, you "study capitalism" to seek out examples of where you "believe" it is harmful . . .

I have no idea why Major Nav puts "believe" in scare quotes; perhaps he is unfamiliar with the ordinary rules and meaning of English punctuation.

I don't need to study capitalism to discover examples of where it actually is harmful; I just need to read the newspaper. I study capitalism to discover why it is harmful, and where and why it is successful.

To support your argument, you "study capitalism" . . . while ignoring the harmful results of all previous attempts at communism. If anyone calls you on it, you just say "That was the old communism, I'm talking about neocommunism."

I understand that Major Nav is creating fictional dialog, but really: I have written (by a rough estimate) a half-million words on the blog, all searchable. Is it too much to ask that Major Nav actually quote me?

And I'm unsure precisely what Major Nav is accusing me of here. Do I ignore the bad effects of communism, or do I recognize them, try to identify the causes, and change my ideology to account for that recognition? Is Major Nav trying to simultaneously accuse me of both dogmatism and rigidity on one hand and opportunism and excessive flexibility on the other?

And you seek out authors and pick through their writings to select out of the context, an idea that is close to your concept and sprinkle them through your writings. Usually as a reference to the obscure article vs a direct quote. As if anyone else has read the article.

What!? I engage with the scholarly literature, and cite and link to my sources? How rude! Can you get any more intellectually dishonest? I hang my head in shame.

If all you have is a hammer, everything looks like a nail. Put down the hammer once in a while, take a step back and chose another viewpoint.

This metaphor makes no sense. What does the "hammer" represent? A hammer is a tool, not a conclusion, and certainly not a viewpoint.

Let me reiterate:

I write about controversial topics. I already know that people disagree with me. I am absolutely uninterested that you personally disagree with me. I'm not particularly interested that people I already know and respect disagree with me or that people with impressive credentials disagree with me; if you're an anonymous, uncredentialed commenter, I care even less.

On the other hand, I'm very interested in specifically where and how you think I'm fooling myself. But remember, fooling myself is a methodological criticism. It is not only useless but also the epitome of dogmatic obtuseness to assert, as does Major Nav, that I must be fooling myself just because I have come to a conclusion you disagree with.

Like any social person, I get irritated when people insult me. But being insulted has absolutely no effect on what I believe or understand; I have never changed my mind because someone insulted me, no matter how well I know the person or how highly I value their good opinion. If your intention is to gratuitously irritate me, go ahead and insult me. You'll get one (maybe two) shots, and then I will, like any rational person, simply refuse to engage with you.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "about the blog"
On method
Date: Tuesday, 18 Mar 2014 05:41
Recently, a couple of comments (here and here) prompt me to talk about method.

I take Feynman very seriously: "The first principle [of scientific integrity] is that you must not fool yourself--and you are the easiest person to fool. So you have to be very careful about that. After you've not fooled yourself, it's easy not to fool other scientists. You just have to be honest in a conventional way after that." I try, indeed I try very hard, to apply this principle in my criticism both of others and of myself.

Feynman's assertion raises two questions. First, what does it mean to fool oneself? Second, can we distinguish between fooling oneself and not fooling oneself, and if so, how?

Feynman makes it clear that fooling oneself is a matter of method. In his view, when someone distorts an argument to support a preconceived idea, he is fooling himself. Note that Feynman does not say that having or investigating a preconceived idea is by itself fooling oneself; fooling occurs when the investigator allows his preconceived idea to distort the argument. So what does it mean to "distort" an argument? Feynman uses subsequent investigations of Millikan's electron charge experiments as an example: the investigators distorted the data to get values closer to Millikan's answer, i.e. their preconceived idea of what the answer "should" be.

If fooling oneself is a matter of method, then it should be possible to tell the difference between fooling oneself and not fooling oneself by looking at the method. Science has a lot of interesting procedures to avoid fooling oneself. The most obvious is the double blind method: neither the subject nor the person collecting the data knows whether the person is being treated or is acting as a control. Since neither the subject nor the investigator knows what the answer "should" be (the preconceived answer is that there is a difference between the measurements of treated and untreated (control) subjects), they cannot bias the measurements. The double blind method seems like an effective technique for removing one kind of fooling oneself.

It is easier to fool oneself in philosophy and non-experimental argumentation, simply because there isn't an easy way like the double-blind technique to remove bias. But there are other ways. Does the author employ or avoid well-understood logical fallacies? (In my more cynical moods, I suspect that the primary project of academic philosophy is to teach students to write so turgidly as to prevent the detection of fallacies.) Does the author critically examine opposing viewpoints? Does she try to represent those opposing viewpoints fairly and honestly? These methods are not as tight and effective as double-blind testing, but they can, in my opinion, do a lot of work detecting and correcting fooling oneself.

Which brings me to the "criticism" of my own work cited above. First, I am always looking to see if and how other thinkers, especially thinkers with whom I disagree, fool themselves or avoid fooling themselves. Which means I am looking not at their conclusions, but at their methods. For example, when I examine Plantinga's Modal Argument, I look not at his conclusion that God exists, but at his method: I argue that he has made a methodological error, a logical fallacy. The question is not whether I am "motivated" to investigate his argument because I disagree with his conclusion, but rather whether my motivation has distorted my criticism, distorted my own argument.

A lot of the criticism of my work in the comments tends to come in three categories. First, people who just insult me. I have enough self esteem that insults from random people on the internet do not cause me the least distress. The only negative effect of this sort of criticism is irritation that I have to waste my time reading and possibly moderating a useless comment. Second, people who just assert their disagreement with my conclusion, which is again a pointless waste of my time (and if the commenter intends to affect my views in any way, a waste of his or her time). I really don't care that people disagree with me. It would be a waste of my time to write about what everyone agrees with; indeed, the more controversial a topic, the more interesting.

What I really do want to know is: am I fooling myself? If you do not believe I care whether or not I'm fooling myself, why even bother to comment? I honestly don't understand it. No one but me reads the comments, especially in older threads. Why even bother to register your dissent? If I want to fool myself, or I don't care about fooling myself and others, and registering dissent actually mattered, I would just delete the comments, which I can do without detection. (And you don't know I haven't, eh? But I leave them all in, unless they simply repeat an earlier point and I want to make clear that I am unwilling to waste any more of my time reading and moderating a commenter.)

I will give nontrivial attention only to criticism that addresses my method, i.e. how I construct my argument. Am I making a logical fallacy? Have I failed to address an important opposing viewpoint or argument? Do I make unexamined or unsupported assumptions? (Note too that this is a blog, not an academic journal; many posts here represent preliminary speculation, not fully-formed arguments.)

By the way, it's really important to cite and summarize opposing views and offer a basic analysis of how the opposing view affects my own argument. If you don't cite, I have no idea what you're talking about; even if I Google a vague reference, for example Edward Feser, I don't know whether what I find is what the commenter is referring to. If you don't summarize or analyze, I have to substitute my own judgment of the opposing viewpoint for my critic's, which just introduces bias. I have a limited amount of time, and if the work of someone such as Feser superficially appears to be worthless (and his work does superficially appear to be worthless), I'm not going to waste my time without some evidence that a deeper analysis is worthwhile.

I'm open to criticism; I really do believe that I can possibly be fooling myself, that I can possibly be ignorant of really good arguments that contradict my position (which is why, for example, as a communist I study capitalist economics). I don't even mind rudeness per se, but if you are rude, especially without direct provocation, your support should probably be stronger than otherwise. But if you're not interested in helping me figure out how I'm fooling myself, don't bother commenting. I take Heinlein to heart: "Never wrestle with a pig. You both get dirty and the pig likes it."
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "about the blog"
Date: Tuesday, 18 Mar 2014 04:32
Our diagram from last time is oversimplified; it's missing government, a financial sector, and foreign trade (i.e., it models a closed economy), as well as any kind of stock of capital, inventory, or money. But it's useful to illustrate a few concepts. The first is national accounting. Our diagram includes all the households and firms in a "nation"; as macroeconomists, we track the flow of money, i.e. how often money crosses the boundary between firms and households. We total up the flows at the end of the year, and those are our national accounts for that year.

But first, a few simplifications: accounting for housing and land rent is really weird; furthermore, economists don't consider land rent to be that economically interesting, since we can't create any more land. Therefore, we usually just ignore land and rent, and focus on labor and capital. Since we ignore land rent, and we're lazy, we often use the letter L for Labor. The "rent" that people pay their landlords is consumption of goods and services, i.e. the physical building, which has to be actually built, and which wears out over time. Similarly, building a house is investment, i.e. production of capital. However, our current model does not have any notion of a stock of capital, so we're just going to ignore housing completely for now, until we improve our model.

Transactions that count as macro flow include:

  1. Alice's household buys $100 of food from Zelda's Groceries (C = $PQ)
  2. Bob's household receives a $100 paycheck from Yarrow's Electronics (FL = $wL)
  3. Carol's household buys a new printing press for $1000 (I = $PQ)
  4. The Daily Press pays $100 rent to Carol's household to use her printing press (FK = $rK)

The equations in parentheses indicate how we account for each transaction. For 1 and 3, on the left, we have C and I: Consumption spending and Investment spending. These refer to spending on the bottom arrow of the diagram. For 2 and 4, we have FL and FK, compensation for the Factors of Production, i.e. Labor and Capital.

Transactions that don't count as macro flow include:

  • Zelda's Groceries buys $100 of carrots from Andy's Farm; we'll count this income when they sell the carrots to consumers
  • Betty's household buys a $100 used car from Yarrow; no new production has occurred; we're just shuffling assets
  • Carl's household borrows $10 from Dana's household; again, we're just shuffling assets around

One interesting thing to note is that on average, the money paid to the factors of production should exactly equal the money spent on consumption and investment. Therefore, we can measure the national economy just by looking at one side or the other. Traditionally, economists look at the household spending side, because that's easier to measure. Thus, we say the nominal national income (Y) equals consumption (C) plus investment (I): Y = C + I.
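As a sketch, here is how the four example transactions above might be tallied into the national accounts. This is toy code of my own (not from the post), using the post's dollar figures:

```python
# Tally each transaction into its national-accounts category:
# C (consumption), I (investment), FL (wages), FK (capital rent).
transactions = [
    ("Alice buys food from Zelda's Groceries",   "C",  100),
    ("Yarrow's Electronics pays Bob's paycheck", "FL", 100),
    ("Carol buys a new printing press",          "I",  1000),
    ("The Daily Press rents Carol's press",      "FK", 100),
]

totals = {"C": 0, "I": 0, "FL": 0, "FK": 0}
for description, category, dollars in transactions:
    totals[category] += dollars

# Nominal national income measured from the spending side: Y = C + I.
Y = totals["C"] + totals["I"]
print(Y)  # 1100
```

Note that these four transactions are isolated illustrations, so the factor side here (FL + FK = $200) does not balance the spending side; in a complete closed economy the two sides would, on average, be equal.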

This equation shows nominal income, i.e. all the variables are denominated in money; economists typically use capital letters to denote nominal values. We're also interested in real values: how much actual stuff is being produced and consumed? If we produced the exact same amount of stuff, but all the prices of stuff doubled, Y (and C and I) would double, but we wouldn't be materially better off. To handle this situation, we look at the price level (P), which is just a weighted average of the prices of individual goods and services. Therefore, real income (y) = nominal income (Y) divided by the price level (P): y = Y/P.
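A quick numerical sketch of the deflation step, with toy numbers of my own (same real output in both years, but all prices double):

```python
# Year 1: nominal income 1100 at price level 1.0.
# Year 2: every price doubles, so nominal income doubles too.
Y1, P1 = 1100.0, 1.0
Y2, P2 = 2200.0, 2.0

y1 = Y1 / P1  # real income, year 1
y2 = Y2 / P2  # real income, year 2
print(y1 == y2)  # True: deflating by P shows we are no better off
```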

Another interesting thing to note is that a coin, a given physical unit of money, will be spent multiple times across the household/firm boundary. In our model, every household and firm spends all its money every sub-period, so if our sub-period is a week, and all households spend their money on Monday (go to the market), and all firms spend their money on Friday (and everyone stays home on Saturday and Sunday), then every coin will cross the household/firm boundary once a week on both arrows. The number of times, on average, a coin crosses the household/firm boundary on the income side (remember, the income side is equal to the factors side) is called the velocity of money (V). Therefore, if we have a given physical quantity of money (M), then the nominal income (Y) equals the total amount of money (M) times the velocity: Y = M*V. Because we like to look at real income (y = Y/P), we throw in the price level (P) to get the fundamental accounting identity, M*V = P*y.
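Plugging in hypothetical figures (mine, not the post's) shows how the identity fits together:

```python
# Hypothetical: a $275 money stock turning over 4 times per year
# at a price level of 1.0.
M = 275.0   # quantity of money
V = 4.0     # velocity: average boundary crossings per coin per year
P = 1.0     # price level

Y = M * V   # nominal income: each dollar is spent V times
y = Y / P   # real income

# The identity M*V = P*y holds by construction.
print(M * V == P * y)  # True
```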
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "Introduction to Macroeconomics"
Date: Monday, 17 Feb 2014 06:53
Thomas Wells is not a [New] Atheist. So what? I go to all the meetings, and I don't recall that we asked him to join. Wells lists a number of complaints against the New Atheists, most of which are specious or have little or nothing to do with our project. Protip: If you're going to criticize a group for holding incorrect views, it is helpful to cite the actual expression of those incorrect views. Just looking at the article, a reader has no idea whether Wells is accurately representing our views or simply pulling straw men out of his ass. And I don't know what to make of someone who publicly expresses his view that publicly expressing our views is somehow disreputable. If religion is not "worthy of rational dissent," and if New Atheists are, as he says, religious, why does he (try to) rationally dissent from our views?

Rather than fisk this terrible post, which verges on libel, let me offer the perspective of a self-identified New Atheist, one who has been a part of the atheist and New Atheist movements since the beginning of the millennium.

First, atheism is not an organization. There are many atheist, humanist, non- and anti-theist organizations, but there is no overarching organization that in any way controls the message or the membership. The only conclusion you can draw about someone who calls him- or herself an atheist is that he or she:
  1. Probably does not believe that any god(s) really exist
  2. Chooses to call him- or herself an atheist
That's really it. Other than that all atheists will probably reject as invalid any statement of the form "God exists, therefore ...," there are no other valid generalizations one can actually make about people who call themselves atheists. And if you don't want to call yourself an atheist, don't. That doesn't offend us in the least.

The New Atheists are only slightly more restrictive. To be a New Atheist, you must:
  1. Call yourself an atheist as above
  2. Publicly criticize religion or endorse the public criticism of religion
  3. Choose to call yourself a New Atheist
Again, no organization, no formal membership, no party line. You do not have to even like the "four horsemen" — Dawkins, Dennett, Harris, Hitchens — nor the host of other prominent writers, e.g. Stengler, Myers, Coyne, Cline, Marcott, Namazie, to be a New Atheist. (Indeed, a lot of us find Harris shallow, Hitchens glib and sexist, and their views objectionably justifying or endorsing neocolonialism.) So again, the only conclusions that one can draw about all New Atheists is that we are willing to criticize the social, cultural, and legal role — outrageous privilege, really — of organizations, institutions, and ideologies that are uncontroversially religious, such as the Catholic church.

And again, if you do not want to publicly criticize religion, nor actively endorse the public criticism of religion, then don't. We will yet again take no offense; we do not insist that everyone join us. We ask only that, if you agree with our project, you stand out of our way.

As far as I know, no New Atheist supports "scientism," in any reasonable sense, including Pigliucci's*:
a totalizing attitude that regards science as the ultimate standard and arbiter of all interesting questions; or alternatively that seeks to expand the very definition and scope of science to encompass all aspects of human knowledge and understanding. (144)
What we do is fundamentally different: we see that many religious people make claims about the real world, and we apply a tool, scientific thinking, broadly conceived, to criticize those claims. We do not say that scientific thinking is the only tool to criticize religion. (Only two of the "four horsemen," Harris and Dawkins, are scientists; Dennett is a professional philosopher and Hitchens was a journalist.) We do, however, say that scientific thinking is one very effective way to criticize religion. The proof is in the pudding: I personally know many formerly religious people who became irreligious precisely because they saw an irreconcilable conflict between religion and scientific thinking.

*Pigliucci's definition of scientism seems eminently reasonable; it is his charge that New Atheists actually embrace this definition of scientism that is unreasonable.

The philosophical idea that science is the only possible form of knowledge depends on the definition of "knowledge." If knowledge is defined as anything we can come to common agreement about, then science is obviously not the only form of knowledge; we do not need to make a single observation to all agree that according to Peano Arithmetic or Set Theory, two plus two equals four. If we define knowledge as true statements about the real world, scientific thinking, broadly defined, seems the only way at present to gain knowledge. But that's a practical observation, not a philosophical position. Science works to understand the real world, and we use it; if you find something else that actually works, by all means, send me the link.

Although I reject scientism, I must admit: yes, we New Atheists talk about God a lot, and we have negative beliefs about God. That's part and parcel of criticizing religion. In much the same sense, Democrats talk a lot about Republicans (and vice-versa), and communists talk a lot about capitalism (and vice-versa), and every good persuasive college essay addresses opposing views.

But yes, like almost everyone who makes a non-trivial statement, the New Atheists seek to persuade. If persuasion is itself objectionable, why pick on us? And why seek to persuade us to shut up? Persuasion is a completely normal human activity, and is not intrinsically religious.

We have a project: to erase the social, cultural, ethical, and legal privileges that individuals, organizations, institutions, and ideologies claim because they claim private knowledge about what God wants. If you do not believe that such institutions etc. actually exist, you are a fool. If you are uninterested in whether or not they should have privilege, then you're free to ignore both religion and New Atheism, just as I am free to ignore, and choose to ignore, literary criticism of Medieval French poetry. If you oppose religious institutions but think some New Atheists are doing it "wrong," then go out and do it "right" as you see fit (and please don't have the naked hypocrisy of accusing us of dogmatism for failing to follow your party line). If you think some specific criticism of religion is mistaken, then by all means cite, quote, and summarize the original work, and send me the link: if you're right, I'll change my views; if you're wrong, you should welcome the correction, n'est-ce pas?

But if you want to criticize the New Atheists for positions we manifestly do not hold, you are lying or negligently repeating a lie. And if you want to criticize the New Atheist — and only the New Atheists — for speaking publicly with the intention of persuasion, without criticizing literally everyone else on the planet who has, you know, an actual opinion, then you are trying to defend religion. And you are defending religion dishonestly: if you want to defend religion, then defend it directly.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "egregious stupidity, religion"
Date: Sunday, 16 Feb 2014 14:54
In New Atheism and the Scientistic Turn in the Atheism Movement, philosopher Massimo Pigliucci raises (but fails to dispose of, or even really argue) an interesting question: How far can we stretch the semantic boundaries of the word "science" without doing violence to the plain meaning of the word or rendering the word so broad as to be effectively meaningless? Pigliucci defines "scientism" as
a totalizing attitude that regards science as the ultimate standard and arbiter of all interesting questions; or alternatively that seeks to expand the very definition and scope of science to encompass all aspects of human knowledge and understanding. (144)
The first part of the definition, the "totalizing attitude," seems vacuous and disconnected from reality (and Pigliucci spends almost no effort on it), but the second part, expanding the definition to "all aspects of human knowledge and understanding," is a bit more interesting. Pigliucci makes his strongest argument when he cites Sam Harris's book, The Moral Landscape: How Science Can Determine Human Values. According to Pigliucci, Harris wants to apply the word "science" to any rational, empirical inquiry into facts. Pigliucci cites Harris's own words, which are actually somewhat milder: Harris does not want to draw a "hard distinction between 'science' and other intellectual contexts in which we discuss 'facts.'" Pigliucci argues that "facts" are too varied even to place science, as Harris does, on a continuum with other inquiries into facts (149). According to Pigliucci, the best definition of science is what professional scientists do: a collection of activities having common threads, including systematic observations and experiments, hypothesis testing, general theories about the world, peer review, and public and private funding (151). Pigliucci argues that if we "expand the definition of science to pretty much encompassing anything that deals with 'facts,' loosely conceived . . . the concept of science loses meaning" (151). Pigliucci argues throughout that using science to address philosophical inquiry — e.g. inquiry into religion and moral inquiry — is wholly inappropriate; philosophical inquiry is itself a rational inquiry into the facts, but it is so dissimilar to science that they are completely distinct categories. To conflate the two categories harms both philosophy and science.
Notwithstanding Pigliucci's failure to cite anyone actually using his extreme broadening of "science" to mean anything having to do with facts (making this assertion a trivial straw man), his argument fails because he equivocates on the word "fact," he uses a too-narrow definition of science, and he commits the fallacy of the excluded middle. Repairing these errors allows a legitimate broadening of the word science, one which can address the issues that Pigliucci believes are beyond the boundaries of science, even broadly conceived.

(I do not address the thesis that Pigliucci actually argues in his paper, which seems to catalog a series of philosophical and scientific errors made by prominent New Atheists. I am instead trying to address a thesis that Pigliucci argues tangentially and perhaps only implicitly.)

Although Pigliucci takes a proper philosophical attitude towards the word "science," in that he argues for a definite connotation regardless of usage, he takes a lexicographer's attitude toward the word "fact." Pigliucci argues that the word "fact" connotes "too heterogeneous a category" for science to encompass. Pigliucci asserts a broad definition of "facts," which includes all statements that one cannot successfully deny; he asserts, for example, that one cannot deny that the interior angles of a triangle (on a plane) add up to 180° (150). But this argument can be read as simply the tendency of speakers of natural languages to apply the same word to different categories. Pigliucci's example is telling: Euclidean geometry is not a fact even in the loosest empirical sense of a fact as a true statement about the world. Instead, Euclidean geometry is a mathematical formalism; to determine whether or not Euclidean geometry accurately describes the real world, we need to actually observe and measure angles. And we find that often Euclidean geometry does not accurately describe the world, as when we draw triangles on a sphere or on the Riemann surfaces near a large mass. We can take the amorphous mass of meanings that constitute the lexicographical content of "fact" and easily divide them into distinct* categories: common observation, deductive certainty, settled scientific theories, social totems, and confident assertions. There is no need to hold that broadening the definition of "science" requires that the broader definition include every lexicographical denotation of "fact."

*more-or-less distinct, letting ordinary people use the settled cases and letting philosophers investigate the edge cases.
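As an aside, the spherical counterexample is easy to verify numerically. A quick sketch (my illustration, not Pigliucci's): the "octant" triangle on a unit sphere, with vertices where the x, y, and z axes pierce the sphere, has three right angles, so its angles sum to 270° rather than the Euclidean 180°, and Girard's theorem ties the excess to the triangle's area.

```python
import math

# Octant triangle on the unit sphere: each of its three angles is a right angle.
angles = [math.pi / 2] * 3
angle_sum = sum(angles)                  # 3*pi/2 radians
print(round(math.degrees(angle_sum)))   # 270, not the Euclidean 180

# Girard's theorem: on a unit sphere, area = angle sum - pi.
area = angle_sum - math.pi               # pi/2
octant_area = 4 * math.pi / 8            # one eighth of the sphere's surface
print(math.isclose(area, octant_area))   # True
```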

Unlike with "fact," Pigliucci tries to construct a philosophical definition of science, but his definition (which he introduces without argument) is simultaneously too broad and too narrow. It is too broad in the sense that by noting primarily the sociological and institutional character of science, Pigliucci fails to distinguish between institutional science and pseudoscience. Whether or not he was successful, Karl Popper tackles this distinction in The Logic of Scientific Discovery and Conjectures and Refutations. Popper tries to differentiate endeavors that look legitimately scientific, e.g. physics, astronomy, evolutionary biology, from endeavors that look pseudoscientific, e.g. psychoanalysis, astrology, Marxist history, even though all of these endeavors share (or could easily share) not only all of the institutional characteristics that Pigliucci lists, but also reliance on observation and experiment. In Popper's view, one indispensable distinction is that real science is falsifiable and pseudoscience is not; this distinction is not institutional but philosophical, not a matter of somehow testing hypotheses, but a specific method of testing hypotheses.

Pigliucci's definition is too narrow in that we can easily conceive of science being done without many of the institutional characteristics he lists. How general must a theory be to be "scientific"? Is, for example, forensic science really a science? Forensic science seeks to discover what actually happened at a particular point in time, almost the exact opposite of the construction of a general theory about the world. If forensic science is not a science, what is it? Do we need systematic peer review — in something other than the trivial, over-broad sense that all communication is received and modified by listeners — for an endeavor to be scientific? Must we have public or private funding, again in other than the trivial sense that everything is in some sense economic? For much of its history, science was self-financed, pursued by people with their own income from other sources. Pigliucci's definition of "science" is as absurd as defining "dining" as something done in a restaurant using food, which would include eating at McDonald's and exclude my friend, an excellent amateur cook, preparing dinner at home.

Finally, Pigliucci's entire objection to scientism is an exercise in the fallacy of the excluded middle. In Pigliucci's view, the only alternative to his rigid, narrow (and absurd) definition of "science" is an anything-goes descent into linguistic anarchy. It is astonishing, and should be incomprehensible, that a professional philosopher would commit such an elementary fallacy. When it suits him, Pigliucci is perfectly comfortable with words having broad, heterogeneous meanings; he does not, for example, see his broad definition of "fact" as rendering the word meaningless. Even the over-broad, uncited definition of "science" he objects to — anything having to do with "facts" — is still meaningful, so long as "facts" is meaningful. We can have words with broad meanings that are still meaningful; Pigliucci, however, sees no middle ground — at least regarding "science" if not "facts" — where there manifestly can be, and probably is.

Pigliucci's article does point us towards a more useful definition of "science." We reject the idea that there is an "objectively" correct definition of science about which everyone in the world might be mistaken; there are only more or less useful definitions. We therefore discard the institutional components of Pigliucci's definition as accidental, required only for performing certain kinds of scientific investigations in a particular social and cultural context, and we retain the philosophical components. First, we can talk scientifically only about the real world*. Second, we construct theories**, collections of logically connected statements, about the real world. Third, these theories must be falsifiable: it must be logically possible that there are determinable facts that would disprove a theory. Fourth, we accept common observation as the factual basis for attempting to falsify a theory; for a theory to be falsifiable by observation, the theory must logically entail that some observation is impossible. Finally, we invoke parsimony: we discard as unnecessary any part of a theory that does not change the entailed observations. Like any other definition in natural language, this definition still leaves edge cases (is history a science?) but it seems to carve out a core of unambiguously scientific discourse and unambiguously unscientific and pseudo-scientific discourse. The question is: is it useful?

*thus eliminating talk about Middle Earth or Erewhon as scientific. We can, of course, talk scientifically about the texts and authors of The Lord of the Rings and Erewhon, which are real.

**both general and specific theories, which allows us to talk scientifically both about the law of gravity, a general theory, as well as specific theories such as whether O.J. Simpson murdered Nicole Simpson and Ron Goldman, and why Supernova 1987A has a visible ring.

This definition seems to exclude a lot of religious thought as either unscientific or scientifically false. In The God Delusion, Richard Dawkins proposes the "God Hypothesis." Dawkins asks: what happens when we try to construct religious thought as science, broadly conceived? Applying the criteria: first, we hypothesize that God is real, with real properties. Second, we make a logically connected theory that includes God and His properties. Third, we make this theory falsifiable: it entails logically possible facts which would disprove the theory. Fourth, we demand commonly observable facts that would disprove the theory. If we do so, then we find that a real God must have properties entirely different from the properties we normally ascribe to persons; a theory of God compatible with the commonly observable facts requires a God who is, unlike ordinary human persons, entirely mechanical and sphexish. Reject any of the criteria, and you concede the argument by contradiction, absurdity, or vacuity. If God is not real, you're already an atheist. If you cannot make a logically connected theory, you are just babbling. If your theory cannot be falsified, then there's no way of telling whether it's true or false. If your theory is not falsifiable by commonly observable facts, you are unjustifiably claiming private knowledge. And if your theory is observationally identical to a universe with no personal God, then you're again already an atheist; a God who makes no difference is no God at all. The only remaining question is whether some people would find this analysis useful, and I know many people who, applying this analysis, have abandoned their religion. We also have any number of authors, notably Josh McDowell, author of Evidence that Demands a Verdict, who at least attempt to make a scientific case, in the broad sense noted here, for the existence of God.

Does this definition include or exclude anything obviously objectionable? We seem to admit lawyering, but lawyers are not obviously unscientific. This definition excludes pure mathematics (even if a lot of mathematicians are Platonists), but I suspect most mathematicians would not object to being placed outside the boundaries of science. This definition definitely excludes philosophy; I do not know, however, whether Pigliucci would be encouraged or enraged by such exclusion.

Finally, the question remains: does this definition of science "encompass all aspects of human knowledge and understanding"? It certainly does not encompass all aspects of human understanding (unless the definition of "understanding" is so broad as to render the term meaningless). As noted above, it does not include mathematics, literature, or even philosophy, which are uncontroversially parts of human understanding. Perhaps, however, it does encompass all knowledge; it is perhaps the case that anything that legitimately deserves the name "knowledge" really must be scientific, in the sense described above. But I need not answer this question to dispose of Pigliucci's case; it is enough to find that this broad definition of science is useful and largely unproblematic.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "atheist politics, bad philosophy, philos..."
Date: Sunday, 16 Feb 2014 07:47
Before capitalism, the dominant relation of production was the feudal system.

In a feudal system, households [shown above as circles with H] are largely self-sufficient. They grow and eat their own food, manufacture and wear their own clothing, build, maintain and occupy their own houses and barns, etc. There's some specialization — carpenters, blacksmiths, tinkers, doctors, priests, etc. — some inter-household trade, and some international trade, but by and large small groups of people, households and villages, are economically self-sufficient, and households produce surpluses of food.

Any time you have surpluses (wealth), you have mean people with weapons trying to take that surplus by force. The feudal nobility existed (or so they claimed) to protect households from bad guys who wanted to come and steal their stuff. To protect households, you need men with heavy armor and weapons, and big strong horses. Knights owned land, and collected part of the surplus — rent — to feed people who would create and maintain the armor, and feed the horses. You also need someone to coordinate all these knights and command them in battle. So you have feudal lords (barons, dukes, etc.) and all their advisors, teachers, servants, etc., all of whom need to be fed, so the feudal lords also collected part of the surplus.

The most important point (from an economist's perspective) is that, absent international trade, a feudal economy doesn't need money. During the feudal period, landowners who collected rent in money did poorly; those who collected rent in kind — specific amounts of grain, meat, wool, etc. — did much better. But just looking at the diagram, everything flows one way; there are no complex interactions to manage. You just send a wagon to each household, demand a load of grain and some sheep and cows, and drive the wagons to the knight's house. Easy.

Self-sufficiency is nice, but it's economically limiting. We can produce more (than an agricultural economy can) if people specialize. More importantly, the invention of steam engine technology in the late 18th century (James Watt patented his improved steam engine in 1769, with rotary-motion versions following in the 1780s) made specialization and mass production economically possible. By the middle of the 19th century, we had a new relation of production: market capitalism.

(The transition from feudalism to industrial capitalism takes about 200 years, with the intermediate stage of mercantilism. Sadly, I don't have the time to delve into the interesting, complicated details.)

Instead of self-sufficient households, we have specialized firms (businesses). Firms obtain land (L), labor (N), and capital (K) from households; each firm produces a specialized good or service, and all these goods and services are distributed to households.

This arrangement produces a problem: the problem of distribution. How do we know how much land, labor, and capital to distribute to each firm? How do we know how many goods and services to distribute to each household? And remember, no one is really "planning" how to change the relations of production; all of this stuff is evolving from individuals making individual decisions. What emerges from all of these individuals is the market system, which uses money to allocate land, labor, and capital to firms, and goods and services to households.

In this model, some households supply labor (N) to the market and receive money wages ($w), other households supply capital (K), i.e. machinery, tools, buildings, etc., to the market and receive rent ($r), and other households (not shown) supply land (L) and receive rent ($t). Households then obtain goods and services (G&S) from the market, and pay a price ($P) times the quantity of goods and services they obtain (Q). Firms obtain land, labor, and capital (L, N, K) from the market and pay wages and rent ($w and $r). They use those resources to create goods and services, which they supply to the market at a specific price and quantity. In this model, the real economy and the monetary economy flow in opposite directions. You have to have money, because the market is a feedback system, and a feedback system requires something that is fed back. When we add money to our earlier diagram, we get the basic circular flow diagram:

To sum up: Feudalism is a (mostly) one-way system: households grow food, eat some, and send the surplus up the hierarchy, where it is eventually eaten. Capitalism is a circulating system: land, labor, and capital circulate to firms, and goods and services circulate to households; in the opposite direction, money circulates to households in the form of wages and rent, and then circulates to firms in the form of purchases. Understanding this circular flow is the essence of macroeconomics.
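The circular flow can be sketched as a toy bookkeeping exercise. In this minimal sketch the numbers are illustrative assumptions of mine, not figures from the post, and the model omits saving, government, and foreign trade; the point is only that the money households receive for land, labor, and capital is the same money firms receive back as revenue:

```python
# Firms pay households for land, labor, and capital...
wages = 600.0         # $w, paid for labor (N)
rent_capital = 250.0  # $r, paid for capital (K)
rent_land = 150.0     # $t, paid for land (L)
household_income = wages + rent_capital + rent_land

# ...and households spend that income on goods and services,
# returning the money to firms as revenue ($P times Q).
price, quantity = 10.0, 100.0
household_spending = price * quantity

# In the simple model with no leakages, the two flows must balance:
assert household_income == household_spending
print(household_income)  # 1000.0
```

This balance condition — income equals expenditure in the aggregate — is the accounting identity underlying the circular flow diagram.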
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "economics, Introduction to Macroeconomic..."
Date: Saturday, 15 Feb 2014 04:20
In Is Atheism Irrational? (mentioned by 3quarksdaily, who appear to have a real beef against atheism), Gary Gutting interviews Christian philosopher Alvin Plantinga on the philosophical status of atheism. Plantinga argues for the irrationality of atheism, which he defines as "the belief that there is no such person as the God of the theistic religions." His critiques of atheism, however, are at best weak and sometimes fallacious.

Plantinga first asserts that most philosophers, even believers, reject the traditional theistic arguments for the existence of God as unsound. Plantinga, however, claims that the mere failure of these arguments does not provide sufficient grounds for atheism. Gutting mentions Plantinga's argument that the question of theism is similar to the question of whether there are an even or odd number of stars in the universe. Plantinga believes this analogy is better than Russell's Teapot because we have "a great deal of evidence against teapotism," albeit indirect, whereas we have no evidence for or against the evenness of the number of stars. Hence, the proper response is agnosticism rather than atheism. Plantinga, however, fails on a number of counts here.

First, he misrepresents Russell's argument. Russell adds to his argument, "But if I were to go on to say that, since my assertion cannot be disproved [by direct observation], it is intolerable presumption on the part of human reason to doubt it, I should rightly be thought to be talking nonsense." Russell is not talking about an abstract question of epistemology, but addressing the assertion of social and cultural privilege on the basis of unfalsifiable propositions.

Second, the even-number-of-stars (ENS) analogy fails to capture the scope and character of religious belief. ENS is a very simple, prosaic, even trivial proposition that has no implications at all for the character of the universe; there is no indirect evidence we could use to adduce any probability other than 50%. Religion is neither so simple nor so prosaic. Does Plantinga really want to claim that religion is as simple, prosaic, and trivial as ENS, without any conceivable indirect consequences?

Finally, the strict distinction Plantinga makes between agnosticism and atheism is a straw man. As Russell elaborates, "I ought to call myself an agnostic; but, for all practical purposes, I am an atheist. I do not think the existence of the Christian God any more probable than the existence of the Gods of Olympus or Valhalla." Russell, like many other atheists, considers atheism not a philosophical position but a practical one. Atheism need only be philosophical agnosticism applied to positive claims made without evidence.

Plantinga goes on to say,
I don’t think arguments are needed for rational belief in God. In this regard belief in God is like belief in other minds, or belief in the past. Belief in God is grounded in experience, or in the sensus divinitatis, John Calvin’s term for an inborn inclination to form beliefs about God in a wide variety of circumstances.
Let's try to take this statement apart. Plantinga sharply distinguishes arguments on the one hand, and experience or sensation on the other hand. According to Plantinga, belief in God rests on the same foundation as "belief in other minds, or belief in the past," none of which require argument to believe.

Plantinga seems to be using "argument" in a more restricted sense than the common definition as "a connected series of statements intended to establish a proposition." We can easily create arguments based on experience; science is nothing but arguments from experience. Plantinga seems also to distinguish direct sensory experience ("sensus divinitatis") from "experience," which is presumably indirect. But drawing an indirect conclusion from experience requires argument. Belief in other minds and belief in the past both require argument, based on experience, to rationally establish. Even establishing the existence of a sensus divinitatis (direct sensory apprehension of God) requires an argument to establish that it is indeed a sense rather than a purely internal mind (brain) phenomenon. Plantinga is just speaking nonsense here, presumably to insulate his position against any kind of argument.

Although Plantinga denies that arguments are necessary for belief in God, he offers a few anyway. (Why? I don't need to argue that we can all see the tree in my back yard; if you are in doubt, come and look.) He starts with the Fine Tuning argument: "[T]he universe seems to be fine-tuned, not just for life, but for intelligent life. This fine-tuning is vastly more likely given theism than given atheism." Plantinga does not seem to understand that at best, the Fine Tuning argument is controversial, and probably proves the opposite: given fine tuning, atheism (naturalism) is vastly more probable than theism, because a God could create intelligent life given any physics, whereas life can exist naturally only in a fine-tuned universe.
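The probabilistic point can be made concrete with a toy Bayes-factor calculation. The numbers below are illustrative assumptions of mine, not anything in the interview: under naturalism, intelligent life requires a fine-tuned physics, so given that we exist, observing fine tuning is certain; under theism, a God could create life under any physics, so fine tuning is just one possibility among many.

```python
# Likelihoods for the observation "the physics we find is fine-tuned,"
# conditioned on our own existence (illustrative numbers, not data):
p_finetuned_given_naturalism = 1.0   # naturally-existing life forces fine tuning
p_finetuned_given_theism = 0.01      # a God doesn't need fine-tuned physics

# The Bayes factor: how strongly the observation favors naturalism.
bayes_factor = p_finetuned_given_naturalism / p_finetuned_given_theism
print(bayes_factor)  # ~100: observing fine tuning favors naturalism
```

Whatever the exact numbers, as long as theism makes fine tuning optional while naturalism makes it mandatory, the ratio exceeds one, and the observation counts against theism, not for it.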

Gutting simply accepts the strength of the Fine Tuning argument, but implies that a fine-tuning God "fall[s] far short of . . . an all-perfect God." In response, Plantinga defends Christianity with an obvious fallacy of the excluded middle: "[Y]our question makes sense only if the best possible worlds contain no sin and suffering." The grammatical error aside ("best" is a superlative; there can be only one best possible world), the argument against an all-perfect God requires us only to say that this exact real world is not the best; while it's possible arguendo that the best possible world contains some sin and suffering, it could contain less sin and suffering than this one. An all-perfect God would create only the best possible world, if He created any world at all. If this is not the best possible world, then an all-perfect God either does not exist or the concept is vacuous.

Plantinga has an odd notion of what a good world is:
The first being of the universe, perfect in goodness, power and knowledge, creates free creatures. These free creatures turn their backs on him, rebel against him and get involved in sin and evil. Rather than treat them as some ancient potentate might — e.g., having them boiled in oil — God responds by sending his son into the world to suffer and die so that human beings might once more be in a right relationship to God. God himself undergoes the enormous suffering involved in seeing his son mocked, ridiculed, beaten and crucified. And all this for the sake of these sinful creatures.

I’d say a world in which this story is true would be a truly magnificent possible world. It would be so good that no world could be appreciably better.
Really. If this is the best possible world, I don't understand anything at all about good.

It's hard to see Plantinga as saying anything but that a world with the Holocaust, millennia of war, human-induced famine, genocide, rape, murder, torture is not only good but best; if those horrors had not happened, the world would have been worse, because God would not have suffered. At least Plantinga is clear: this is the sort of world that Christianity offers. I want none of it.

Gutting notes that atheists justify their disbelief by the assertion that God does not have explanatory power. Plantinga deflects this challenge by noting that lack of explanatory power does not justify disbelief. Plantinga argues that we are not justified in disbelieving in the existence of the Moon, for example, merely because it is no longer considered a good explanation for lunacy. But Plantinga seems to ignore that the existence of the Moon still retains considerable power to explain why we see a big round object in the sky that seems to correlate with the tides.

Plantinga then goes on to argue two ways in which God does have explanatory power. The first is religious experience. This argument would have considerably more force if religious believers all believed in the same God, rather than gods that always resemble their own cultures and justify their idiosyncratic social norms and political power structures.

Plantinga repeats Thomas Nagel's ad hominem speculation that atheists disbelieve in God because we don't want there to be a God. Plantinga asserts that theism poses a "serious limitation of autonomy." Plantinga is completely contradicting himself here; above, he argues that this is the best of all possible worlds because we have the maximum amount of autonomy, sufficient autonomy to inflict the worst sort of harm, suffering and pain on other human beings. Precisely what limitations on autonomy does theism impose? It is true, however, that atheists object to the limitations on our autonomy imposed by human beings who claim to be speaking for God, but I cannot imagine objecting to any limitations on my autonomy that a truly just and loving God would impose, any more than I object to the (rational and justifiable) limitations on my autonomy imposed by the state.

Gutting asserts (without mentioning, much less citing, any source) that materialism is a "primary motive" for atheism. I have no idea where he gets this from; having read a considerable amount of atheist philosophy, I have never seen this argument; it cannot possibly be "primary." I don't even know what Gutting means by "materialism"; if he means the 19th-century idea that nothing exists but atoms in motion, materialism has long since been debunked by science. Gutting here offers Plantinga an opportunity to expound on one of his favorite theories, that evolution is incompatible with materialism. Plantinga declares that he cannot give a full account of his argument in the article (or anywhere else; the full account is only in his book, which I might look at later) but his summary is so full of holes that I can't imagine even the full treatment filling them.

Briefly, Plantinga argues that the content of a belief is distinct from its neurological properties, and that content has no causal effect. Two different neurological beliefs could have identical causal effects but different content. Because content has no causal effect, evolution cannot select for content. Therefore, the content of our belief in evolution was not selected for, therefore we have no reason to believe the content of our belief in evolution is true.

This argument, however, has a number of flaws, not least that no one is really a "materialist." What, precisely, does Plantinga mean by the content of a belief? If the content of a belief has no causal effect, then a naturalist would claim that by definition we cannot determine what the content of a belief is, much less determine whether it is true. Indeed, the naturalist would say that "existence" cannot properly be applied to anything with no causal effect: the existence of a thing with no causal effect is indistinguishable from its nonexistence.

But perhaps "content" is epiphenomenal or emergent, in the same sense that the roundness of an inflated basketball is emergent or epiphenomenal: it is not the roundness per se that causes a basketball to bounce regularly, it is the atoms in the basketball that causes it to bounce regularly. However, the roundness of a basketball is fully determined by the arrangement of the basketball's atoms; change their arrangement to something not round, like an (American) football, and the ball will no longer bounce regularly. Thus, if the neurological properties of a belief determine the content of a belief (e.g. we cannot say that two identical neurological beliefs have different content), and evolution can select on the causal properties of a belief, then evolution does affect the content of belief, albeit indirectly. So Plantinga's argument simply fails.

I've been following Plantinga's evolutionary argument for many years, and he has not, to my knowledge, substantively altered his summary to address any criticism; the summary he offers to Gutting is substantively identical to summaries I read years ago. This is the best sophisticated philosophy that the theists have, and it's nothing but a mass of sloppy reasoning that an economist can demolish in a post.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "bad philosophy, philosophy, religion"
Date: Sunday, 09 Feb 2014 06:37
New Atheism is a political movement. It is about changing our ethical norms, cultural values, and, to the extent of strongly defending the separation of church and state, the behavior of governments. We have some ancillary goals, but primarily, we want to create a society where it is completely ridiculous for anyone to put his (or occasionally her) collar on backwards and tell us what an invisible man in the sky wants us to do with our money and our reproductive apparatus. We are unashamedly attacking a collection of institutions that do precisely that: institutions that assert there really is a God, and He really does want only one penis to go in only one vagina (and don't you dare do it just for fun), He wants us to kill people who have a slightly incorrect opinion about Him, He wants a woman's uterus to be the property of the State, He wants his special guys to get away with the physical, mental, emotional, and sexual abuse of children, and He needs money. The New Atheists see these institutions, and we want them gone; our argument is that it is ridiculous to suggest that God wants us to do anything, because there is no God to want anything.

The usual straw men and obviously fallacious arguments aside, there are three broad criticisms of New Atheism. The most obvious is that there are religious people who are not racist, sexist, homophobic, nor prudish, who do not suborn child abuse, who do not want to murder anyone, and who do not want to fleece the rubes. No one has ever argued that religion automatically turns everyone into the worst sort of bigot and criminal. Furthermore, there is no small number of atheists who really are bigots, criminals, or just plain mean and selfish. Again granted. No one has ever argued that atheism automatically turns everyone into the best sort of generous, kind, and virtuous citizen. The New Atheist argument is much simpler. There is no way at all, despite millennia of trying, to consistently discover what God wants (which is a big part of why we don't believe any such being exists). Once anyone says that he (or occasionally she) knows what God wants, even if what he says sounds good, he has no basis at all to argue against someone who says God wants something different. There is no way of telling. Furthermore, if God wants exactly what we want, then there's no reason to invoke Him: we can do things we want just because we ourselves want to do them. Indeed, Bob Avakian makes this point forcefully in Away with All Gods: scratch a "liberal" (American) Christian, especially their leadership, and you will often discover God-justified sexism or homophobia. We're pleased that not every Christian is Fred Phelps, but we say that the liberal Christians are using the same arguments that Phelps uses, and we cannot tell who is correct about God. Arguing that God wants us to do good stuff reinforces arguments that God wants us to do bad stuff because it assumes that it is important for us to find out what God wants.
The New Atheists go straight to the root: there is no God, there is no way of discovering what God wants us to do, so all the arguments, even the ones that assert that God wants us to do good stuff, fail. Want what you want, but do it on your own nickel, not God's.

The second argument is that we ignore "sophisticated" theology and philosophy. But New/Gnu/Undergraduate Atheism is not a philosophical movement; we are not out to solve the philosophical problem of God. Millennia of theological and philosophical speculation have failed to move even an inch forward on saying anything consistent about God; it's time to move on. Even "sophisticated" theologians and accommodationist philosophers admit that the idea of a personal God — the kind of personal God that billions of people believe in — is untenable. But the kind of "god" the "sophisticated" theologians describe is at best the weakest of tea, at worst vacuous nonsense. The ground of all being? Seriously? More importantly, nobody but sophisticated theologians care about sophisticated theology: the actual content of sophisticated theology has no political, cultural, or social relevance whatsoever. As far as we New Atheists are concerned, the philosophical issue is solved: there are no gods worth talking about. We've seen the arguments for sophisticated theology, and, frankly, we think they're as silly and irrelevant as arguments over how many angels can dance on the head of a pin. Some people might want to work out their existential angst through metaphysical or theological speculation, but most New Atheists don't want to waste our time. We are busy attacking the real foundations of real institutions that do real harm.

I have to ask, however: If the billions of believers in a personal God — the kind of God the New Atheists attack — really have God so completely wrong, why criticize prominent New Atheists, especially their favorite target, Richard Dawkins? Do we not have a common enemy, people who have got God so completely wrong? Dawkins and most New Atheists have explicitly declared that we're not particularly interested in "sophisticated" theology; why not go after FEMA, or NASA, or Greenpeace, who are equally uninterested? There can be only three reasons. The first is that the assertion that there is no personal God somehow undermines the faith of those who believe in some kind of "sophisticated" metaphysical, ground-of-all-being, cosmic-purpose God that nobody but they themselves care about. Maybe so, but a faith undermined by simply being ignored must be fragile indeed. The second is that they themselves want to assert religious moral authority. If so, they are spectacularly ineffective. The third is that this metaphysical god actually supports the moral authority of the religious institutions the New Atheists struggle against. By failing to challenge metaphysical theology, the moral authority of religious institutions — with whom "sophisticated" theologians supposedly profoundly disagree — remains strong. These complaints seem either childish or mendacious.

The third argument is that religion, however false, is an indispensable social construct. The argument cannot be that people and institutions that call themselves religious sometimes do good. Every human social construct that lasts more than a generation or so does some good; no one would argue, for example, that we should have perpetuated antebellum slavery because otherwise all the black people in the South would have been out of work and starved. The argument has to be that we must tolerate the falsity of religion, its misogyny, racism, murderous intolerance of dissent, and parasitism, in order to obtain the benefits it offers. This is, I think, a very tough case to make. The argument that surgically removing religion from society, changing nothing else, would leave some unmet needs seems obviously specious. We cannot surgically remove religion, and there is plenty of time to meet or even eliminate the needs, such as comfort for the poorest, that religion presently provides. We don't say that religion is all bad; we say that we can do the good that religion does, and do it much better.

The New Atheists are struggling against institutions that we believe are bad, in any sane, civilized, and reasonable sense of bad. We are struggling against institutions that support and defend murder, misogyny, sexism, racism, absurd sexual prudishness, parasitism, oppression, and exploitation. If people disagree with us, if they think these institutions are doing good, that we ought to murder people who disagree with us, that we ought to subordinate women, that we ought to subordinate people of different races, that we ought to restrict our sexuality to a very narrow band, that we ought to do nothing for those most abused and oppressed by society but give them a comforting delusion, and that we ought to pay real money for a bunch of men to tell us that that is what God wants, then by all means oppose us, and stand between us and the churches. If, however, people agree with us, that these institutions are a blight on our society, then at the very least, stand out of our way, or, better yet, join the fray on the right side.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "religion"
Date: Monday, 03 Feb 2014 05:04
Commenter HH disagrees with my condemnation of Nancy McDermott's essay, The Triumph of the Maternalists:
I would have to disagree with your evaluation of that article. Some of it is a bit of a stretch, but the general charge seems reasonably solid.

The way in which sexual assault is handled now on some US campuses is just ridiculous. It seems a trading away of the rights of the accused because more convictions are desired.

HH is, I think, mistaken in three ways.

First, granting arguendo that some US campuses handle sexual assault in a ridiculous manner, such a premise does not prove the author's thesis that we are somehow destroying autonomy, rationality, or Enlightenment values. People and institutions do stupid shit all the time without destroying civilization as we know it.

Second, I don't know that any US campus actually does handle sexual assault in a ridiculous manner. I'm not saying I know they don't, but I've never seen a news report of anyone unjustly accused or convicted of sexual assault on a college campus. (Note that unjustly convicted is different from wrongly convicted.) I have, however, seen a fair number of news reports where rather obvious sexual assault was un- or under-prosecuted, and the victims harassed and bullied for reporting the assault. I could be swayed by evidence here, but I haven't yet seen it.

The most important mistake, however, is a fundamental misunderstanding of what colleges and universities are. I touched on this point in my previous commentary, and I want to expand it now.

A university is not a commons; it is a professional environment. Every professional environment has norms and standards that go beyond universal legal norms. McDermott highlights one such norm:
Maternalistic authoritarianism has so imbued the culture of American college campuses that students accused of sexual misconduct are routinely deprived of their rights, considered guilty until proven innocent, deprived of representation and not permitted to give evidence on their own behalf. The Office of Civil Rights, the US government’s oversight body governing Title IX (the provision that banned discrimination on the basis of sex), recently praised the University of Montana’s definition of sexual harassment, which was so broad it included verbal conduct regardless of the intention of the speaker or whether anyone was offended. The Office of Civil Rights called this a ‘blueprint’ for other policies around the country.
As I said before, I've seen no evidence that students are ever, much less routinely, deprived of their rights. (What makes McDermott's article atrocious is that she presents a controversial position as established fact.)

McDermott juxtaposes her charge of deprivation of rights with a mention of the University of Montana's definition of sexual harassment and the Office of Civil Rights' approval. But the university's definition has nothing to do with any procedural rights mentioned in the preceding sentence. More importantly, the university's definition is completely appropriate for a professional environment. People in a professional environment assume obligations to the other members that go far beyond the negative legal limitations and procedural requirements applied to autonomous citizens in the commons. When people join a professional organization, they must necessarily surrender part of their individual autonomy: they must work toward not their own personal goals, but toward the goal of the organization or institution.

I can write the most vile and hateful opinions on this blog, and the United States and my own state will not lift a finger to punish me. If someone brings a criminal charge against me, they must satisfy procedural and substantive requirements before the court will punish me; even if I did the deed, if it cannot be legally proven, I will not be punished. If, however, I were to write vile and hateful things about my coworkers, my employer would be perfectly justified in firing me not for committing a crime, but for violating the basic standards of professionalism. And if one of my coworkers and I have a conflict, my employer must resolve the conflict; they cannot just say (as the court can say), "Not proven: work it out yourselves." Finally, my employer has to maintain a professional environment, even if a specific act does not actually harm anyone. Even if nobody really cares, I cannot walk into work in raggedy cutoffs and a dirty tank tee-shirt. (Yes, it is true that in a capitalist society, employers and other organizations routinely leverage their legitimate right to maintain a professional environment to oppress their workers, but the necessity of a professional environment still stands.)

When you enter any professional environment, you have a positive obligation to learn what it means to be a professional in that environment, and you have a positive obligation to act professionally. Ignorance is no excuse, and lack of specific harm is no excuse.

We have dismantled the legal structures barring women from fully participating in civil society. Women may not be legally barred from any occupation or any social role. However, although we have dismantled the legal barriers, there are still profound social barriers that push women out of many areas of civil society. The most egregious of those social barriers is sexual harassment. Because it tends to discourage women from full civic participation, it is intolerable. If you do not know that sexual harassment is a real problem, you are not merely ignorant, you are willfully ignorant. If I have offended you, good.

It doesn't matter if you did not intend to harass your coworker when you said to her, "Nice ass!" It doesn't matter that that particular woman did not take offense at your comment. The comment itself is still unprofessional. If you worked for me, and I heard you say that, even if I didn't fire you on the spot, you would have to eat a ton of shit to keep your job. If you think I'm an asshole for that stance, good. I like being an asshole to sexist douchebags.

(McDermott also condemns California for restricting certain legal procedures, "the right [of victims] to refuse interview, deposition or discovery requests" from defendants charged with rape. But our legal procedure was not handed down by God. Equality under the law does not depend on every crime having the exact same set of procedural guarantees, especially when there is overwhelming evidence that procedures such as interviews, depositions, and discovery have been used not to seek justice but to harass, intimidate, and discourage women from prosecuting accusations of rape.

Here's a tip for you guys: if you think it's consensual, but the situation might be misconstrued as assault, DON'T HAVE SEX WITH HER. If you don't trust her not to fabricate a charge of rape, DON'T HAVE SEX WITH HER. If you think these standards will prevent you from ever having sex again, hold a seashell to your ear: you will hear the voices of 3.5 billion women breathing a sigh of relief and gratitude.)

We men, with our privilege, have the positive obligation to bend over backwards to include women in civil society. If you do not accept this obligation, then fuck you. The only "right" you're losing is the right to treat women as less than human beings.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "feminism"
Date: Monday, 03 Feb 2014 03:43
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "links"
Date: Saturday, 01 Feb 2014 06:48
The Triumph of the Maternalists

Jumpin' Jebus on a Pogo Stick! This article is so bad, I don't know where to begin. Sadly, I have no time for deep analysis. Some quick thoughts:

The author (sensibly) notes that "feminine" and "masculine" values have nothing to do with vaginas and penises, but then keeps the gendered labeling throughout the article.

Just because one labels a value as an "Enlightenment value" doesn't make it one. The Enlightenment value of freedom of speech, for example, is subtle and nuanced. It does not mean that anyone can say anything anywhere.

Even if a value really is an "Enlightenment value," that doesn't mean it's a value we must necessarily keep. The Enlightenment thinkers made a lot of good arguments, but they were not prophets, and early 21st century society is very different from that of the late 18th.

The author makes many definitive statements about controversial positions, without even attempting an argument. For example:
Few people have the stomach to defend Western cultural and political ideals, even in the face of violent, nihilistic outbursts like the 9/11 attacks. Instead, key sections of the elite have embraced emotionalism, difference, authenticity and sustainability.
What the fuck?! What does this even mean?
Has the author even heard of the Department of Homeland Security, the NSA, Guantanamo Bay, the wars in Afghanistan and Iraq, etc. ad nauseam?

Universities are not newspapers or common carriers. Most are government institutions, and they do not exist to provide a platform for the airing of every possible point of view. Academia exists primarily as a privilege-granting institution, providing an entrée into the professional-managerial middle class. It is secondarily an institution for evidence- and argument-based expression. I really don't see it as horrible oppression that I have an obligation to bend over backwards to act as a professional on my campus. If you want to say whatever you like, with the minimum of constraint, you are free to say it outside academia, just as you are free to say it without spray painting it on my house.

Too busy to find the TSIB graphic, but this post still gets the tag.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "bad journalism, bad philosophy, egregiou..."
Date: Wednesday, 29 Jan 2014 04:54
The Real State of the Union

By Hugh

The state of the Union is crap. 20% of the country is doing OK. 1% is doing fantastically. 0.001% is doing so well it’s criminal, literally. They don’t own everything yet but they do own the politicians, judges, regulators, academics, and reporters. So they’re getting there. The other 80%, the rubes, the muppets, the serfs, are mired in an undeclared, ongoing depression.

Hugh goes on, and it gets more depressing, and more persuasive.
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "pessimism, politics"
Date: Monday, 27 Jan 2014 03:53
Life should not be a journey to the grave with the intention of arriving safely in a pretty and well preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming, ‘Wow! What a Ride!’

— Hunter S. Thompson
Author: "Larry, The Barefoot Bum (noreply@blogger.com)" Tags: "quotations"