Thursday, April 27, 2017

The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot

Here are four things I care intensely about: being a good father, being a good philosopher, being a good teacher, and being a morally good person. It would be lovely if there were never any tradeoffs among these four aims.

Explicitly acknowledging such tradeoffs is unpleasant -- sufficiently unpleasant that it's tempting to try to rationalize them away. It's distinctly uncomfortable to me, for example, to acknowledge that I would probably be better as a father if I traveled less for work. (I am writing this post from a hotel room in England.) Similarly uncomfortable is the thought that the money I'll be spending on a family trip to Iceland this summer could probably save a few people from death due to poverty-related causes, if given to the right charity.

Today I'll share two of my favorite techniques for rationalizing the unpleasantness away. Maybe you'll find these techniques useful too!

The Happy Coincidence Defense. Consider travel for work. I don't have to travel around the world, giving talks and meeting people. It's not part of my job description. No one will fire me if I don't do it, and some of my colleagues do it considerably less than I do. On the face of it, I seem to be prioritizing my research career at the cost of being a somewhat less good father, teacher, and global moral citizen (given the luxurious use of resources and the pollution of air travel).

The Happy Coincidence Defense says, no, in fact I am not sacrificing these other goals at all! Although I am away from my children, I am a better father for it. I am a role model of career success for them, and I can tell them stories about my travels. I have enriched my life, and then I can mingle that richness into theirs. I am a more globally aware, wiser father! Similarly, although I might cancel a class or two and de-prioritize my background reading and lecture preparation, since research travel improves me as a philosopher, it improves my teaching in the long run. And my philosophical work, isn't that an important contribution to society? Maybe it's important enough to morally justify the expense, pollution, and waste: I do more good for the world traveling around discussing philosophy than I could do leading a more modest lifestyle at home, donating more money to charities, and working within my own community.

After enough reflection of this sort, it can come to seem that I am not making any tradeoffs at all among these four things I care intensely about. Instead, I am maximizing them all! This trip to England is the best thing I can do, all things considered, as a philosopher and as a father and as a teacher and as a citizen of the moral community. Yay!

Now that might be true. If so, that would be a happy coincidence. Sometimes there really are such happy coincidences. But the pattern of reasoning is, I think you'll agree, suspicious. Life is full of tradeoffs among important things. One cannot, realistically, always avoid hard choices. Happy Coincidence reasoning has the odor of rationalization. It seems likely that I am illegitimately convincing myself that something I want to be true really is true.

The-Most-I-Can-Do Sweet Spot. Sometimes people try so hard at something that they end up doing worse as a result. For example, trying too hard to be a good father might make you into a father who is overbearing, who hovers too much, who doesn't give his children sufficient distance and independence. Teaching sometimes goes better when you don't overprepare. And sometimes, maybe, moral idealists push themselves so hard in pursuit of their ideals that they would have been better off pursuing a more moderate, sustainable course. For example, someone moved by the arguments for vegetarianism who immediately attempts the very strictest veganism might be more likely to revert to cheeseburger eating after a few months than someone who sets their sights a bit lower.

The-Most-I-Can-Do Sweet Spot reasoning harnesses these ideas for convenient self-defense: Whatever I'm doing right now is the most I can realistically, sustainably do! Were I to try any harder to be a good father, I would end up being a worse father. Were I to spend any more time reading and writing philosophy than I actually do, I would only exhaust myself. If I gave any more to charity, or sacrificed any more for the well-being of others in my community, then I would... I would... I don't know, collapse from charity-fatigue? Or seethe so much with resentment at how more awesomely moral I am than everyone else that I'd be grumpy and end up doing some terrible thing?

As with Happy Coincidence reasoning, The-Most-I-Can-Do Sweet Spot reasoning can sometimes be right. Sometimes you really are doing the most you can do about everything you care intensely about. But it would be kind of amazing if this were reliably the case. It wouldn't be that hard for me to be a somewhat better father, or to give somewhat more to my students -- with or without trading off other things. If I reliably think that wherever I happen to be in such matters, that's the Sweet Spot, I am probably rationalizing.

Having cute names for these patterns of rationalization helps me better spot them as they are happening, I think -- both in myself and sometimes, I admit, somewhat uncharitably, also in others.

Rather than think of something clever to say as the kicker for this post, I think I'll give my family a call.

Friday, April 21, 2017

Common Sense, Science Fiction, and Weird, Uncharitable History of Philosophy

Philosophers have three broad methods for settling disputes: appeal to "common sense" or culturally common presuppositions, appeal to scientific evidence, and appeal to theoretical virtues like simplicity, coherence, fruitfulness, and pragmatic value. Some of the most interesting disputes are disputes in which all three of these broad methods are problematic and seemingly indecisive.

One of my aims as a philosopher is to intervene on common sense. "Common sense" is inherently conservative. Common sense used to tell us that the Earth didn't move, that humans didn't descend from ape-like ancestors, that certain races were superior to others, that the world was created by a god or gods of one sort or another. Common sense is a product of biological and cultural evolution, plus the cognitive and social development of people in a limited range of environments. Common sense only has to get things right enough, for practical purposes, to help us manage the range of environments to which we are accustomed. Common sense is under no obligation to get it right about the early universe, the microstructure of matter, the history of the species, future technologies, or the consciousness of weird hypothetical systems we have never encountered.

The conservatism and limited vision of common sense lead us to dismiss as "crazy" some philosophical and scientific views that might in fact be true. I've argued that this is especially so regarding theories of consciousness, about which something crazy must be true. For example: literal group consciousness, panpsychism, and/or the failure of pain to supervene locally. Although I don't believe that existing arguments decisively favor any of those possibilities, I do think that we ought to restrain our impulse to dismiss such views out of hand. Fit with common sense is one important factor in evaluating philosophical claims, especially when direct scientific evidence and considerations of general theoretical virtue are indecisive, but it is only one factor. We ought to be ready to accept that in some philosophical domains, our commonsense intuitions cannot be entirely preserved.

Toward this end, I want to broaden our intuitive sense of the possible. The two best techniques I know are science fiction and cross-cultural philosophy.

The philosophical value of science fiction consists not only in the potential of science fictional speculations to describe possible futures that we might actually encounter. Historically, science fiction has not been a great predictor of the future. The primary philosophical value of science fiction might rather consist in its ability to flex our minds and disrupt commonsense conservatism. After reading far-out stories about weird utopias, uploading into simulated realities, bizarrely constructed intelligent aliens, body switching, Matrioshka Brains, and alternative universes, philosophical speculations about panpsychism and group consciousness no longer seem quite so intolerably weird. At least that's my (empirically falsifiable) conjecture.

Similarly, brain-flexing is an important part of the value of reading the history of philosophy -- especially work from traditions other than those with which you are already familiar. Here it's especially important not to be too "charitable" (i.e. assimilative). Relish the weirdness -- "weird" from your perspective! -- of radical Buddhist metaphysics, of medieval Chinese neo-Confucianism, of neo-Platonism in late antiquity, of 19th century Hegelianism and neo-Hegelianism.

If something that seems crazy must be true about the metaphysics of consciousness, or about the nature of objects and causes, or about the nature of moral value -- as extended philosophical discussions of these topics suggest probably is the case -- then to evaluate the possibilities without excess conservatism, we need to get used to bending our minds out of their usual ruts.

This is my new favorite excuse for reading Ted Chiang, cyberpunk, and Zhuangzi.

[image source]

Friday, April 14, 2017

We Who Write Blogs Recommend... Blogs!

Here's The 20% Statistician, Daniel Lakens, on why blogs have better science than Science.

Lakens observes that blogs (usually) have open data, sources, and materials; open peer review; no eminence filter; easy error correction; and open access.

I would add that blogs are designed to fit human cognitive capacities. To reach a broad audience, they are written to be broadly comprehensible -- and as it turns out, that's a good thing for science (and philosophy), since it reduces the tendency to hide behind jargon, technical obscurities, and dubious shared subdisciplinary assumptions. The length of a typical substantive blog post (500-1500 words) is also, I think, a good size for human cognition: long enough to have some meat and detail, but short enough that the reader can keep the entire argument in view. These features make blog posts much easier to critique, enabling better evaluation by specialists and non-specialists alike.

Someone will soon point out, for public benefit, the one-sidedness of Lakens' and my arguments here.

[HT Wesley Buckwalter]

Sunday, April 09, 2017

Does It Matter If the Passover Story Is Literally True?

My opinion piece in today's LA Times.

You probably already know the Passover story: How Moses asked Pharaoh to let his enslaved people leave Egypt, and how Moses’ god punished Pharaoh — bringing about the death of the Egyptians’ firstborn sons even as he passed over Jewish households. You might even know the ancillary tale of the Passover orange. How much truth is there in these stories? At synagogues this time of year, myth collides with fact, tradition with changing values. Negotiating this collision is the puzzle of modern religion.

Passover is a holiday of debate, reflection, and conversation. Last Passover, as my family and I and the rest of the congregation waited for the feast at our Reform Jewish temple, our rabbi prompted us: “Does it matter if the story of Passover isn’t literally true?”

Most people seemed to shake their heads. No, it doesn’t matter.

I was imagining the Egyptians’ sons. I am an outsider to the temple. My wife and teenage son are Jewish, but I am not. My 10-year-old daughter, adopted from China at age 1, describes herself as “half Jewish.”

I nodded my head. Yes, it does matter if the Passover story is literally true.

“Okay, Eric, why does it matter?” Rabbi Suzanne Singer handed me the microphone.

I hadn’t planned to speak. “It matters,” I said, “because if the story is literally true, then a god who works miracles really exists. It matters if there is such a god or not. I don’t think I would like the moral character of that god, who kills innocent Egyptians. I’m glad there is no such god.”

“It is odd,” I added, “that we have this holiday that celebrates the death of children, so contrary to our values now.”

The microphone went around, others in the temple responding to me. Values change, they said. Ancient war sadly and necessarily involved the death of children. We’re really celebrating the struggle for freedom for everyone....

Rabbi Singer asked if I had more to say in response. My son leaned toward me. “Dad, you don’t have anything more to say.” I took his cue and shut my mouth.

Then the Seder plates arrived with the oranges on them.

Seder plates have six labeled spots: two bitter herbs, charoset (fruit and nuts), parsley, a lamb bone, a boiled egg, each with symbolic value. There is no labeled spot for an orange.

The first time I saw an orange on a Seder plate, I was told this story about it: A woman was studying to be a rabbi. An orthodox rabbi told her that a woman belongs on the bimah (pulpit) like an orange belongs on the Seder plate. When she became a rabbi, she put an orange on the plate.

A wonderful story — a modern, liberal story. More comfortable than the original Passover story for a liberal Reform Judaism congregation like ours, proud of our woman rabbi. The orange is an act of defiance, a symbol of a new tradition that celebrates gender equality.

Does it matter if it’s true?

Here’s what actually happened. Dartmouth Jewish Studies professor Susannah Heschel was speaking to a Jewish group at Oberlin College in Ohio. The students had written a story in which a girl asks a rabbi if there is room for lesbians in Judaism, and the rabbi rises in anger, shouting, “There’s as much room for a lesbian in Judaism as there is for a crust of bread on the Seder plate!” Heschel, inspired by the students but reluctant to put anything as unkosher as leavened bread on the Seder plate, used a tangerine instead.

The orange, then, is not a wild act of defiance, but already a compromise and modification. The shouting rabbi is not an actual person but an imagined, simplified foe.

It matters that it’s not true. From the two stories of the orange, we learn the central lesson of Reform Judaism: that myths are cultural inventions built to suit the values of their day, idealizations and simplifications, changing as our values change — but also that only limited change is possible in a tradition-governed institution. An orange, but not a crust of bread.

In a way, my daughter and I are also oranges: a new type of presence in a Jewish congregation, without a marked place, welcomed this year, unsure we belong, at risk of rolling off.

In the car on the way home, my son scolded me: “How could you have said that, Dad? There are people in the congregation who take the Torah literally, very seriously! You should have seen how they were looking at you, with so much anger. If you’d said more, they would practically have been ready to lynch you.”

Due to the seating arrangement, I had been facing away from most of the congregation. I hadn’t seen those faces. Were they really so outraged? Was my son telling me the truth on the way home that night? Or was he creating a simplified myth of me?

In belonging to an old religion, we honor values that are no longer entirely ours. We celebrate events that no longer quite make sense. We can’t change the basic tale of Passover. But we can add liberal commentary to better recognize Egyptian suffering, and we can add a new celebration of equality.

Although the new celebration, the orange, is an unstable thing atop an older structure that resists change, we can work to ensure that it remains. It will remain only if we can speak the story of it compellingly enough to give our new values too the power of myth.


Revised and condensed from my blogpost Orange on the Seder Plate (Apr 27, 2016).

Wednesday, April 05, 2017

Only 4% of Editorial Board Members of Top-Ranked Anglophone Philosophy Journals Are from Non-Anglophone Countries

If you're an academic aiming to reach a broad international audience, it is increasingly the case that you must publish in English. Philosophy is no exception. This trend gives native English speakers an academic advantage: They can more easily reach a broad international audience without having to write in a foreign language.

A related question is the extent to which people who make their academic home in Anglophone countries control the English-language journals in which so much of our scholarly communication takes place. One could imagine the situation either way: Maybe the most influential academic journals in English are almost exclusively housed in Anglophone countries and have editorial boards almost exclusively composed of people in those same countries; or maybe English-language journals are a much more international affair, led by scholars from a diverse range of countries.

To examine this question, I looked at the editorial boards of the top 15 ranked journals in Brian Leiter's 2013 poll of "top philosophy journals without regard to area". I noted the primary institution of every board member. (For methodological notes see the supplement at the end.)

In all, 564 editorial board members were included in the analysis. Of these, 540 (96%) had their primary academic affiliation with an institution in an Anglophone country. Only 4% of editorial board members had their primary academic affiliation in a non-Anglophone country.

The following Anglophone countries were represented:

USA: 377 philosophers (67% of total)
UK: 119 (21%)
Australia: 26 (5%)
Canada: 13 (2%)
New Zealand: 5 (1%)

The following non-Anglophone countries were represented:

Germany: 6 (1%)
Sweden: 5 (1%)
Netherlands: 3 (1%)
China (incl. Hong Kong): 2 (<1%)
France: 2 (<1%)
Belgium: 1 (<1%)
Denmark: 1 (<1%)
Finland: 1 (<1%)
Israel: 1 (<1%)
Singapore: 1 (<1%) [N.B.: English is one of four official languages]
Spain: 1 (<1%)

Worth noting: Synthese showed much more international participation than any of the other journals, with 13/31 (42%) of its editorial board from non-Anglophone countries.
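The headline figures follow directly from the per-country counts listed above. For readers who want to check the arithmetic, here is a quick sketch (the tallies are simply copied from the lists in this post):

```python
# Editorial board members by country, copied from the lists above
anglophone = {"USA": 377, "UK": 119, "Australia": 26, "Canada": 13,
              "New Zealand": 5}
non_anglophone = {"Germany": 6, "Sweden": 5, "Netherlands": 3,
                  "China (incl. Hong Kong)": 2, "France": 2, "Belgium": 1,
                  "Denmark": 1, "Finland": 1, "Israel": 1, "Singapore": 1,
                  "Spain": 1}

total = sum(anglophone.values()) + sum(non_anglophone.values())
pct_anglophone = round(100 * sum(anglophone.values()) / total)

print(total)           # 564 board members in all
print(pct_anglophone)  # 96 -- percent with Anglophone affiliations
```

Per-country percentages in the lists are rounded the same way, which is why several countries with one to three members all show as 1% or <1%.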

It seems to me that if English is to continue in its role as the de facto lingua franca of philosophy (ironic foreign-language use intended!), then the editorial boards of the most influential journals ought to reflect substantially more international participation than this.


Related Posts:

How Often Do Mainstream Anglophone Philosophers Cite Non-Anglophone Sources? (Sep 8, 2016)

SEP Citation Analysis Continued: Jewish, Non-Anglophone, Queer, and Disabled Philosophers (Aug 14, 2014)


Methodological Notes:

The 15 journals were Philosophical Review, Journal of Philosophy, Nous, Mind, Philosophy & Phenomenological Research, Ethics, Philosophical Studies, Australasian Journal of Philosophy, Philosopher's Imprint, Analysis, Philosophical Quarterly, Philosophy & Public Affairs, Philosophy of Science, British Journal for the Philosophy of Science, and Synthese. Some of these journals are "in house" or have a regional focus in their editorial boards. I did not exclude them on those grounds. It is relevant to the situation that the two top-ranked journals on this list are edited by the faculty at Cornell and Columbia respectively.

I excluded editorial assistants and managers without full-time permanent academic appointments (typically grad students or publishing or secretarial staff). I included editorial board members, managers, consultants, and staff with full-time permanent academic appointments, including emeritus.

I used the institutional affiliation listed at the journal's "editorial board" website when that was available (even in a few cases where I knew the information to be no longer current), otherwise I used personal knowledge or a web search. In each case, I tried to determine the individual's primary institutional affiliation or most recent primary affiliation for emeritus professors. In a few cases where two institutions were about equally primary, I used the first-listed institution either on the journal's page or on a biographical or academic source page that ranked highly in a Google search for the philosopher.

I am sure I have made some mistakes! I've made the raw data available here. I welcome corrections. However, I will only make corrections in accord with the method above. For example, it is not part of my method to update inaccurate affiliations on the journals' websites. Trying to do so would be unsystematic, disproportionately influenced by blog readers and people in my social circle.

A few mistakes are inevitable in projects of this sort and shouldn't have a large impact on the general findings.


[image source]

Thursday, March 30, 2017

On Being Accused of Ableism

Like many (most?) 21st-century North Americans, I hate to be told I’ve done something ableist (or racist, or sexist). Why does it sting so much, and how should I think about such a charge, when it is leveled against me?

Short answer: It stings so much because it’s usually partly, if only partly, true—and partly true criticisms are the ones that sting worst. And the best reaction to the charge is, usually, to recognize its partial, if only partial, truth.

First, let’s remind ourselves of a quote from the great Confucius:

How fortunate I am! If I happen to make a mistake, others are sure to inform me.
(Analects 7.31, Slingerland trans.)

(As it happens, bloggers are fortunate in just the same way.)

Confucius might have been speaking partly ironically in that particular passage. A couple of centuries later, another Confucian, Xunzi, speaks not at all ironically:

He who rightly criticizes me acts as a teacher to me, and he who rightly supports me acts as friend to me, while he who flatters and toadies to me acts as a villain toward me. Accordingly, the gentleman exalts those who act as teachers toward him....
(ch 2, Hutton trans., p. 9)

This is difficult advice to heed.

Note, though: If I make a mistake. He (she, they) who rightly criticizes me. Someone who criticizes me wrongly is no teacher, only an annoying pest! And if you’re anything like me, then your gut reaction to charges of ableism will usually be to want to swat back at the pest, to assume, defensively, that the criticism must be off-target, because of course you’re a good egalitarian, committed to fighting unjustified prejudice!

No. Here’s the thing. We all have ableist reactions and engage in ableist practices sometimes, to some degree. Disability is so various, and the ableist structures of our culture so deep and pervasive, that it would be superhuman to be immune. Maybe you are immune to ableism toward people who use wheelchairs. Maybe your partner of many years uses a wheelchair and you see wheelchair-use as just one of the many diverse human ways of comporting oneself, with its challenges and (sometimes) benefits, just like every other way of getting around. But how do you react to someone who stutters? How do you react to someone who is hard of hearing? How do you react to someone with depression or PTSD? Someone with facial burns or another skin condition you find unappealing? Or a very short man? What sorts of social structures do you manifest and reinforce in your behavior? In your choice of words? In your implicit assumptions? In what you expect (and don’t expect) people to be able to do?

Here’s my guess: You don’t always act in ways that are free of unjustified prejudice. If someone calls you out on ableism, they might well be right.

You might sincerely and passionately affirm that "all people are equal"—whatever that amounts to, which is really hard to figure out!—and you might even pay some substantial personal costs for the sake of a more just and equal society. In this respect, you are not ableist. You are even anti-ableist. But you are not a unified thing. Unless you are an angel walking upon the Earth, our society’s ableism acts through you.

An absurd charge does not sting. If someone tells me I spend too much time watching soccer, the charge is merely ridiculous. I don’t watch soccer. But if someone charges me with ableism, the partial truth of it does sting, or at least the plausibility of it stings. Maybe I shouldn’t have used the particular word that I used. Maybe I shouldn’t have made that particular assumption or dismissed that particular person. Maybe, deep down, I’m not the egalitarian I thought I was. Ouch.

Your ableist actions and reactions can be hard to recognize and admit if you implicitly assume that people have unified attitudes. If people have unified attitudes, they are either prejudiced against disabled people or they are not. If people have unified attitudes, then evidence of ableist behavior is evidence that you are one of the prejudiced, one of the bad guys. No one wants to think that about themselves. If people have unified attitudes, then it’s easy to assume that because you explicitly reject ableism you cannot be simultaneously enacting the very ableism that you are fighting against.

[Image description: psychedelic art "shifting realities", explosion of mixing colors, white on right through blue on the left]

The best empirical evidence suggests that people are highly disunified—inconstant across situations, capable of both great sacrifice and appalling misbehavior, variable in word and deed, spontaneously enacting our cultural practices for both good and bad. If this is true, then you ought to expect that charges of ableism against you will sometimes stick. You should be unsurprised if they do. But you should also celebrate that these charges are only very partial: The whole you is not like that! The whole you is a tangled chaos with many beautiful, admirable parts!

If you accept your disunity, you ought also to be forgiving. You ought to be forgiving especially if you cast your eye more broadly to the many forms of prejudice and injustice in which we participate. Suppose, impossibly, that you were utterly free of any ableist tendencies, practices, or background assumptions. It would be a huge life project to achieve that. Are you equally free of racism, classism, sexism, ageism, bias against those who are not conventionally beautiful? Are you saving the environment, fighting international poverty, phoning your senators about prisons and wage justice, volunteering in your community?

We must pick our projects. A more vivid appreciation of our own disunity, flaws, and abandoned good intentions ought to make us both more ready to see the truth in charges of prejudice against us and also more forgiving of the disunity, flaws, and abandoned good intentions in others.

[image source]

[Cross-posted at Discrimination and Disadvantage; HT Shelley Tremain for the invitation and editorial feedback]

Wednesday, March 22, 2017

What Kinds of Universities Lack Philosophy Departments? Some Data

University administrators sometimes think it's a good idea to eliminate their philosophy departments. Some of these efforts have been stopped, others not. This has led me to wonder how prevalent philosophy departments are in U.S. colleges and universities, and how their presence or absence relates to institution type.

Here's what I did. I pulled every ranked college and university from the famous US News college ranking site, sorting them into four categories: national universities, national liberal arts colleges, regional universities (combining the four US News categories for regional universities: north, south, midwest, and west), and regional colleges (again combining north, south, midwest, and west). I randomly selected twenty schools from each of these four lists. Then I attempted to determine from the school's website whether it had a philosophy department and a philosophy major. [See note 1 on "departments".]
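The sampling step can be sketched in a few lines. This is only an illustration: the school lists below are toy placeholders standing in for the actual US News rankings, and the list sizes are made up.

```python
import random

random.seed(0)  # fixed seed just so this sketch is reproducible

# Each category maps to its ranked list of schools (placeholder names here)
categories = {
    "national_universities":  [f"NU-{i}" for i in range(1, 231)],
    "liberal_arts_colleges":  [f"LAC-{i}" for i in range(1, 181)],
    "regional_universities":  [f"RU-{i}" for i in range(1, 601)],
    "regional_colleges":      [f"RC-{i}" for i in range(1, 321)],
}

# Draw twenty schools, without replacement, from each category
samples = {name: random.sample(schools, 20)
           for name, schools in categories.items()}

for name, picked in samples.items():
    print(name, len(picked))  # 20 from each of the four lists
```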

Since some schools combine philosophy with another department (e.g. "Philosophy and Religion") I distinguished standalone philosophy departments from combined departments that explicitly mention "philosophy" in the department name along with something else.

I welcome corrections! The websites are sometimes a little confusing, so it's likely that I've made an error or two.



National Universities:

Eighteen of the twenty sampled "national universities" have standalone philosophy departments (or equivalent: note 1) and majors. The only two that do not are institutes of technology: Georgia Tech (ranked #34) and Florida Tech (#171).

Virginia Tech (#74), however, does have a Department of Philosophy and a philosophy major -- as do Stanford, Duke, Rice, Rochester, Penn State, UT Austin, Rutgers-New Brunswick, Baylor, U Mass Amherst, Florida State, Auburn, Kansas, Biola, Wyoming (for now), North Carolina-Charlotte, Missouri-St Louis, and U Mass Boston.

National Liberal Arts Colleges:

Similarly, seventeen of the twenty sampled "national liberal arts colleges" have standalone philosophy departments, and eighteen offer the philosophy major. Offering neither department nor major are Virginia Military Institute (#72) and the very small science/engineering college Harvey Mudd (#21) (circa 735 students, part of the Claremont consortium). Beloit College (#62, circa 1358 students) offers the philosophy major within a "Department of Philosophy and Religious Studies".

The seventeen sampled schools with both major and standalone department are: Swarthmore, Carleton, Hamilton, Wesleyan, Richmond, DePauw, Puget Sound, Westmont, Hollins, Lake Forest, Stonehill, Hanover, Guilford, Carthage, Oglethorpe, Franklin (not to be confused with Franklin & Marshall), and Georgetown College (not to be confused with Georgetown University).

Some of these colleges are very small. According to Wikipedia estimates, two have fewer than a thousand students: Hollins (639) and Georgetown (984). Another four are below 1300: Franklin (1087), Hanover (1133), Oglethorpe (1155), and Westmont (1298).

Regional Universities:

Nine of the twenty sampled regional universities have standalone philosophy departments, and another three have a combined department with philosophy in its name. Twelve offer the philosophy major (not exactly the same twelve). Seven offer neither major nor department: Ramapo College of New Jersey, Wentworth Institute of Technology, Delaware Valley University, Stephens College, Mount St Joseph, Elizabeth City State, and Robert Morris. Two of these are specialty schools: Wentworth is a technical institute, and Stephens specializes in arts and fashion.

Offering major and/or standalone or combined department: Simmons, Whitworth, Mansfield of Pennsylvania, Rosemont, U of Northwestern-St Paul, Central Washington, Towson, Gannon, North Park, Wisconsin-Oshkosh, Northern Michigan, Mount Mary, and Appalachian State.

Regional Colleges:

Seven of the twenty sampled regional colleges have a standalone philosophy department, and another four have a combined department with philosophy in its name. Seven offer a philosophy major, and one (Brevard) has a "Philosophy and Religion" major. Offering neither major nor department: California Maritime Academy, Marymount California U (not to be confused with Loyola Marymount), Paul Smith's College (not to be confused with Smith College), Alderson Broaddus, Dickinson State, North Carolina Wesleyan, Crown College, and Iowa Wesleyan. Four of these are specialty schools: California Maritime Academy and Marymount California each offer only six majors total, Paul Smith's focuses on tourism and service industries, and Iowa Wesleyan offers only three Humanities majors: Christian Studies, Digital Media Design, and Music.

Offering major and/or standalone or combined department: Carroll, Mount Union, Belmont Abbey, La Roche, St Joseph's, Blackburn, Messiah, Tabor, Ottawa University (not to be confused with University of Ottawa), Northwestern College (not to be confused with Northwestern University), and Cazenovia College.


In my sample of forty nationally ranked universities and liberal arts colleges, each one has a standalone philosophy department and offers a philosophy major, with the following exceptions: three science/engineering specialty schools, one military institute, and one school offering a philosophy major within a department of "Philosophy and Religious Studies".

Even among the smallest nationally ranked liberal arts colleges, with 1300 or fewer students, all have philosophy majors and standalone philosophy departments (or similar administrative units), with the exception of one science/engineering specialty college.

The schools that US News describes as "regional" are mixed. In this sample of forty, about half offer philosophy majors and about half have standalone philosophy departments. Among the fifteen with neither department nor major in philosophy, six are specialty schools.

I'll refrain from drawing causal or normative conclusions here.


Update 8:53 a.m.: Expanding the Sample:

I'm tempted to conclude that, with the exception of specialty schools, almost every nationally ranked university and liberal arts college, no matter how small, has a philosophy major and a large majority have a standalone philosophy department. But maybe that's too strong a claim to draw from a sample of forty? So I've doubled the sample.

Doubling the sample supports this claim. Among the additional twenty universities sampled, nineteen offer the philosophy major, and the one that does not, UC Merced, is a new campus that plans to add the philosophy major soon. Sixteen have standalone Philosophy Departments, and three have combined departments: Philosophy and Religion at Northeastern and Tulsa, Politics and Philosophy at University of Idaho. The sampled universities with both standalone philosophy departments and the philosophy major are Tennessee, Nevada-Reno, Colorado State, South Dakota, New Mexico, Dartmouth, UC San Diego, U of Oregon, Columbia, Indiana-Bloomington, Kentucky, Alabama-Huntsville, Brandeis, George Washington, Azusa Pacific, and UC Riverside.

Adding twenty more nationally ranked liberal arts colleges also confirms my initial results. Nineteen offer the major, with the only exception being Thomas Aquinas College, which appears to offer only one major to all students (Liberal Arts). Three colleges have combined departments, all with religion: Washington College, Wartburg, and College of Idaho. Sixteen have both major and standalone department: Wooster, Wheaton, Hampden-Sydney, Muhlenberg, Houghton, Colgate, Middlebury, Washington & Lee, New College of Florida, Transylvania, Sweet Briar, Knox College, Colorado College, Oberlin, Luther, and Pomona.


Note 1: Some schools don't appear to have "departments" or have very broad "departments" that encompass many majors. If a school had fewer than fifteen "departments" I attempted to assess whether it had a department-like administrative unit for philosophy, or if that assessment wasn't possible, whether it hosted a philosophy major apparently on administrative par with popular majors like psychology and biology.

[image source]

Thursday, March 16, 2017

My Defense of Anger and Empathy: Flanagan's, Bloom's, and Others' Responses

Last week I posted a defense of anger and empathy against recent critiques by Owen Flanagan and Paul Bloom. The post drew a range of lively responses in social media, including from Flanagan and Bloom themselves.

My main thought was just this: Empathy and anger are part of the rich complexity of our emotional lives, intrinsically valuable insofar as having rich emotional lives is intrinsically valuable.

We can, of course, also debate the consequences of empathy and anger, as Flanagan and Bloom do -- and if the consequences of one or the other are bad enough we might be better off in sum without them. But we shouldn't look only at consequences. There is also an intrinsic value in having a rich emotional life, including anger and empathy.

1. Adding Nuance.

I have presented Flanagan's and Bloom's views simply: Flanagan and Bloom argue against anger and empathy, respectively. Their detailed views are more nuanced, as one might expect. One interpretive question is whether it is fair to set aside this nuance in critiquing their views.

Well, how do they themselves summarize their views?

Flanagan argues in defense of the Stoic and Buddhist program of entirely "eliminating" or "extirpating" anger, against mainstream "containment" views which hold that anger is a virtue when it is moderate, appropriate to the situation, and properly contained (p. 160). Although this is where he puts his focus and energy, he adds a few qualifications like this: "I do not have a firm position [about the desirability of entirely extirpating anger]. I am trying to explore varieties of moral possibility that we rarely entertain, but which might be genuine possibilities for us" (p. 215).

Bloom titles his book Against Empathy. He says that "if we want to make the world a better place, then we are better off without empathy" (p. 3) and "On balance, empathy is a negative in human affairs" (p. 13). However, Bloom also allows that he wouldn't want to live in a world without empathy, anger, shame, or hate (p. 9). At several points, he accepts that empathy can be pleasurable and play a role in intimate relationships.

It's helpful to distinguish between the headline view and the nuanced view.

Here's what I think the typical reader -- including the typical academic reader -- recalls from their reading, two weeks later: one sentence. Maybe "Bloom is against empathy because it's so biased and short-sighted". Maybe "Flanagan thinks we should try to eliminate anger, like a Buddhist or Stoic sage". These are simplifications, but they come close enough to how Bloom and Flanagan summarize and introduce their positions that it's understandable if that's how readers remember their views. In writing academic work, especially academic work for a broad audience, it's crucial to keep our eye on the headline view -- the practical, memorable takeaway that is likely to be the main influence on readers' thoughts down the road.

As an author, you are responsible for both the headline view and the nuanced view. Likewise, as a critic, I believe it's fair to target the headline view as long as one also acknowledges the nuance beneath.

In their friendly replies on social media, both Bloom and Flanagan seemed to acknowledge the value of engaging first at the headline level; but they both also pushed me on the nuance.

Hey, before I go farther, let me not forget to be friendly too! I loved both these books. Of course I did. Otherwise, I wouldn't have spent my time reading them cover-to-cover and critiquing them. Bloom and Flanagan challenge my presuppositions in helpful ways, and my thinking has advanced in reacting to them.

For more on the downsides of nuance, see Kieran Healy.

2. Bloom's Response.

In this tweet, Bloom appears to be suggesting that empathy is fine as long as you don't use it to guide moral judgment. (He makes a similar claim in a couple of Facebook comments on my post.) Similarly, at the end of his book, he says he worries "that I have given the impression that I am against empathy" (p. 240). An understandable worry, given the title of his book! (I am sure he is aware of this and speaking partly tongue in cheek.) He clarifies that he is against empathy "only in the moral domain... but there is more to life than morality" (p. 240-241). Empathy, he says, can be an immense source of pleasure.

The picture seems to be that the world would be morally better without empathy, but that there can be excellent selfish reasons to want to experience empathy nonetheless.

If the picture here is that there are some decisions to which morality is irrelevant and that it's fine to be guided by empathy in those decisions, I would object as follows. Every decision is a moral decision. Every dollar you spend on yourself is a dollar that could instead be donated to a good cause. Every minute you spend is a minute in which you could have done something kinder or more helpful than what you actually did. Every person you see, you could greet warmly or grumpily, give them a kind word or not bother. Of course, it's exhausting to think this way! But still, there is, I believe, no such thing as a morally innocent choice. If you purge empathy from moral decision-making, you purge it from decision-making.

Here's what seems closer to right, to me -- and what I think is one of the great lessons of Bloom's book. Public policy decisions and private acts directed toward distant strangers (e.g., what charities to support) are perhaps, on average, better made in a mood of cool rationality, to the extent that is possible. But it's different for personal relationships. Bloom argues that empathy might make us "too-permissive parents and too-clingy friends" (p. 163). This is a possible risk, sure. Sometimes empathic feelings should be set aside or even suppressed. Of course, there are risks to attempting to set aside empathy in favor of cool rationality as well (see, e.g., Lifton on Nazi doctors). Let's not over-idealize either process! In some cases, it might be morally best to experience empathy and to be able to act otherwise if necessary, rather than not to feel empathy.

Furthermore, it might be partly constitutive of the syndrome of full-bodied friendship and loving-parenthood that one is prone to empathy. I am Aristotelian or Confucian enough to see the flourishing of such relationships as central to morality.

3. Flanagan's Response.

On Facebook, Flanagan also added nuance to his view, writing:

There are varieties of anger. 1. Payback anger -- you hurt me, I hurt you; 2. Pain-passing -- I am hurting (not because of you), I pass pain to you; 3. Instrumental anger -- I aim [anger at] you to get you to do what is right (this might hurt your feelings etc., but that is not my aim); 4. Political anger -- I am outraged at racist or sexist etc. practices and want them to end; 5. Impersonal anger -- at the gods or heaven for awful states of affairs, the dying child. I am concerned about 1 & 2. I worry about 3-4 if and when the desire to pass pain or payback gets too much of a grip....

This is helpful -- and also not entirely Buddhist or Stoic (which of course is fine, especially since Flanagan presented his earlier arguments against anger as only something worth exploring rather than his final view).

In his thinking on this, Flanagan has partly been influenced by Myisha Cherry's and others' work on anger as a force for social change.

I appreciate the defense of anger as a path toward social justice. But I also want to defend anger's intrinsic value, not just its instrumental value; and specifically I want to defend the intrinsic value of payback anger.

The angry jerk is an ugly thing. Grumping around, feeling his time is being wasted by the incompetent fools around him, feeling he hasn't been properly respected, enraged when others' ends conflict with his own. He should settle down, maybe try some empathy! But consider, instead, the angry sweetheart.

I see the "sweetheart" as the opposite of the jerk -- someone who is spontaneously and deeply attuned to the interests, values, and attitudes of other people, full of appreciation, happy to help, quick to believe that he rather than the other might be in the wrong, quick to apologize, and in extreme cases so attuned to others' perspectives that he risks losing track of his own interests, values, and attitudes. SpongeBob SquarePants, Forrest Gump, sweet subordinate sitcom mothers from the 1950s and 1960s. These people don't feel enough anger. We should, I think, cheer their anger when it finally rises. We should let them relish their anger, the sense that they have been harmed and that the wrongdoer should pay them back.

I don't want sweethearts always to be bodhisattvas toward those who wrong them. Anger manifests the self-respect that they should claim, and it's part of the emotional range of experience that they might have too little of.

4. More.

Shoot, I've already gone on longer than intended, and I haven't got to all the comments by others that I'd wanted to address! Just quickly:

Some people suggested that eliminating anger might, in the right kind of sage, open up other ranges of emotion. Interesting thought! I'd also add that there's a kind of between-person richness that I'd celebrate. If sages can eliminate anger as a great personal and moral accomplishment, I think that's wonderful. My concern is more with the ideal of a blanket extirpation as general advice.

Some people pointed out that the anger of the oppressed is particularly worth cultivating -- and that there may even be whole communities of oppressed people who feel too little anger. Yes!

Others wondered about whether I would favor adding brand-new unheard-of negative emotions just to improve our emotional range. This would make a fascinating speculative fiction thought experiment.

More later, I hope. In addition to the comments section at The Splintered Mind, the public Facebook conversation was lively and fun.

[image source]

Friday, March 10, 2017

Empathy, Anger, and the Richness of Life

I've been reading books that advise us to try to eliminate whole classes of moral emotions.

In Against Empathy, Paul Bloom describes empathy as the unhealthy "sugary soda" of morality, best purged from our diets. He argues that as a moral motivator, empathy is much more biased than rational compassion, and it can also motivate excessive aggression in revenge against the harming party. (See also Prinz 2011.)

In The Geography of Morals, Owen Flanagan recommends that we try to entirely extirpate anger from our lives, as suggested by some of the great Buddhist and Stoic sages. (See also Nussbaum 2016.)

Flanagan's and Bloom's cases against empathy and anger are mainly practical or instrumental (and not quite as absolute as their summary statements might sound). The costs of these emotions, they suggest, outweigh the benefits. As responses to suffering and injustice, it's simply that other emotional reactions are preferable, both personally and socially. Rational compassion, serenity, hope, thoughtful intervention, reconciliation, cool-headed justice, a helping hand.

I want to push back against the idea that we should narrow the emotional range of our lives by rejecting empathy and anger. My thought is this: Having a rich emotional range is intrinsically valuable.

One way of thinking about intrinsic value is to consider what you would wish for, if you knew that there was a planet on the far side of the galaxy, beyond any hope of contact with us. (I discuss this thought experiment here and here.) Would you hope that it's a sterile rock? A planet with microbial life but not multi-cellular organisms? A planet with the equivalent of deer and cows but no creatures of human-like intelligence? Human-like intelligences, but all lying comatose, plugged into simple happiness stimulators?

Here's what I'd hope for: a rich, complex, multi-layered society of loves and hates, victories and losses, art and philosophy, history, athletics, science, music, literature, feats of engineering, great achievements and great failures. When I think about a flourishing world, I want all of that. And negative emotions, destructive emotions, useless bad stuff, those are part of it. If I imagine a society with rational compassion, but no empathy, no anger -- a serene world populated exclusively by Buddhist and Stoic sages -- I have imagined a lesser world. I have imagined a step away from all the wonderful complexity and richness of life.

I don't know how to argue for this idea. I can only invite you to consider whether you share it. There would be something flat about a world without empathy or anger.

Whether individual lives without empathy or anger would be similarly flat is a different question. Maybe they wouldn't be -- especially in a world where extirpating such emotions is a rare achievement, adding to, rather than subtracting from, the diversity and complexity of our human forms of life. But interpreted as general advice, applicable to everyone, the advice to eliminate whole species of emotion is advice to uncolor the world.

Flanagan comes close to addressing my point when he considers what he calls the "attachment" objection to the extirpation of anger (esp. p. 202-203). The objector says that part of loving someone is being disposed to respond with anger if they are unjustly harmed. Flanagan acknowledges that a readiness to feel some emotions -- sorrow, for example -- might be necessary for loving attachment. But he denies that anger is among those necessary emotions. A person who lacks any disposition to anger can still love. Bloom says something analogous about empathy.

I'm not sure whether I'd say that one's love is flatter if one would never feel anger or empathy on behalf of one's beloved, but in any case my objection is simpler. It is that part of the glorious richness of life on Earth is our range of intense and varied emotions. To be against a whole class of emotions is to be against part of what makes the world the great and amazing whirlwind it is.

[image source]

Wednesday, March 01, 2017

Why Wide Reflective Equilibrium in Ethics Might Fail

"Reflective equilibrium" is sometimes treated as the method of ethics (Rawls 1971 is the classic source). In reflective equilibrium, one considers one's judgments, beliefs, or intuitions about particular individual cases (e.g., in such-and-such an emergency would it be bad to push someone in front of an oncoming trolley?). One then compares these judgments about cases with one's judgments about general principles (e.g., act to maximize total human happiness) and one's judgments about other related cases (e.g., in such-and-such a slightly different emergency, should one push the person?). One balances them all together, revising the general principles when that seems best in light of what one regards as secure judgments about the cases, and revising one's judgments about specific cases when that seems best in light of one's judgments about general principles and related cases. One repeats the process, tweaking judgments about cases and principles until reaching an "equilibrium" in which one's judgments about principles and a broad range of cases all fit together neatly. In "wide" equilibrium, one gets to toss all other sources of knowledge into the process too -- scientific knowledge, reports of other people's judgments, knowledge about history, etc.

How could anything be more sensible than that?

I am inclined to agree that no approach is more sensible. It's the best way to do ethics. But, still, our best way of doing ethics might be irredeemably flawed.

The crucial problem is this: The process won't bust you out of a bad enough starting point if you're deeply enough committed to that starting point. And we might have bad starting points to which we are deeply committed.

Consider the Knobheads. This is a species of linguistic, rational, intelligent beings much like us, who live on a planet around a distant star. Babies are born without knobs on their foreheads, but knobs slowly grow starting at age five, and adults are very proud of their knobs. The knobs are simply bony structures, with no function other than what the Knobheads give them in virtue of their prizing of them. Sadly, 5% of children fail to start growing the knobs on their foreheads, despite being otherwise normal. Knobheads are so committed to the importance of the knobs, and the knobs are so central to their social structures, that they euthanize those children. Some Knobhead philosophers ask: Is it right to kill these knobless five-year-olds? They are aware of various ethical principles that suggest that they should not kill those children. And let's suppose that those ethical principles are in fact correct. The Knobheads should, really, ethically, let the knobless children live. But Knobheads are deeply enough committed to the judgment that killing those children is morally required that they are willing to tweak their judgments about general principles and other related cases. "It's just the essence of life as a Knobhead that one has a knob," some say. "It's too disruptive of our social practices to let them live. And if they live, they will consume resources and parental energy that could instead be given to proper Knobhead children." Etc.

Also consider the Hedons. The Hedons also are much like us and live on a far-away planet. When they think about "experience machine" cases or "hedonium" cases -- cases in which one sacrifices "higher goods" such as knowledge, social interaction, accomplishment, and art for the sake of maximizing pleasure -- they initially react somewhat like most Earthly humans do. That is, their first reaction is that it's better for people to live real, rich lives with risk and suffering than to zap their brains into a constant state of dumb orgasmic pleasure. But unlike most of us, the Hedons give up that judgment after engaging in reflective equilibrium. After considerable reflection, they are captured by the theoretical elegance of simple hedonistic act utilitarianism. As a society, they arrive at the consensus that the best ethical goal would be to destroy themselves as a species in order to transform their planet into a paradise of happy cows. Let's assume that, like the Knobheads, they are in fact ethically wrong to reach this conclusion. (Yes, I am assuming moral realism.)

It seems possible that wide reflective equilibrium, even ideally practiced, could fail the Knobheads and Hedons. All that needs to be the case is that they are too implacably committed to some judgments that really ought to change (the Knobheads) or that they are insufficiently committed to judgments that ought not to change (the Hedons). To succeed as a method, reflective equilibrium requires that our reflective starting points be approximately well-ordered in the sense that our stronger commitments are normally our better commitments. Otherwise, reflective tweaking might tend to move practitioners away from rather than toward the moral truth.

Biological and cultural evolution, it seems, could easily give rise to groups of intelligent beings whose starting points are not well-ordered in that way and for whom, therefore, reflective equilibrium fails.

Of course, the crucial question is whether we are such beings. I worry that we might be.



See also:

How Robots and Monsters Might Break Human Moral Systems (Feb 3, 2015)

How Weird Minds Might Destabilize Human Ethics (Aug 15, 2015)

[image source]

Friday, February 24, 2017

Call for Papers: Introspection Sucks!

Centre for Philosophical Psychology and European Network for Sensory Research

Introspection sucks!

Conference with Eric Schwitzgebel, May 30, 2017, in Antwerp

This is a call for papers on any aspect of introspection (and not just papers critical of introspection, but also papers defending it).

There are no parallel sections. Only blinded submissions are accepted.

Length: 3000 words. Single spaced!

Deadline: March 30, 2017. Papers should be sent to

[from Brains Blog]

Thursday, February 23, 2017

Belief Is Not a Norm of Assertion (but Knowledge Might Be)

Many philosophers have argued that you should only assert what you know to be the case (e.g. Williamson 1996). If you don't know that P is true, you shouldn't go around saying that P is true. Furthermore, to assert what you don't know isn't just bad manners; it violates a constitutive norm, fundamental to what assertion is. To accept this view is to accept what's sometimes called the Knowledge Norm of Assertion.

Most philosophers also accept the view, standard in epistemology, that you cannot know something that you don't believe. Knowing that P implies believing that P. This is sometimes called the Entailment Thesis. From the Knowledge Norm of Assertion and the Entailment Thesis, the Belief Norm of Assertion follows: You shouldn't go around asserting what you don't believe. Asserting what you don't believe violates one of the fundamental rules of the practice of assertion.

However, I reject the Entailment Thesis. This leaves me room to accept the Knowledge Norm of Assertion while rejecting the Belief Norm of Assertion.

Here's a plausible case, I think.

Juliet the implicit racist. Many White people in academia profess that all races are of equal intelligence. Juliet is one such person, a White philosophy professor. She has studied the matter more than most: She has critically examined the literature on racial differences in intelligence, and she finds the case for racial equality compelling. She is prepared to argue coherently, sincerely, and vehemently for equality of intelligence and has argued the point repeatedly in the past. When she considers the matter she feels entirely unambivalent. And yet Juliet is systematically racist in most of her spontaneous reactions, her unguarded behavior, and her judgments about particular cases. When she gazes out on class the first day of each term, she can’t help but think that some students look brighter than others – and to her, the Black students never look bright. When a Black student makes an insightful comment or submits an excellent essay, she feels more surprise than she would were a White or Asian student to do so, even though her Black students make insightful comments and submit excellent essays at the same rate as the others. This bias affects her grading and the way she guides class discussion. She is similarly biased against Black non-students. When Juliet is on the hiring committee for a new office manager, it won’t seem to her that the Black applicants are the most intellectually capable, even if they are; or if she does become convinced of the intelligence of a Black applicant, it will have taken more evidence than if the applicant had been White (adapted from Schwitzgebel 2010, p. 532).

Does Juliet believe that all the races are equally intelligent? On my walk-the-walk view of belief, Juliet is at best an in-between case -- not quite accurately describable as believing it, not quite accurately describable as failing to believe it. (Compare: someone who is extraverted in most ways but introverted in a few ways might be not quite accurately describable as an extravert nor quite accurately describable as failing to be an extravert.) Juliet judges the races to be equally intelligent, but that type of intellectual assent or affirmation is only one piece of what it is to believe, and not the most important piece. More important is how you actually live your life, what you spontaneously assume, how you think and reason on the whole, including in your less reflective, unguarded moments. Imagine two Black students talking about Juliet behind her back: "For all her fine talk, she doesn't really believe that Black people are just as intelligent."

But I do think that Juliet can and should assert that all the races are intellectually equal. She has ample justification for believing it, and indeed I'd say she knows it to be the case. If Timothy utters some racist nonsense, Juliet violates no important norm of assertion if she corrects Timothy by saying, "No, the races are intellectually equal. Here's the evidence...."

Suppose Tim responds by saying something like, "Hey, I know you don't really or fully believe that. I've seen how you react to your Black students and others." Juliet can rightly answer: "Those details of my particular psychology are irrelevant to the question. It is still the case that all the races are intellectually equal." Juliet has failed to shape herself into someone who generally lives and thinks and reasons, on the whole, as someone who believes it, but this shouldn't compel her to silence or compel her always to add a self-undermining confessional qualification to such statements ("P, but admittedly I don't live that way myself"). If she wants, she can just baldly assert it without violating any norm constitutive of good assertion practice. Her assertion has not gone wrong in the way that an assertion goes wrong if it is false or unjustified or intentionally misleading.

Jennifer Lackey (2007) presents some related cases. One is her well-known creationist teacher case: a fourth-grade teacher who knows the good scientific evidence for human evolution and teaches it to her students, despite accepting the truth of creationism personally as a matter of religious faith. Lackey uses this case to argue against the Knowledge Norm of Assertion, as well as (in passing) against a Belief Norm of Assertion, in favor of a Reasonable-To-Believe Norm of Assertion.

I like the creationist teacher case, but it's importantly different from the case of Juliet. Juliet feels unambivalently committed to the truth of what she asserts; she feels no doubt; she confidently judges it to be so. Lackey's creationist teacher is not naturally read as unambivalently committed to the evolutionary theory she asserts. (Similarly for Lackey's other related examples.)

Also, in presenting the case, Lackey appears to commit to the Entailment Thesis (p. 598: "he does not believe, and hence does not know"). Although it is a minority opinion in the field, I think it's not outrageous to suggest that both Juliet and the creationist teacher do know the truth of what they assert (cf. the geocentrist in Murray, Sytsma & Livengood 2013). If the creationist teacher knows but does not believe, then her case is not a counterexample to the Knowledge Norm of Assertion.

A related set of cases -- not quite the same, I think, and introducing further complications -- are ethicists who espouse ethical views without being much motivated to try to govern their own behavior accordingly.

[image from Helen De Cruz]

Wednesday, February 15, 2017

Human Nature Is Good: A Sketch of the Argument

The ancient Chinese philosopher Mengzi and the early modern French philosopher Rousseau both argued that human nature is good. The ancient Chinese philosopher Xunzi and the early modern English philosopher Hobbes argued that human nature is not good.

I interpret this as an empirical disagreement about human moral psychology. We can ask, who is closer to right?

1. Clarifying the Question.

First we need to clarify the question. What do Mengzi and Rousseau mean by the slogan that is normally translated into English as "human nature is good"? There are, I think, two main claims.

One is a claim about ordinary moral reactions: Normal people, if they haven't been too corrupted by a bad environment, will tend to be revolted by clear cases of morally bad behavior and pleased by clear cases of morally good behavior.

The other is a claim about moral development: If people reflect carefully on those reactions, their moral understanding will mature, and they will find themselves increasingly wanting to do what's morally right.

The contrasting view -- the Xunzi/Hobbes view -- is that morality is an artificial human construction. Unless the right moral system has specifically been inculcated in them, ordinary people will not normally find themselves revolted by evil and pleased by the good. At least to start, people need to be told what is right and wrong by others who are wiser than them. There is no innate moral compass to get you started in the right developmental direction.

2. Mixed Evidence?

One might think the truth is somewhere in the middle.

On the side of good: Anyone who suddenly sees a child crawling toward a well, about to fall in, would have an impulse to save the child, suggesting that everyone has some basic, non-selfish concern for the welfare of others, even without specific training (Mengzi 2A6). This concern appears to be present early in development. For example, even very young children show spontaneous compassion toward those who are hurt. Also, people of different origins and upbringings admire moral heroes who make sacrifices for the greater good, even when they aren't themselves directly benefited. Non-human primates show sympathy for each other and seem to understand the basics of reciprocity, exchange, and rule-following, suggesting that such norms aren't entirely a human invention. (On non-human primates, see especially Frans de Waal's 1996 book Good Natured.)

On the other hand: Toddlers (and adults!) can of course be selfish and greedy; they don't like to share or to wait their turn. In the southern U.S. about a century ago, crowds of ordinary White people frequently lynched Blacks for minor or invented offenses, proudly taking pictures and inviting their children along, without apparently seeing anything wrong in it. (See especially James Allen et al., Without Sanctuary.) The great "heroes" of the past include not only those who sacrificed for the greater good but also people famous mainly for conquest and genocide. We still barely seem to notice the horribleness of naming our boys "Alexander" and "Joshua".

3. Human Nature Is Nonetheless Good.

Some cases can be handled by emphasizing that only "normal" people who haven't been too corrupted by a bad environment will be attracted to good and revolted by evil. But a better general defense of the goodness of human nature involves adopting an idea that runs through both the Confucian and Buddhist traditions and, in the West, from Socrates through the Enlightenment to Habermas and Scanlon. It's this: If you stop and think, in an epistemically responsible way (perhaps especially in dialogue with others), you will tend to find yourself drawn toward what's morally good and repelled by what's evil.

Example A. Extreme ingroup bias. For example, one of the primary sources of evil that doesn't feel like evil -- and can in fact feel like doing something morally good -- is ingroup/outgroup thinking. Early 20th century Southern Whites saw Blacks as an outgroup, a "them" that needed to be controlled; the Nazis similarly viewed the Jews as alien; in celebrating wars of conquest, the suffering of the conquered group is either disregarded or treated as much less important than the benefits to the conquering group. Ingroup/outgroup thinking of this sort typically requires either ignoring others' suffering or accepting dubious theories that can't withstand objective scrutiny. (This is one function of propaganda.) The type of extreme ingroup bias that promotes evil behavior tends to be undermined by epistemically responsible reflection.

Example B. Selfishness and jerkitude. Similarly, selfish or jerkish behavior tends to be supported by rationalizations and excuses that prove flimsy when carefully examined. ("It's fine for me to cheat on the test because of X", "Our interns ought to expect to be hassled and harassed; it's just part of their job", etc.) If you were simply comfortable being selfish, you wouldn't need to concoct those poor justifications. If and when critical reflection finally reveals the flimsiness of those justifications, that normally creates some psychological pressure for you to change.

It's crucial not to overstate this point. We can be unshakable in our biases and rationalizations despite overwhelming evidence. And even when we do come to realize that something we eagerly want for ourselves or our group is immoral, we can still choose that thing. Evil might still be commonplace: Just as most plants don't survive to maturity, many people fall far short of their moral potential, often due to hard circumstances or negative outside influences.

Still, if we think well enough, we all can see the basic outlines of moral right and wrong; and something in us doesn't like to choose the wrong. This is true of pretty much everyone who isn't seriously socially deprived, regardless of the specifics of their cultural training. Furthermore, this inclination toward what's good -- I hope and believe -- is powerful enough to place at the center of moral education.

That is the empirical content of the claim that human nature is good.

I do have some qualms and hesitations, and I think it only works to a certain extent and within certain limits.

Perhaps oddly, the strikingly poor quality of the reasoning in recent U.S. politics has actually firmed up my opinion that careful reflection can indeed fairly easily reveal the lies behind evil.


Related: Human Nature and Moral Education in Mencius, Xunzi, Hobbes, and Rousseau (History of Philosophy Quarterly 2007).

[image source]

Monday, February 06, 2017

Should Ethics Professors Be Held to Higher Ethical Standards in Their Personal Behavior?

I've been waffling about this for years (e.g., here and here). Today, I'll try out a multi-dimensional answer.

1. My first thought is that it would be unfair for us to hold ethics professors to higher standards of personal behavior because of their career choice. Ethics professors are hired based on their academic skills as philosophers -- their ability to interpret texts, evaluate arguments, and write and teach effectively about a topic of philosophical discourse. If we demand that they also behave according to higher ethical standards than other professors, we put an additional burden on them that they don't deserve and isn't written into their work contracts. They signed up to be scholars, not moral exemplars. (In this way, ethics professors differ from clergy, whose role is partly that of exemplar.)

2. Nonetheless, it might be reasonable for ethicists to hold themselves to higher moral standards. Consider my "cheeseburger ethicist" thought experiment. An ethicist reads Peter Singer on vegetarianism, considers the available counterarguments, and ultimately concludes that Singer is correct. Eating meat is seriously morally wrong, and we ought to stop. She publishes a couple of articles, and she teaches the arguments to her classes. But she just keeps eating meat at the same rate she always did, with no effort to change her ways. If challenged by a surprised student, maybe she defends herself with something like Thought 1 above: "I'm just paid to evaluate the arguments. Don't demand that I also live that way. I'm off duty!"

[Socrates: always on duty.]

There's something strange and disappointing, I think, about a response that depends on treating the study of ethics as just another job. Our cheeseburger ethicist knows a large range of literature, and she has given the matter extensive thought. If she insulates her philosophical thinking entirely from her personal behavior, she seems to be casting away a major resource for moral self-improvement. All of us, even if we don't aim to be saints, ought to take some advantage of the resources we have that can help us to be better people -- whether those resources are community, church, meditation, thoughtful reading, or the advice of friends we know to be wise. As I've imagined her, the cheeseburger ethicist shows a disconcerting lack of interest in becoming a better person.

We can run similar examples with political activism, charitable giving, environmentalism, sexual ethics, honesty, kindness, racism and sexism, etc. -- any issue with practical implications for one's life, to which an ethicist might give serious thought, leading to what she takes to be a discovery that she would be much morally better if she started doing X. Almost all ethicists have thought seriously about some issues with practical implications for their lives.

Combining 1 and 2. Despite the considerations of fairness raised in point 1, I think we can reasonably expect ethicists to shape and improve their personal behavior in a way that is informed by their professional ethical reasoning. This is not because ethicists have a special burden as exemplars but rather because it's reasonable to expect everyone to use the tools at their disposal toward moral self-improvement, at least to some moderate degree, or at least toward the avoidance of serious moral wrongdoing. We should similarly expect people who regularly attend religious services to try to use, rather than ignore, what they regard as the best moral insights of their religion. We should also expect secular non-ethicists to explore and improve their moral worldviews, in some way that suits their abilities and life circumstances, and apply some of the results.

3. My third thought is to be cautious with charges of hypocrisy. Part of the philosopher's job is to challenge widely held assumptions. This can mean embracing unusual or radical views, if that's where the arguments seem to lead. If we expect high consistency between a professional ethicist's espoused positions and her real-world choices, then we disincentivize highly demanding or self-sacrificial conclusions. But it seems, epistemically, like a good thing if professional ethicists have the liberty to consider, on their argumentative merits alone, the strength of the arguments for highly demanding ethical conclusions (e.g., the relatively wealthy should give most of their money to charity, or if you are attacked you should "turn the other cheek") alongside the arguments for much less demanding ethical conclusions (e.g., there's no obligation to give to charity, revenge against wrongdoing is just fine). If our ethicist knows that as soon as she reaches a demanding moral conclusion she risks charges of hypocrisy, then our ethicist might understandably be tempted to draw the more lenient conclusion instead. If we demand that ethicists live according to the norms they endorse, we effectively pressure them to favor lenient moral systems compatible with their existing lifestyles.

(ETA: Based on personal experience, and my sense of the sociology of the field, and one empirical study, it does seem that professional reflection on ethical issues, in contemporary Anglophone academia, coincides with a tendency to embrace more stringent moral norms and to see our lives as permeated with moral choices.)

4. And yet there's a complementary epistemic cost to insulating one's philosophical positions too much from one's life. To gain insight into an ethical position, especially a demanding one, it helps to try to live that way. When Gandhi and Martin Luther King Jr. talk about peaceful resistance, we rightly expect them to have some real understanding, since they have tried to put it to work. Similarly for Christian compassion, Buddhist detachment, strict Kantian honesty, or even egoistic hedonism: We ought to expect people who have attempted to put these things into practice to have, on average, a richer understanding of the issues than those who have not. If an ethicist aspires to write and teach about a topic, it seems almost intellectually irresponsible for them not to try to gain direct personal experience if they can.

(ETA 2: Also, to understand vice, it's probably useful to try it out! Or better, to have lived through it in the past.)

Combining 1, 2, 3, and 4. I don't think all of this fits neatly together. The four considerations are to some extent competing. Should we hold ethics professors to higher ethical standards? Should we expect them to live according to the moral opinions they espouse? Neither "yes" nor "no" does justice to the complexity of the issue.

At least, that's where I'm stuck today. I guess "multi-dimensional" is a polite word for "still confused and waffling".

[image source]

Friday, February 03, 2017

The Unskilled Zhuangzi: Big and Useless and Not So Good at Catching Rats

New essay in draft:

The Unskilled Zhuangzi: Big and Useless and Not So Good at Catching Rats

Abstract: The mainstream tradition in recent Anglophone Zhuangzi interpretation treats spontaneous skillful responsiveness -- similar to the spontaneous responsiveness of a skilled artisan, athlete, or musician -- as a, or the, Zhuangzian ideal. However, this interpretation is poorly grounded in the Inner Chapters. On the contrary, in the Inner Chapters, this sort of skillfulness is at least as commonly criticized as celebrated. Even the famous passage about the ox-carving cook might be interpreted more as a celebration of the knife’s passivity than as a celebration of the cook’s skillfulness.


This is a short essay at only 3500 words (about 10 double-spaced pages excluding abstract and references) -- just in and out with the textual evidence. Skill-centered interpretations of Zhuangzi are so widely accepted (e.g., despite important differences, Graham, Hansen, and Ivanhoe) that people interested in Zhuangzi might find it interesting to see the contrarian case.

Available here.

As always, comments welcome either by email or in the comments section of this post. (I'd be especially interested in references to other scholars with a similar anti-skill reading, whom I may have missed.)

[image source]

Monday, January 30, 2017

David Livingstone Smith: The Politics of Salvation: Ideology, Propaganda, and Race in Trump's America

David Livingstone Smith's talk at UC Riverside, Jan 19, 2017:

Introduction by Milagros Pena, Dean of UCR's College of Humanities, Arts, and Social Sciences. Panel discussants are Jennifer Merolla (Political Science, UCR), Armando Navarro (Ethnic Studies, UCR), and me. After the Dean's remarks, David's talk is about 45 minutes, then about 5-10 minutes for each discussant, then open discussion with the audience for the remainder of the three hours, moderated by David Glidden (Philosophy, UCR).

Smith outlines Roger Money-Kyrle's theory of propaganda -- drawn from observing Hitler's speeches. On Money-Kyrle's view propaganda involves three stages: (1) induce depression, (2) induce paranoia, and (3) offer salvation. Smith argues that Trump's speeches follow this same pattern.

Smith also argues for a "teleofunctional" notion of ideological beliefs as beliefs that have the function of promoting oppression in the sense that those beliefs have proliferated because they promote oppression. On this view, beliefs are ideological, or not, depending on their social or cultural lineage. One's own personal reasons for adopting those beliefs are irrelevant to the question of whether they are ideological. In the case of Trump in particular, Smith argues, regardless of why he embraces the beliefs he does, or what his personal motives are, if his beliefs are beliefs with the cultural-historical function of promoting oppression, they are ideological.

Friday, January 27, 2017

What Happens to Democracy When the Experts Can't Be Both Factual and Balanced?

Yesterday Stephen Bannon, one of Trump's closest advisors, called the media "the opposition party". My op-ed piece in today's Los Angeles Times is my response to that type of thinking.

What Happens to Democracy When the Experts Can't Be Both Factual and Balanced?

Does democracy require journalists and educators to strive for political balance? I’m hardly alone in thinking the answer is "yes." But it also requires them to present the facts as they understand them — and when it is not possible to be factual and balanced at the same time, democratic institutions risk collapse.

Consider the problem abstractly. Democracy X is dominated by two parties, Y and Z. Party Y is committed to the truth of propositions A, B and C, while Party Z is committed to the falsity of A, B and C. Slowly the evidence mounts: A, B and C look very likely to be false. Observers in the media and experts in the education system begin to see this, but the evidence isn’t quite plain enough for non-experts, especially if those non-experts are aligned with Party Y and already committed to A, B and C....

[continued here]

Wednesday, January 25, 2017

Fiction Writing Workshop for Philosophers in Oxford, June 1-2

... the deadline for application is Feb. 1.

It's being run by the ever-awesome Helen De Cruz, supported by the British Society of Aesthetics. The speakers/mentors will be James Hawes, Sara L. Uckelman, and me.

More details here.

If you're at all interested, I hope you will apply!

Tuesday, January 24, 2017

The Philosopher's Rationalization-O-Meter

Usually when someone disagrees with me about a philosophical issue, I think they're about 20% correct. Once in a while, I think a comment is just straightforwardly wrong. Very rarely, I find myself convinced that the person who disagrees is correct and my original view was mistaken. But for the most part, there's a remarkable consistency: The critic has a piece of the truth, but I have more of it.

My inner skeptic finds this to be a highly suspicious state of affairs.

Let me clarify what I mean by "about 20% correct". I mean this: There's some merit in what the disagreeing person says, but on the whole my view is still closer to correct. Maybe there's some nuance that they're noticing, which I elided, but which doesn't undermine the big picture. Or maybe I wasn't careful or clear about some subsidiary point. Or maybe there's a plausible argument on the other side which isn't decisively refutable but which also isn't the best conclusion to draw from the full range of evidence holistically considered. Or maybe they've made a nice counterpoint which I hadn't previously considered but to which I have an excellent rejoinder available.

In contrast, for me to think that someone who disagrees with me is "mostly correct", I would have to be convinced that my initial view was probably mistaken. For example, if I argued that we ought to expect superintelligent AI to be phenomenally conscious, the critic ought to convince me that I was probably mistaken to assert that. Or if I argue that indifference is a type of racism, the critic ought to convince me that it's probably better to restrict the idea of "racism" to more active forms of prejudice.

From an abstract point of view, how often ought I expect to be convinced by those who object to my arguments, if I were admirably open-minded and rational?

For two reasons, the number should be below 50%:

1. For most of the issues I write about, I have given the matter more thought than most (not all!) of those who disagree with me. Mostly I write about issues that I have been considering for a long time or that are closely related to issues I've been considering for a long time.

2. Some (most?) philosophical disputes are such that even ideally good reasoners, fully informed of the relevant evidence, might persistently disagree without thereby being irrational. People might reasonably have different starting points or foundational assumptions that justify persisting disagreement.

Still, even taking 1 and 2 together, it seems that it should not be a rarity for a critic to raise an interesting, novel objection that I hadn't previously considered and which ought to persuade me. This is clear when I consider other philosophers: Often they get objections (sometimes from me) which, in my judgment, nicely illuminate what is incorrect in their views, and which should rationally lead them to change their views -- if only they weren't so defensively set upon rebutting all critiques! I doubt I am a much better philosopher than they are, wise enough to have wholly excellent opinions; so I must sometimes hear criticisms that ought to cause me to relinquish my views.

Let me venture to put some numbers on this.

Let's begin by excluding positions on which I have published at least one full-length paper. For those positions, considerations 1 and 2 plausibly suggest rational steadfastness in the large majority of cases.

A more revealing target is half-baked or three-quarters-baked positions on contentious issues: anything from a position I have expressed verbally, after a bit of thought, in a seminar or informal discussion, up to approximately a blog post, if the issue is fairly new to me.

Suppose that about 20% of the time what I say is off-base in a way that should be discoverable to me if I gave it more thought, in a reasonably open-minded, even-handed way. Now if I'm defending that off-base position in dialogue with someone substantially more expert than I, or with a couple of peers, or with a somewhat larger group of people who are less expert than I but still thoughtful and informed, maybe I should expect that about half to 3/4 of the time I'll hear an objection that ought to move me. Multiplying and rounding, let's say that about 1/8 of the time, when I put forward a half- or three-quarters-baked idea to some interlocutors, I ought to hear an objection that makes me think, whoops, I guess I'm probably mistaken!

I hope this isn't too horrible an estimate, at least for a mature philosopher. For someone still maturing as a philosopher, the estimate should presumably be higher -- maybe 1/4. The estimate should similarly be higher if the half- or three-quarters-baked idea is a critique of someone more expert than you, concerning the topic of their philosophical expertise (e.g., pushing back against a Kant expert's interpretation of a passage of Kant that you're interested in).

Here then are two opposed epistemic vices: being too deferential or being too stubborn. The cartoon of excessive deferentiality would be the person who instantly withdraws in the face of criticism, too quickly allowing that they are probably mistaken. Students are sometimes like this, but it's hard for a really deferential person to make it far as a professional philosopher in U.S. academic culture. The cartoon of excessive stubbornness is the person who is always ready to cook up some post-hoc rationalization of whatever half-baked position happens to come out of their mouth, always fighting back, never yielding, never seeing any merit in any criticisms of their views, however wrong their views plainly are. This is perhaps the more common vice in professional philosophy in the U.S., though of course no one is quite as bad as the cartoon.

Here's a third, more subtle epistemic vice: always giving the same amount of deference. Cartoon version: For any criticism you hear, you think there's 20% truth in it (so you're partly deferential) but you never think there's more than 20% truth in it (so you're mostly stubborn). This is what my inner skeptic was worried about at the beginning of this post. I might be too close to this cartoon, always a little deferential but mostly stubborn, without sufficient sensitivity to the quality of the particular criticism being directed at me.

We can now construct a rationalization-o-meter. Stubborn rationalization, in a mature philosopher, is revealed by not thinking your critics are right, and you are wrong, at least 1/8 of the time, when you're putting forward half- to three-quarters-baked ideas. If you stand firm in 15 out of 16 cases, then you're either unusually wise in your half-baked thoughts, or you're at .5 on the rationalization-o-meter (50% of the time that you should yield you offer post-hoc rationalizations instead). If you're still maturing or if you're critiquing an expert on their own turf, the meter should read correspondingly higher, e.g., with a normative target of thinking you were demonstrably off-base 1/4 or even half the time.
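For readers who like to see the back-of-the-envelope arithmetic spelled out, here's a minimal sketch in Python, using the post's own estimates (taking 0.625 as the midpoint of "half to 3/4" is my assumption):

```python
# Toy version of the rationalization-o-meter arithmetic.

p_off_base = 0.20        # chance a half-baked position is discoverably off-base
p_decisive_objection = 0.625  # chance interlocutors surface an objection that ought to move me

# How often a rational philosopher should yield: roughly 1/8.
normative_yield = p_off_base * p_decisive_objection
print(normative_yield)   # 0.125

# Standing firm in 15 of 16 cases means actually yielding only 1/16 of the time.
actual_yield = 1 / 16

# Meter reading: the fraction of should-yield cases met with post-hoc rationalization.
meter = 1 - actual_yield / normative_yield
print(meter)             # 0.5
```

On these numbers, yielding half as often as the normative rate puts you at .5 on the meter, matching the figure in the text.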

Insensitivity is revealed by having too little variation in how much truth you find in critics' remarks. I'd try to build an insensitivity-o-meter, but I'm sure you all will raise somewhat legitimate but non-decisive concerns against it.

[image modified from source]