Thursday, May 25, 2017

Lynching, the Milgram Experiments, and the Question of Whether "Human Nature Is Good"

At The Deviant Philosopher, Wayne Riggs, Amy Olberding, Kelly Epley, and Seth Robertson are collecting suggestions for teaching units, exercises, and primers that incorporate philosophical approaches and philosophers that are not currently well-represented in the formal institutional structures of the discipline. The idea is to help philosophers who want suggestions for diversifying their curriculum. It looks like a useful resource!

I contributed the following to their site, and I hope that others who are interested in diversifying the philosophical curriculum will also contribute something to their project.

Lynching, the Milgram Experiments, and the Question of Whether "Human Nature Is Good"

Primary Texts

  • Allen, James, Hilton Als, John Lewis, and Leon F. Litwack (2000). Without sanctuary: Lynching photography in America. Santa Fe: Twin Palms. Pp. 8-16, 173-176, 178-180, 184-185, 187-190, 194-196, 198, 201 (text only), and plates #20, 25, 31, 37-38, 54, 57, 62-65, 74, and 97.
  • Wells-Barnett, Ida B. (1892/2002). On lynchings. Ed. P.H. Collins. Amherst, NY: Humanity. Pp. 42-46.
  • Mengzi (4th c. BCE/2008). Mengzi: With selections from traditional commentaries. Trans. B.W. Van Norden. Indianapolis: Hackett. 1A7, 1B5, 1B11, 2A2 (p. 35-41 only), 2A6, 2B9, 3A5, 4B12, 6A1 through 6A15, 6B1, 7A7, 7A15, 7A21, 7B24, 7B31.
  • Rousseau, Jean-Jacques (1755/1995). Discourse on the origin of inequality. Trans. F. Philip. Ed. P. Coleman. Oxford: Oxford. Pp. 45-48.
  • Xunzi (3rd c. BCE/2014). Xunzi: The complete text. Trans. E. Hutton. Princeton, NJ: Princeton. Pp. 1-8, 248-257.
  • Hobbes, Thomas (1651/1996). Leviathan. Ed. R. Tuck. Cambridge: Cambridge. Pp. 86-90.
  • Doris, John M. (2002). Lack of character. Cambridge: Cambridge. Pp. 28-61.
  • The Milgram video on Obedience to Authority.
Secondary Texts for Instructor
  • Dray, Philip (2002). At the hands of persons unknown. New York: Modern Library.
  • Ivanhoe, Philip J. (2000). Confucian moral self cultivation, 2nd ed. Indianapolis: Hackett. 
  • Schwitzgebel, Eric (2007). Human nature and moral education in Mencius, Xunzi, Hobbes, and Rousseau. History of Philosophy Quarterly, 24, 147-168.
Suggested Courses
  • Introduction to Ethics
  • Ethics
  • Introduction to Philosophy
  • Evil
  • Philosophy of Psychology
  • Political Philosophy
Overview

This is a two-week unit. Day one is on the history of lynching in the United States, featuring lynching photography and Ida B. Wells. Day two is Mengzi on human nature (with Rousseau as secondary reading). Day three is Xunzi on human nature (with Hobbes as secondary reading). Days four and five are the Milgram video and John Doris on situationism.

The central question concerns the psychology of lynching perpetrators and Milgram participants. On a “human nature is good” view, we all have some natural sympathies or an innate moral compass that would be revolted by our participation in such activities, if we were not somehow swept along by bad influences (Mengzi, Rousseau). On a “human nature is bad” view, our natural inclinations are mostly self-serving and morality is an artificial human construction; so if one’s culture says “this is the thing to do” there is no inner source of resistance unless you have already been properly trained (Xunzi, Hobbes). Situationism (which is not inconsistent with either of these alternatives) suggests that most people can commit great evil or good depending on what seem to be fairly moderate situational pressures (Doris, Milgram).

Students should be alerted in advance about the possibly upsetting photographs, and encouraged to look closely at the faces of the perpetrators rather than focusing too much on the bodies of the victims (which may be edited out if desired for classroom presentation). You might even consider offering alternative readings to students who find the lynching material too difficult (such as an uplifting chapter from Colby & Damon 1992).

On Day One, a point of emphasis should be that most of the victims were not even accused of capital crimes. Focus can fall both on the history of lynching in general and on the emotional reactions of the perpetrators, as revealed by their behavior described in the texts and by their faces in the photos.

On Day Two, the main emphasis should be on Mengzi’s view that human nature is good. King Xuan and the ox (1A7), the child at the well (2A6), and the beggar refusing food insultingly given (6A10) are the most vivid examples. The metaphor of cultivating sprouts is well worth extended attention (as discussed in the Ivanhoe and Schwitzgebel readings for the instructor). If the lynchers had paused to reflect in the right way, would they have found in themselves a natural revulsion against what they were doing, as Mengzi would predict? Rousseau’s view is similar (especially as developed in Emile) but puts more emphasis on the capacity of philosophical thinking to produce rationalizations of bad behavior.

On Day Three, the main emphasis should be on Xunzi's view that human nature is bad. His metaphor of straightening a board is fruitfully contrasted with Mengzi's metaphor of cultivating sprouts: in straightening a board, the shape (the moral structure) is imposed by force from outside; in cultivating a sprout, the shape grows naturally from within, given a supportive, nutritive, non-damaging environment. Students can be invited to consider cartoon versions of "conservative" moral education ("here are the rules, like it or not, follow them or you'll be punished!") versus "liberal" moral education ("don't you feel bad that you hurt Ana's feelings?").

On Day Four, you might simply show the Milgram video.

On Day Five, the focus should be on articulating situationism vs. dispositionism (or whatever you want to call the view that broad, stable, enduring character traits explain most of our moral behavior). I recommend highlighting the elements of truth in both views, and then showing how there are both situationist and dispositionist elements in both Mengzi and Xunzi (e.g., Mengzi says that young men are mostly cruel in times of famine, but he also recommends cultivating stable dispositions). Students can be encouraged to discuss how well or poorly the three different types of approach explain the lynchings and the Milgram results.

If desired, Day Six and beyond can cover material on the Holocaust. Hannah Arendt’s Eichmann in Jerusalem and Daniel Goldhagen’s Hitler’s Willing Executioners make a good contrast (with Mengzian elements in Arendt and Xunzian elements in Goldhagen). (If you do use Goldhagen, be sure you are aware of the legitimate criticisms of some aspects of his view by Browning and others.)

Discussion Questions
  • What emotions are the lynchers feeling in the photographs?
  • If the lynchers had stopped to reflect on their actions, would they have been able to realize that what they were doing was morally wrong?
  • Mengzi lived in a time of great chaos and evil. Although he thought human nature was good, he never denied that people actually commit great evil. What resources are available in his view to explain actions like those of the lynch mobs, or other types of evil actions?
  • Is morality an artificial cultural invention? Or do we all have natural moral tendencies that only need to be cultivated in a nurturing environment?
  • In elementary school moral education, is it better to focus on enforcing rules that might not initially make sense to the children, or is it better to try to appeal to their sympathies and concerns for other people?
  • How effectively do you think people can predict what they themselves would do in a situation like the Milgram experiment or a lynch mob?
  • Are there people who are morally steadfast enough to resist even strong situational pressures? If so, how do they become like that?
Activities (optional)

On the first day, an in-class assignment might be for students to spend 5-7 minutes writing down their opinion on whether human nature is good or evil (or in-between, or alternatively that the question doesn't even make sense as formulated). They can then trade their written notes with a neighbor or two and compare answers. On the last day, they can review what they wrote on the first day and discuss whether their opinions have changed.
[Greetings from Graz, Austria, by the way!]

Friday, May 19, 2017

Pre-Excuse

I'm heading off to Europe tomorrow for a series of talks and workshops. Nijmegen, Vienna, Graz, Lille, Leuven, Antwerp, Oxford, Cambridge -- whee! Then back to Riverside for a week and off to Iceland with the family to celebrate my son's high school graduation. Whee again! I return to sanity July 5.

I've sketched out a few ideas for blog posts, but nothing polished.

If I descend into incoherence, I have my pre-excuse ready! Jetlag and hotel insomnia.

[image source]

Thursday, May 18, 2017

Hint, Confirm, Remind

You can't say anything only once -- not when you're writing, not if you want the reader to remember. People won't read the words exactly as you intend them, or they will breeze over them; and often your words will admit of more interpretations than you realize, which you rule out by clarifying, angling in, repeating, filling out with examples, adding qualifiers, showing how what you say is different from some other thing it might be mistaken for.

I have long known this about academic writing. Some undergraduates struggle to fill their 1500-word papers because they think that every idea gets one sentence. How do you have eighty ideas?! It becomes much easier to fill the pages -- indeed the challenge shifts from filling the pages to staying concise -- once you recognize that every idea in an academic paper deserves a full academic-sized paragraph. Throw in an intro and conclusion and you've got, what, five ideas in a 1500-word paper? Background, a main point, one elaboration or application, one objection, a response -- done.

It took a while for me to learn that this is also true in writing fiction. You can't just say something once. My first stories were too dense. (They are now either trunked or substantially expanded.) I guess I implicitly figured that you say something, maybe in a clever oblique way, the reader gets it, and you're done with that thing. Who wants boring repetition and didacticism in fiction?

Without being didactically tiresome, there are lots of ways to slow things down so that the reader can relish your idea, your plot turn, your character's emotion or reaction, rather than having the thing over and done in a sentence. You can break it into phases; you can explicitly set it up, then deliver; you can repeat in different words (especially if the phrasings are lovely); you can show different aspects of the scene, relevant sensory detail, inner monologue, other characters' reactions, a symbolic event in the environment.

But one of my favorite techniques is hint, confirm, remind. You can do this in a compact way (as in the example I'm about to give), but writers more commonly spread HCRs throughout the story. Some early detail hints or foreshadows -- gives the reader a basis for guessing. Then later, when you hit it directly, the earlier hint is remembered (or if not, no biggie, not all readers are super careful), and the alert reader will enjoy seeing how the pieces come together. Still later, you remind the reader -- more quickly, like a final little hammer tap (and also so that the least alert readers finally get it).

Neil Gaiman is a master of the art. As I was preparing some thoughts for a fiction-writing workshop for philosophers I'm co-leading next month, I noticed this passage about "imposter syndrome", recently going around. Here's Gaiman:

Some years ago, I was lucky enough to be invited to a gathering of great and good people: artists and scientists, writers and discoverers of things. And I felt that at any moment they would realise that I didn’t qualify to be there, among these people who had really done things.

On my second or third night there, I was standing at the back of the hall, while a musical entertainment happened, and I started talking to a very nice, polite, elderly gentleman about several things, including our shared first name. And then he pointed to the hall of people, and said words to the effect of, "I just look at all these people, and I think, what the heck am I doing here? They’ve made amazing things. I just went where I was sent."

And I said, "Yes. But you were the first man on the moon. I think that counts for something."

And I felt a bit better. Because if Neil Armstrong felt like an imposter, maybe everyone did.

Hint: an elderly gentleman, same first name as Gaiman, famous enough to be backstage among well known artists and scientists. Went where he was sent.

Confirm: "You were the first man on the moon".

Remind: "... if Neil Armstrong..."

The hints set up the puzzle. It's unfolding fast before you, if you're reading at a normal pace. You could slow way down and treat it as a riddle, but few of us would do that.

The confirm gives you the answer. Now it all fits together. Bonus points to Gaiman for making it natural dialogue rather than flat-footed exposition.

The remind here is too soon after the confirm to really be a reminder, as it would be if it appeared a couple of pages later in a longer piece of writing. But the basic structure is the same: The remind hammer-taps the thing that should already be obvious, to make sure the reader really has it -- but quickly, with a light touch.

If you want the reader to remember, you can't just say it only once.

[image source]

Thursday, May 11, 2017

The Sucky and the Awesome

Here are some things that "suck":

  • bad sports teams;
  • bad popular music groups;
  • getting a flat tire, which you try to change in the rain because you're late to catch a plane for that vacation trip you've been planning all year, but the replacement tire is also flat, and you get covered in mud, miss the plane, miss the vacation, and catch a cold;
  • me, at playing Sonic the Hedgehog.
It's tempting to say that all bad things "suck". There probably is a legitimate usage of the term on which you can say of anything bad that it sucks; and yet I'm inclined to think that this broad usage is an extension from a narrower range of cases that are more central to the term's meaning.

Here are some bad things that it doesn't seem quite as natural to describe as sucking:

  • a broken leg (though it might suck to break your leg and be laid up at home in pain);
  • lying about important things (though it might suck to have a boyfriend/girlfriend who regularly lies);
  • inferring not-Q from (i) P implies Q and (ii) not-P (though you might suck at logic problems);
  • the Holocaust.
The most paradigmatic examples of suckiness combine aesthetic failure with failure of skill or functioning. The sports team or the rock band, instead of showing awesome skill and thereby creating an awesome audience experience of musical or athletic splendor, can be counted on to drop the ball, hit the wrong note, make a jaw-droppingly stupid pass, choose a trite chord and tacky lyric. Things that happen to you can suck in a similar way to the way it sucks to be stuck at a truly horrible concert: Instead of having the awesome experience you might have hoped for, you have a lousy experience (getting splashed while trying to fix your tire, then missing your plane). There's a sense of waste, lost opportunity, distaste, displeasure, and things going badly. You're forced to experience one stupid, rotten thing after the next.

Something sucks if (and only if) it should deliver good, worthwhile experiences or results, but it doesn't, instead wasting people's time, effort, and resources in an unpleasant and aesthetically distasteful way.

The opposite of sucking is being awesome. Notice the etymological idea of "awe" in "awesome": Something is awesome if it does or should produce awe and wonder at its greatness -- its great beauty, its great skill, the way everything fits elegantly together. The most truly sucky of sucky things instead produces wonder at its badness. Wow, how could something be that pointless and awful! It's amazing!

    That "sucking" focuses our attention on the aesthetic and experiential is what makes it sound not quite right to say that the Holocaust sucked. In a sense, of course, the Holocaust did suck. But the phrasing trivializes it -- as though what is most worth comment is not the moral horror and the millions of deaths but rather the unpleasant experiences it produced.

    Similarly for other non-sucky bad things. What's central to their badness isn't aesthetic or experiential. To find nearby things that more paradigmatically suck, you have to shift to the experiential or to a lack of (awesome) skill or functioning.

    All of this is very important to understand as a philosopher, of course, because... because...

    Well, look. We wouldn't be using the word "sucks" so much if it wasn't important to us whether or not things suck, right? Why is it so important? What does it say about us, that we think so much in terms of what sucks and what is awesome?

    Here's a Google Ngram of "that sucks, this sucks, that's awesome". Notice the sharp rise that starts in the mid-1980s and appears to be continuing through the end of the available data.

We seem to be more inclined than ever to divide the world into the sucky and the awesome.

To see the world through the lens of sucking and awesomeness is to evaluate the world as one would evaluate a music video: in terms of its ability to entertain, and generate positive experiences, and wow with its beauty, magnificence, and amazing displays of skill.

It's to think like Beavis and Butthead, or like the characters in the Lego Movie.

That sounds like a superficial perspective on the world, but there's also something glorious about it. It's glorious that we have come so far -- that our lives are so secure that we expect them to be full of positive aesthetic experiences and maestro performances, so that we can dismissively say "that sucks!" when those high expectations aren't met.

--------------------------------------

For a quite different (but still awesome!) analysis of the sucky and the awesome, check out Nick Riggle's essay "How Being Awesome Became the Great Imperative of Our Time".

Many thanks to my Facebook friends and followers for the awesome comments and examples on my public post about this last week.

Wednesday, May 03, 2017

On Trump's Restraint and Good Judgment (I Hope)

Yesterday afternoon, I worked up the nerve to say the following to a room full of (mostly) white retirees in my politically middle-of-the-road home town of Riverside, California.

(I said this after giving a slightly trimmed version of my Jan 29 L.A. Times op-ed What Happens to Democracy If the Experts Can't Be Both Factual and Balanced.)

Our democracy requires substantial restraints on the power of the chief executive. The president cannot simply do whatever he wants. That's dictatorship.

Dictatorship has arrived when other branches of government -- the legislature and the judiciary -- are unable to thwart the president. This can happen either because the other branches are populated with stooges or because the other branches reliably fail in their attempts to resist the president.

President Trump appears to have expressed admiration for undemocratic chief executives who seize power away from judiciaries and legislatures.

Here's something that could occur. President Trump might instruct the security apparatus of the United States -- the military, the border patrol, police departments -- to do something, for example to imprison or deport groups of people he describes as a threat. And then a judge or a group of judges might decide that Trump's instructions should not be implemented. And Trump might persist rather than deferring. He might insist that the judge or judges who aim to block him are misinterpreting or misusing the law. He might demand that his orders be implemented despite the judicial outcome.

Here's one reason to think that won't occur: In January, Trump issued an executive order banning travel from seven majority-Muslim countries. When judges decided to block the order, Trump backed down. He insulted the judges and derided the decision, saying it left the nation less safe. But he did not demand that the security apparatus of the United States ignore the decision.

So that's good.

Probably Trump will continue to defer to the judiciary in that way. He has not been as aggressive about seizing power as he could have been, if he were set upon maximizing executive power.

But if, improbably, Trump in the future decides to continue with an order that a judge is attempting to halt -- if, for some reason, Trump decides to insist that the executive branch disregard what he sees as an unwise and unjust judicial decision -- then quite suddenly our democracy would be compromised.

Democracy depends on the improbable capacity of a few people who sit in courtrooms and study the law to convince large groups of people with guns to do things that those people with guns might not want to do, including things that the people with guns regard as contrary to the best interest of their country and the safety of their communities. It's quite amazing. A few people in black robes -- perhaps themselves with divided opinions -- versus the righteous desires of an army.

If Trump says do this, and a judge in Hawaii says no, stop, and then Trump says army of mine, ignore that judge, what will the people with the guns do?

It won’t happen. I don’t think it will happen.

We as a country have chosen to wager our democracy on Trump's restraint and good judgment.

[image source]

Tuesday, May 02, 2017

Is My Usage of "Crazy" Ableist?

In 2014, I published a paper titled "The Crazyist Metaphysics of Mind". Since the beginning, I have been somewhat ambivalent about my use of the word "crazy".

Some of my friends have expressed the concern that my use of "crazy" is ableist. I do agree that the use of "crazy" can be ableist -- for example, when it is used to insult or dismiss someone with a perceived psychological disability.

I have a new book contract with MIT Press. The working title of the book is "How to Be a Crazy Philosopher". Some of my friends have urged me to reconsider the title.

I disagree that the usage is ableist, but I am open to being convinced.

I define a position as "crazy" just in case (1) it is highly contrary to common sense, and (2) we are not epistemically compelled to believe it. "Crazyism" about some domain is the view that something that meets conditions (1) and (2) must be true in that domain. I defend crazyism about the metaphysics of mind, and in some other areas. In these areas, something highly contrary to common sense must be true, but we are not in a good epistemic position to know which of the "crazy" possibilities is the true one. For example, panpsychism might be true, or the literal group consciousness of the United States, or the transcendental ideality of space, or....

I believe that this usage is not ableist in part because (a) I am using the term with a positive valence, (b) I am not labeling individual people, and (c) the term is often used with a positive valence in our culture when it is not used to label people (e.g., "that's some crazy jazz!", "we had a crazy good time in Vegas"). I'm inclined to think that usages like those are typically morally permissible and not objectionably ableist.

I welcome discussion, either in comments on this post or by email, if you have thoughts about this.

Update: On my public post on Facebook, Daniel Estrada writes:

I think the critical thing is to explicitly acknowledge and appreciate how the term "crazy" has been used to stigmatize and mystify issues around mental health. I don't think it's wrong to use any term, as long as you appreciate its history, and how your use contributes to that history. I think the overlap on "mystification" in your use is the extra prickly thorn in this nest. Contributing an essay (maybe just the preface?) where you address these complications explicitly seems like basic due diligence.

I like that idea. If I keep the title and the usage, perhaps we can premise further discussion on the assumption that I do something like what Daniel has suggested.

My Next Book...

I've signed a contract with MIT Press for my next book. Working title: How to Be a Crazy Philosopher.

The book will collect, revise, and to some extent integrate selected blog posts, op-eds, and longform journalism pieces, plus some new material. It will not be thematically unified around "crazyism" although of course it will include some of my material on that theme.

Readers, if any of my posts have struck you as especially memorable and worth including, I'd be interested to hear your opinion, either in the comments to this post or by email.

-----------------------------------

Some friends have expressed concerns about my use of "crazy" in the working title, since they view the usage as ableist. I am ambivalent about my use of the word, though I have been on the hook for it since at least 2014, when I published "The Crazyist Metaphysics of Mind". I will now create a separate post for discussion of that issue.

Thursday, April 27, 2017

The Happy Coincidence Defense and The-Most-I-Can-Do Sweet Spot

Here are four things I care intensely about: being a good father, being a good philosopher, being a good teacher, and being a morally good person. It would be lovely if there were never any tradeoffs among these four aims.

Explicitly acknowledging such tradeoffs is unpleasant -- sufficiently unpleasant that it's tempting to try to rationalize them away. It's distinctly uncomfortable to me, for example, to acknowledge that I would probably be better as a father if I traveled less for work. (I am writing this post from a hotel room in England.) Similarly uncomfortable is the thought that the money I'll be spending on a family trip to Iceland this summer could probably save a few people from death due to poverty-related causes, if given to the right charity.

Today I'll share two of my favorite techniques for rationalizing the unpleasantness away. Maybe you'll find these techniques useful too!

The Happy Coincidence Defense. Consider travel for work. I don't have to travel around the world, giving talks and meeting people. It's not part of my job description. No one will fire me if I don't do it, and some of my colleagues do it considerably less than I do. On the face of it, I seem to be prioritizing my research career at the cost of being a somewhat less good father, teacher, and global moral citizen (given the luxurious use of resources and the pollution of air travel).

The Happy Coincidence Defense says, no, in fact I am not sacrificing these other goals at all! Although I am away from my children, I am a better father for it. I am a role model of career success for them, and I can tell them stories about my travels. I have enriched my life, and then I can mingle that richness into theirs. I am a more globally aware, wiser father! Similarly, although I might cancel a class or two and de-prioritize my background reading and lecture preparation, since research travel improves me as a philosopher, it improves my teaching in the long run. And my philosophical work, isn't that an important contribution to society? Maybe it's important enough to morally justify the expense, pollution, and waste: I do more good for the world traveling around discussing philosophy than I could do leading a more modest lifestyle at home, donating more money to charities, and working within my own community.

After enough reflection of this sort, it can come to seem that I am not making any tradeoffs at all among these four things I care intensely about. Instead, I am maximizing them all! This trip to England is the best thing I can do, all things considered, as a philosopher and as a father and as a teacher and as a citizen of the moral community. Yay!

Now that might be true. If so, that would be a happy coincidence. Sometimes there really are such happy coincidences. But the pattern of reasoning is, I think you'll agree, suspicious. Life is full of tradeoffs among important things. One cannot, realistically, always avoid hard choices. Happy Coincidence reasoning has the odor of rationalization. It seems likely that I am illegitimately convincing myself that something I want to be true really is true.

The-Most-I-Can-Do Sweet Spot. Sometimes people try so hard at something that they end up doing worse as a result. For example, trying too hard to be a good father might make you into a father who is overbearing, who hovers too much, who doesn't give his children sufficient distance and independence. Teaching sometimes goes better when you don't overprepare. And sometimes, maybe, moral idealists push themselves so hard in pursuit of their ideals that they would have been better off pursuing a more moderate, sustainable course. For example, someone moved by the arguments for vegetarianism who immediately attempts the very strictest veganism might be more likely to revert to cheeseburger eating after a few months than someone who sets their sights a bit lower.

The-Most-I-Can-Do Sweet Spot reasoning harnesses these ideas for convenient self-defense: Whatever I'm doing right now is the most I can realistically, sustainably do! Were I to try any harder to be a good father, I would end up being a worse father. Were I to spend any more time reading and writing philosophy than I actually do, I would only exhaust myself. If I gave any more to charity, or sacrificed any more for the well-being of others in my community, then I would... I would... I don't know, collapse from charity-fatigue? Or seethe so much with resentment at how much more awesomely moral I am than everyone else that I'd be grumpy and end up doing some terrible thing?

As with Happy Coincidence reasoning, The-Most-I-Can-Do Sweet Spot reasoning can sometimes be right. Sometimes you really are doing the most you can do about everything you care intensely about. But it would be kind of amazing if this were reliably the case. It wouldn't be that hard for me to be a somewhat better father, or to give somewhat more to my students -- with or without trading off other things. If I reliably think that wherever I happen to be in such matters, that's the Sweet Spot, I am probably rationalizing.

Having cute names for these patterns of rationalization helps me better spot them as they are happening, I think -- both in myself and sometimes, I admit, somewhat uncharitably, also in others.

Rather than think of something clever to say as the kicker for this post, I think I'll give my family a call.

Friday, April 21, 2017

Common Sense, Science Fiction, and Weird, Uncharitable History of Philosophy

Philosophers have three broad methods for settling disputes: appeal to "common sense" or culturally common presuppositions, appeal to scientific evidence, and appeal to theoretical virtues like simplicity, coherence, fruitfulness, and pragmatic value. Some of the most interesting disputes are disputes in which all three of these broad methods are problematic and seemingly indecisive.

One of my aims as a philosopher is to intervene on common sense. "Common sense" is inherently conservative. Common sense used to tell us that the Earth didn't move, that humans didn't descend from ape-like ancestors, that certain races were superior to others, that the world was created by a god or gods of one sort or another. Common sense is a product of biological and cultural evolution, plus the cognitive and social development of people in a limited range of environments. Common sense only has to get things right enough, for practical purposes, to help us manage the range of environments to which we are accustomed. Common sense is under no obligation to get it right about the early universe, the microstructure of matter, the history of the species, future technologies, or the consciousness of weird hypothetical systems we have never encountered.

The conservatism and limited vision of common sense lead us to dismiss as "crazy" some philosophical and scientific views that might in fact be true. I've argued that this is especially so regarding theories of consciousness, about which something crazy must be true. For example: literal group consciousness, panpsychism, and/or the failure of pain to supervene locally. Although I don't believe that existing arguments decisively favor any of those possibilities, I do think that we ought to restrain our impulse to dismiss such views out of hand. Fit with common sense is one important factor in evaluating philosophical claims, especially when direct scientific evidence and considerations of general theoretical virtue are indecisive, but it is only one factor. We ought to be ready to accept that in some philosophical domains, our commonsense intuitions cannot be entirely preserved.

Toward this end, I want to broaden our intuitive sense of the possible. The two best techniques I know are science fiction and cross-cultural philosophy.

The philosophical value of science fiction consists not only in the potential of science fictional speculations to describe possible futures that we might actually encounter. Historically, science fiction has not been a great predictor of the future. The primary philosophical value of science fiction might rather consist in its ability to flex our minds and disrupt commonsense conservatism. After reading far-out stories about weird utopias, uploading into simulated realities, bizarrely constructed intelligent aliens, body switching, Matrioshka Brains, and alternative universes, philosophical speculations about panpsychism and group consciousness no longer seem quite so intolerably weird. At least that's my (empirically falsifiable) conjecture.

Similarly, brain-flexing is an important part of the value of reading the history of philosophy -- especially work from traditions other than those with which you are already familiar. Here it's especially important not to be too "charitable" (i.e. assimilative). Relish the weirdness -- "weird" from your perspective! -- of radical Buddhist metaphysics, of medieval Chinese neo-Confucianism, of neo-Platonism in late antiquity, of 19th century Hegelianism and neo-Hegelianism.

If something that seems crazy must be true about the metaphysics of consciousness, or about the nature of objects and causes, or about the nature of moral value -- as extended philosophical discussions of these topics suggest probably is the case -- then to evaluate the possibilities without excess conservatism, we need to get used to bending our minds out of their usual ruts.

This is my new favorite excuse for reading Ted Chiang, cyberpunk, and Zhuangzi.

[image source]

Friday, April 14, 2017

We Who Write Blogs Recommend... Blogs!

Here's The 20% Statistician, Daniel Lakens, on why blogs have better science than Science.

Lakens observes that blogs (usually) have open data, sources, and materials; open peer review; no eminence filter; easy error correction; and open access.

I would add that blogs are designed to fit human cognitive capacities. To reach a broad audience, they are written to be broadly comprehensible -- and as it turns out, that's a good thing for science (and philosophy), since it reduces the tendency to hide behind jargon, technical obscurities, and dubious shared subdisciplinary assumptions. The length of a typical substantive blog post (500-1500 words) is also, I think, a good size for human cognition: long enough to have some meat and detail, but short enough that the reader can keep the entire argument in view. These features make blog posts much easier to critique, enabling better evaluation by specialists and non-specialists alike.

Someone will soon point out, for public benefit, the one-sidedness of Lakens' and my arguments here.

[HT Wesley Buckwalter]

Sunday, April 09, 2017

Does It Matter If the Passover Story Is Literally True?

My opinion piece in today's LA Times.

You probably already know the Passover story: How Moses asked Pharaoh to let his enslaved people leave Egypt, and how Moses’ god punished Pharaoh — bringing about the death of the Egyptians’ firstborn sons even as he passed over Jewish households. You might even know the ancillary tale of the Passover orange. How much truth is there in these stories? At synagogues this time of year, myth collides with fact, tradition with changing values. Negotiating this collision is the puzzle of modern religion.

Passover is a holiday of debate, reflection, and conversation. Last Passover, as my family and I and the rest of the congregation waited for the feast at our Reform Jewish temple, our rabbi prompted us: “Does it matter if the story of Passover isn’t literally true?”

Most people seemed to shake their heads. No, it doesn’t matter.

I was imagining the Egyptians’ sons. I am an outsider to the temple. My wife and teenage son are Jewish, but I am not. My 10-year-old daughter, adopted from China at age 1, describes herself as “half Jewish.”

I nodded my head. Yes, it does matter if the Passover story is literally true.

“Okay, Eric, why does it matter?” Rabbi Suzanne Singer handed me the microphone.

I hadn’t planned to speak. “It matters,” I said, “because if the story is literally true, then a god who works miracles really exists. It matters if there is such a god or not. I don’t think I would like the moral character of that god, who kills innocent Egyptians. I’m glad there is no such god.”

“It is odd,” I added, “that we have this holiday that celebrates the death of children, so contrary to our values now.”

The microphone went around, others in the temple responding to me. Values change, they said. Ancient war sadly and necessarily involved the death of children. We’re really celebrating the struggle for freedom for everyone....

Rabbi Singer asked if I had more to say in response. My son leaned toward me. “Dad, you don’t have anything more to say.” I took his cue and shut my mouth.

Then the Seder plates arrived with the oranges on them.

Seder plates have six labeled spots: two bitter herbs, charoset (fruit and nuts), parsley, a lamb bone, a boiled egg, each with symbolic value. There is no labeled spot for an orange.

The first time I saw an orange on a Seder plate, I was told this story about it: A woman was studying to be a rabbi. An orthodox rabbi told her that a woman belongs on the bimah (pulpit) like an orange belongs on the Seder plate. When she became a rabbi, she put an orange on the plate.

A wonderful story — a modern, liberal story. More comfortable than the original Passover story for a liberal Reform Judaism congregation like ours, proud of our woman rabbi. The orange is an act of defiance, a symbol of a new tradition that celebrates gender equality.

Does it matter if it’s true?

Here’s what actually happened. Dartmouth Jewish Studies professor Susannah Heschel was speaking to a Jewish group at Oberlin College in Ohio. The students had written a story in which a girl asks a rabbi if there is room for lesbians in Judaism, and the rabbi rises in anger, shouting, “There’s as much room for a lesbian in Judaism as there is for a crust of bread on the Seder plate!” Heschel, inspired by the students but reluctant to put anything as unkosher as leavened bread on the Seder plate, used a tangerine instead.

The orange, then, is not a wild act of defiance, but already a compromise and modification. The shouting rabbi is not an actual person but an imagined, simplified foe.

It matters that it’s not true. From the two stories of the orange, we learn the central lesson of Reform Judaism: that myths are cultural inventions built to suit the values of their day, idealizations and simplifications, changing as our values change — but also that only limited change is possible in a tradition-governed institution. An orange, but not a crust of bread.

In a way, my daughter and I are also oranges: a new type of presence in a Jewish congregation, without a marked place, welcomed this year, unsure we belong, at risk of rolling off.

In the car on the way home, my son scolded me: “How could you have said that, Dad? There are people in the congregation who take the Torah literally, very seriously! You should have seen how they were looking at you, with so much anger. If you’d said more, they would practically have been ready to lynch you.”

Due to the seating arrangement, I had been facing away from most of the congregation. I hadn’t seen those faces. Were they really so outraged? Was my son telling me the truth on the way home that night? Or was he creating a simplified myth of me?

In belonging to an old religion, we honor values that are no longer entirely ours. We celebrate events that no longer quite make sense. We can’t change the basic tale of Passover. But we can add liberal commentary to better recognize Egyptian suffering, and we can add a new celebration of equality.

Although the new celebration, the orange, is an unstable thing atop an older structure that resists change, we can work to ensure that it remains. It will remain only if we can speak the story of it compellingly enough to give our new values too the power of myth.

-------------------------------------

Revised and condensed from my blogpost Orange on the Seder Plate (Apr 27, 2016).

Wednesday, April 05, 2017

Only 4% of Editorial Board Members of Top-Ranked Anglophone Philosophy Journals Are from Non-Anglophone Countries

If you're an academic aiming to reach a broad international audience, it is increasingly the case that you must publish in English. Philosophy is no exception. This trend gives native English speakers an academic advantage: They can more easily reach a broad international audience without having to write in a foreign language.

A related question is the extent to which people who make their academic home in Anglophone countries control the English-language journals in which so much of our scholarly communication takes place. One could imagine the situation either way: Maybe the most influential academic journals in English are almost exclusively housed in Anglophone countries and have editorial boards almost exclusively composed of people in those same countries; or maybe English-language journals are a much more international affair, led by scholars from a diverse range of countries.

To examine this question, I looked at the editorial boards of the top 15 ranked journals in Brian Leiter's 2013 poll of "top philosophy journals without regard to area". I noted the primary institution of every board member. (For methodological notes see the supplement at the end.)

In all, 564 editorial board members were included in the analysis. Of these, 540 (96%) had their primary academic affiliation with an institution in an Anglophone country. Only 4% of editorial board members had their primary academic affiliation in a non-Anglophone country.

The following Anglophone countries were represented:

USA: 377 philosophers (67% of total)
UK: 119 (21%)
Australia: 26 (5%)
Canada: 13 (2%)
New Zealand: 5 (1%)

The following non-Anglophone countries were represented:

Germany: 6 (1%)
Sweden: 5 (1%)
Netherlands: 3 (1%)
China (incl. Hong Kong): 2 (<1%)
France: 2 (<1%)
Belgium: 1 (<1%)
Denmark: 1 (<1%)
Finland: 1 (<1%)
Israel: 1 (<1%)
Singapore: 1 (<1%) [N.B.: English is one of four official languages]
Spain: 1 (<1%)

Worth noting: Synthese showed much more international participation than any of the other journals, with 13/31 (42%) of its editorial board from non-Anglophone countries.

It seems to me that if English is to continue in its role as the de facto lingua franca of philosophy (ironic foreign-language use intended!), then the editorial boards of the most influential journals ought to reflect substantially more international participation than this.
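
If you'd like to double-check the arithmetic, here is a minimal sketch in Python (not the original tallying method, which was done by hand from journal websites) that recomputes the headline percentages from the per-country counts reported above:

```python
# A minimal sketch (not the original tallying script) that recomputes the
# headline percentages from the per-country counts reported in this post.
anglophone = {"USA": 377, "UK": 119, "Australia": 26, "Canada": 13, "New Zealand": 5}
non_anglophone = {
    "Germany": 6, "Sweden": 5, "Netherlands": 3, "China (incl. Hong Kong)": 2,
    "France": 2, "Belgium": 1, "Denmark": 1, "Finland": 1, "Israel": 1,
    "Singapore": 1, "Spain": 1,
}

total = sum(anglophone.values()) + sum(non_anglophone.values())
print(f"total editorial board members: {total}")      # 564
for label, counts in (("Anglophone", anglophone), ("non-Anglophone", non_anglophone)):
    n = sum(counts.values())
    print(f"{label}: {n} ({n / total:.0%} of total)")  # 540 (96%) / 24 (4%)
```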

-------------------------------------------------

Related Posts:

How Often Do Mainstream Anglophone Philosophers Cite Non-Anglophone Sources? (Sep 8, 2016)

SEP Citation Analysis Continued: Jewish, Non-Anglophone, Queer, and Disabled Philosophers (Aug 14, 2014)

-------------------------------------------------

Methodological Notes:

The 15 journals were Philosophical Review, Journal of Philosophy, Nous, Mind, Philosophy & Phenomenological Research, Ethics, Philosophical Studies, Australasian Journal of Philosophy, Philosopher's Imprint, Analysis, Philosophical Quarterly, Philosophy & Public Affairs, Philosophy of Science, British Journal for the Philosophy of Science, and Synthese. Some of these journals are "in house" or have a regional focus in their editorial boards. I did not exclude them on those grounds. It is relevant to the situation that the two top-ranked journals on this list are edited by the faculty at Cornell and Columbia respectively.

I excluded editorial assistants and managers without full-time permanent academic appointments (who are typically grad students or publishing or secretarial staff). I included editorial board members, managers, consultants, and staff with full-time permanent academic appointments, including emeritus.

I used the institutional affiliation listed at the journal's "editorial board" website when that was available (even in a few cases where I knew the information to be no longer current), otherwise I used personal knowledge or a web search. In each case, I tried to determine the individual's primary institutional affiliation or most recent primary affiliation for emeritus professors. In a few cases where two institutions were about equally primary, I used the first-listed institution either on the journal's page or on a biographical or academic source page that ranked highly in a Google search for the philosopher.

I am sure I have made some mistakes! I've made the raw data available here. I welcome corrections. However, I will only make corrections in accord with the method above. For example, it is not part of my method to update inaccurate affiliations on the journals' websites. Trying to do so would be unsystematic, disproportionately influenced by blog readers and people in my social circle.

A few mistakes are inevitable in projects of this sort and shouldn't have a large impact on the general findings.

-------------------------------------------------

[image source]

Thursday, March 30, 2017

On Being Accused of Ableism

Like many (most?) 21st-century North Americans, I hate to be told I’ve done something ableist (or racist, or sexist). Why does it sting so much, and how should I think about such a charge, when it is leveled against me?

Short answer: It stings so much because it’s usually partly, if only partly, true—and partly true criticisms are the ones that sting worst. And the best reaction to the charge is, usually, to recognize its partial, if only partial, truth.

First, let’s remind ourselves of a quote from the great Confucius:

How fortunate I am! If I happen to make a mistake, others are sure to inform me.
(Analects 7.31, Slingerland trans.)

(As it happens, bloggers are fortunate in just the same way.)

Confucius might have been speaking partly ironically in that particular passage. A couple of centuries later, another Confucian, Xunzi, speaks not at all ironically:

He who rightly criticizes me acts as a teacher to me, and he who rightly supports me acts as friend to me, while he who flatters and toadies to me acts as a villain toward me. Accordingly, the gentleman exalts those who act as teachers toward him....
(ch 2, Hutton trans., p. 9)

This is difficult advice to heed.

Note, though: If I make a mistake. He (she, they) who rightly criticizes me. Someone who criticizes me wrongly is no teacher, only an annoying pest! And if you’re anything like me, then your gut reaction to charges of ableism will usually be to want to swat back at the pest, to assume, defensively, that the criticism must be off-target, because of course you’re a good egalitarian, committed to fighting unjustified prejudice!

No. Here’s the thing. We all have ableist reactions and engage in ableist practices sometimes, to some degree. Disability is so various, and the ableist structures of our culture so deep and pervasive, that it would be superhuman to be immune. Maybe you are immune to ableism toward people who use wheelchairs. Maybe your partner of many years uses a wheelchair and you see wheelchair-use as just one of the many diverse human ways of comporting oneself, with its challenges and (sometimes) benefits, just like every other way of getting around. But how do you react to someone who stutters? How do you react to someone who is hard of hearing? How do you react to someone with depression or PTSD? Someone with facial burns or another skin condition you find unappealing? Or a very short man? What sorts of social structures do you manifest and reinforce in your behavior? In your choice of words? In your implicit assumptions? In what you expect (and don’t expect) people to be able to do?

Here’s my guess: You don’t always act in ways that are free of unjustified prejudice. If someone calls you out on ableism, they might well be right.

You might sincerely and passionately affirm that "all people are equal"—whatever that amounts to, which is really hard to figure out!—and you might even pay some substantial personal costs for the sake of a more just and equal society. In this respect, you are not ableist. You are even anti-ableist. But you are not a unified thing. Unless you are an angel walking upon the Earth, our society’s ableism acts through you.

An absurd charge does not sting. If someone tells me I spend too much time watching soccer, the charge is merely ridiculous. I don’t watch soccer. But if someone charges me with ableism, the partial truth of it does sting, or at least the plausibility of it stings. Maybe I shouldn’t have used the particular word that I used. Maybe I shouldn’t have made that particular assumption or dismissed that particular person. Maybe, deep down, I’m not the egalitarian I thought I was. Ouch.

Your ableist actions and reactions can be hard to recognize and admit if you implicitly assume that people have unified attitudes. If people have unified attitudes, they are either prejudiced against disabled people or they are not. If people have unified attitudes, then evidence of ableist behavior is evidence that you are one of the prejudiced, one of the bad guys. No one wants to think that about themselves. If people have unified attitudes, then it’s easy to assume that because you explicitly reject ableism you cannot be simultaneously enacting the very ableism that you are fighting against.

[Image description: psychedelic art "shifting realities", explosion of mixing colors, white on right through blue on the left]

The best empirical evidence suggests that people are highly disunified—inconstant across situations, capable of both great sacrifice and appalling misbehavior, variable in word and deed, spontaneously enacting our cultural practices for both good and bad. If this is true, then you ought to expect that charges of ableism against you will sometimes stick. You should be unsurprised if they do. But you should also celebrate that these charges are only very partial: The whole you is not like that! The whole you is a tangled chaos with many beautiful, admirable parts!

If you accept your disunity, you ought also to be forgiving. You ought to be forgiving especially if you cast your eye more broadly to the many forms of prejudice and injustice in which we participate. Suppose, impossibly, that you were utterly free of any ableist tendencies, practices, or background assumptions. It would be a huge life project to achieve that. Are you equally free of racism, classism, sexism, ageism, bias against those who are not conventionally beautiful? Are you saving the environment, fighting international poverty, phoning your senators about prisons and wage justice, volunteering in your community?

We must pick our projects. A more vivid appreciation of our own disunity, flaws, and abandoned good intentions ought to make us both more ready to see the truth in charges of prejudice against us and also more forgiving of the disunity, flaws, and abandoned good intentions in others.

[image source]

[Cross-posted at Discrimination and Disadvantage; HT Shelley Tremain for the invitation and editorial feedback]

Wednesday, March 22, 2017

What Kinds of Universities Lack Philosophy Departments? Some Data

University administrators sometimes think it's a good idea to eliminate their philosophy departments. Some of these efforts have been stopped, others not. This has led me to wonder how prevalent philosophy departments are in U.S. colleges and universities, and how their presence or absence relates to institution type.

Here's what I did. I pulled every ranked college and university from the famous US News college ranking site, sorting them into four categories: national universities, national liberal arts colleges, regional universities (combining the four US News categories for regional universities: north, south, midwest, and west), and regional colleges (again combining north, south, midwest, and west). I randomly selected twenty schools from each of these four lists. Then I attempted to determine from the school's website whether it had a philosophy department and a philosophy major. [See note 1 on "departments".]
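
For readers who want to replicate the sampling step, here is a minimal sketch in Python. It assumes the four ranked lists have already been collected; the list contents and sizes below are hypothetical placeholders, not the actual US News data:

```python
import random

# Hypothetical placeholder lists standing in for the four US News categories;
# in the actual exercise these would hold the ranked school names.
ranked_lists = {
    "national universities":          [f"Nat. Univ. {i}" for i in range(1, 201)],
    "national liberal arts colleges": [f"Lib. Arts Coll. {i}" for i in range(1, 181)],
    "regional universities":          [f"Reg. Univ. {i}" for i in range(1, 601)],   # N+S+MW+W combined
    "regional colleges":              [f"Reg. Coll. {i}" for i in range(1, 321)],   # N+S+MW+W combined
}

random.seed(2017)  # fixed seed so the same twenty schools come out on each run
samples = {name: random.sample(schools, 20) for name, schools in ranked_lists.items()}

for name, schools in samples.items():
    print(f"{name}: {schools[:3]} ...")
```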

Since some schools combine philosophy with another department (e.g. "Philosophy and Religion") I distinguished standalone philosophy departments from combined departments that explicitly mention "philosophy" in the department name along with something else.

I welcome corrections! The websites are sometimes a little confusing, so it's likely that I've made an error or two.

***************************************************

Results

National Universities:

Eighteen of the twenty sampled "national universities" have standalone philosophy departments (or equivalent: note 1) and majors. The only two that do not are institutes of technology: Georgia Tech (ranked #34) and Florida Tech (#171).

Virginia Tech (#74), however, does have a Department of Philosophy and a philosophy major -- as do Stanford, Duke, Rice, Rochester, Penn State, UT Austin, Rutgers-New Brunswick, Baylor, U Mass Amherst, Florida State, Auburn, Kansas, Biola, Wyoming (for now), North Carolina-Charlotte, Missouri-St Louis, and U Mass Boston.

National Liberal Arts Colleges:

Similarly, seventeen of the twenty sampled "national liberal arts colleges" have standalone philosophy departments, and eighteen offer the philosophy major. Offering neither department nor major are Virginia Military Institute (#72) and the very small science/engineering college Harvey Mudd (#21) (circa 735 students, part of the Claremont consortium). Beloit College (#62, circa 1358 students) offers the philosophy major within a "Department of Philosophy and Religious Studies".

The seventeen sampled schools with both major and standalone department are: Swarthmore, Carleton, Hamilton, Wesleyan, Richmond, DePauw, Puget Sound, Westmont, Hollins, Lake Forest, Stonehill, Hanover, Guilford, Carthage, Oglethorpe, Franklin (not to be confused with Franklin & Marshall), and Georgetown College (not to be confused with Georgetown University).

Some of these colleges are very small. According to Wikipedia estimates, two have fewer than a thousand students: Hollins (639) and Georgetown (984). Another four are below 1300: Franklin (1087), Hanover (1133), Oglethorpe (1155), and Westmont (1298).

Regional Universities:

Nine of the twenty sampled regional universities have standalone philosophy departments, and another three have a combined department with philosophy in its name. Twelve offer the philosophy major (not exactly the same twelve). Seven offer neither major nor department: Ramapo College of New Jersey, Wentworth Institute of Technology, Delaware Valley University, Stephens College, Mount St Joseph, Elizabeth City State, and Robert Morris. Two of these are specialty schools: Wentworth is a technical institute, and Stephens specializes in arts and fashion.

    Offering the major and/or a standalone or combined department: Simmons, Whitworth, Mansfield of Pennsylvania, Rosemont, U of Northwestern-St Paul, Central Washington, Towson, Gannon, North Park, Wisconsin-Oshkosh, Northern Michigan, Mount Mary, and Appalachian State.

    Regional Colleges:

    Seven of the twenty sampled regional colleges have a standalone philosophy department, and another four have a combined department with philosophy in its name. Seven offer a philosophy major, and one (Brevard) has a "Philosophy and Religion" major. Offering neither major nor department: California Maritime Academy, Marymount California U (not to be confused with Loyola Marymount), Paul Smith's College (not to be confused with Smith College), Alderson Broaddus, Dickinson State, North Carolina Wesleyan, Crown College, and Iowa Wesleyan. Four of these are specialty schools: California Maritime Academy and Marymount California each offer only six majors total, Paul Smith's focuses on tourism and service industries, and Iowa Wesleyan offers only three Humanities majors: Christian Studies, Digital Media Design, and Music.

    Offering the major and/or a standalone or combined department: Carroll, Mount Union, Belmont Abbey, La Roche, St Joseph's, Blackburn, Messiah, Tabor, Ottawa University (not to be confused with University of Ottawa), Northwestern College (not to be confused with Northwestern University), and Cazenovia College.

    Summary

    In my sample of forty nationally ranked universities and liberal arts colleges, every school has a standalone philosophy department and offers a philosophy major, with five exceptions: three science/engineering specialty schools, one military institute, and one school offering a philosophy major within a department of "Philosophy and Religious Studies".

    Even the smallest nationally ranked liberal arts colleges -- those with 1300 or fewer students -- have philosophy majors and standalone philosophy departments (or similar administrative units), with the exception of one science/engineering specialty college.

    The schools that US News describes as "regional" are mixed. In this sample of forty, about half offer philosophy majors and about half have standalone philosophy departments. Among the fifteen with neither department nor major in philosophy, six are specialty schools.

    I'll refrain from drawing causal or normative conclusions here.

    ***************************************************

    Update 8:53 a.m.: Expanding the Sample:

    I'm tempted to conclude that, setting aside specialty schools, almost every nationally ranked university and liberal arts college, no matter how small, has a philosophy major, and that a large majority have standalone philosophy departments. But maybe that's too strong a claim to draw from a sample of forty? So I've doubled the sample.

    Doubling the sample supports this claim. Among the additional twenty universities sampled, nineteen offer the philosophy major, and the one that does not, UC Merced, is a new campus that plans to add the philosophy major soon. Sixteen have standalone Philosophy Departments, and three have combined departments: Philosophy and Religion at Northeastern and Tulsa, Politics and Philosophy at University of Idaho. The sampled universities with both standalone philosophy departments and the philosophy major are Tennessee, Nevada-Reno, Colorado State, South Dakota, New Mexico, Dartmouth, UC San Diego, U of Oregon, Columbia, Indiana-Bloomington, Kentucky, Alabama-Huntsville, Brandeis, George Washington, Azusa Pacific, and UC Riverside.

    Adding twenty more nationally ranked liberal arts colleges also confirms my initial results. Nineteen offer the major, with the only exception being Thomas Aquinas College, which appears to offer only one major to all students (Liberal Arts). Three colleges have combined departments, all with religion: Washington College, Wartburg, and College of Idaho. Sixteen have both major and standalone department: Wooster, Wheaton, Hampden-Sydney, Muhlenberg, Houghton, Colgate, Middlebury, Washington & Lee, New College of Florida, Transylvania, Sweet Briar, Knox College, Colorado College, Oberlin, Luther, and Pomona.

    ***************************************************

    Note 1: Some schools don't appear to have "departments", or have very broad "departments" that encompass many majors. If a school had fewer than fifteen "departments", I attempted to assess whether it had a department-like administrative unit for philosophy or, if that assessment wasn't possible, whether it hosted a philosophy major apparently on administrative par with popular majors like psychology and biology.

    [image source]

    Thursday, March 16, 2017

    My Defense of Anger and Empathy: Flanagan's, Bloom's, and Others' Responses

    Last week I posted a defense of anger and empathy against recent critiques by Owen Flanagan and Paul Bloom. The post drew a range of lively responses in social media, including from Flanagan and Bloom themselves.

    My main thought was just this: Empathy and anger are part of the rich complexity of our emotional lives, intrinsically valuable insofar as having rich emotional lives is intrinsically valuable.

    We can, of course, also debate the consequences of empathy and anger, as Flanagan and Bloom do -- and if the consequences of one or the other are bad enough we might be better off in sum without them. But we shouldn't look only at consequences. There is also an intrinsic value in having a rich emotional life, including anger and empathy.


    1. Adding Nuance.

    I have presented Flanagan's and Bloom's views simply: Flanagan and Bloom argue against anger and empathy, respectively. Their detailed views are more nuanced, as one might expect. One interpretive question is whether it is fair to set aside this nuance in critiquing their views.

    Well, how do they themselves summarize their views?

    Flanagan argues in defense of the Stoic and Buddhist program of entirely "eliminating" or "extirpating" anger, against mainstream "containment" views which hold that anger is a virtue when it is moderate, appropriate to the situation, and properly contained (p. 160). Although this is where he puts his focus and energy, he adds a few qualifications like this: "I do not have a firm position [about the desirability of entirely extirpating anger]. I am trying to explore varieties of moral possibility that we rarely entertain, but which might be genuine possibilities for us" (p. 215).

    Bloom titles his book Against Empathy. He says that "if we want to make the world a better place, then we are better off without empathy" (p. 3) and "On balance, empathy is a negative in human affairs" (p. 13). However, Bloom also allows that he wouldn't want to live in a world without empathy, anger, shame, or hate (p. 9). At several points, he accepts that empathy can be pleasurable and play a role in intimate relationships.

    It's helpful to distinguish between the headline view and the nuanced view.

    Here's what I think the typical reader -- including the typical academic reader -- recalls from their reading, two weeks later: one sentence. Maybe "Bloom is against empathy because it's so biased and short-sighted". Maybe "Flanagan thinks we should try to eliminate anger, like a Buddhist or Stoic sage". These are simplifications, but they come close enough to how Bloom and Flanagan summarize and introduce their positions that it's understandable if that's how readers remember their views. In writing academic work, especially academic work for a broad audience, it's crucial to keep our eye on the headline view -- the practical, memorable takeaway that is likely to be the main influence on readers' thoughts down the road.

    As an author, you are responsible for both the headline view and the nuanced view. Likewise, as a critic, I believe it's fair to target the headline view as long as one also acknowledges the nuance beneath.

    In their friendly replies on social media, both Bloom and Flanagan seemed to acknowledge the value of engaging first at the headline level; but they both also pushed me on the nuance.

    Hey, before I go further, let me not forget to be friendly too! I loved both these books. Of course I did. Otherwise, I wouldn't have spent my time reading them cover-to-cover and critiquing them. Bloom and Flanagan challenge my presuppositions in helpful ways, and my thinking has advanced in reacting to them.

    For more on the downsides of nuance, see Kieran Healy.

    2. Bloom's Response.

    In this tweet, Bloom appears to be suggesting that empathy is fine as long as you don't use it to guide moral judgment. (He makes a similar claim in a couple of Facebook comments on my post.) Similarly, at the end of his book, he says he worries "that I have given the impression that I am against empathy" (p. 240). An understandable worry, given the title of his book! (I am sure he is aware of this and speaking partly tongue in cheek.) He clarifies that he is against empathy "only in the moral domain... but there is more to life than morality" (p. 240-241). Empathy, he says, can be an immense source of pleasure.

    The picture seems to be that the world would be morally better without empathy, but that there can be excellent selfish reasons to want to experience empathy nonetheless.

    If the picture here is that there are some decisions to which morality is irrelevant and that it's fine to be guided by empathy in those decisions, I would object as follows. Every decision is a moral decision. Every dollar you spend on yourself is a dollar that could instead be donated to a good cause. Every minute you spend is a minute in which you could have done something kinder or more helpful than what you actually did. Every person you see, you could greet warmly or grumpily, give them a kind word or not bother. Of course, it's exhausting to think this way! But still, there is, I believe, no such thing as a morally innocent choice. If you purge empathy from moral decision-making, you purge it from decision-making.

    Here's what seems closer to right, to me -- and what I think is one of the great lessons of Bloom's book. Public policy decisions and private acts directed toward distant strangers (e.g., what charities to support) are perhaps, on average, better made in a mood of cool rationality, to the extent that is possible. But it's different for personal relationships. Bloom argues that empathy might make us "too-permissive parents and too-clingy friends" (p. 163). This is a possible risk, sure. Sometimes empathic feelings should be set aside or even suppressed. Of course, there are risks to attempting to set aside empathy in favor of cool rationality as well (see, e.g., Lifton on Nazi doctors). Let's not over-idealize either process! In some cases, it might be morally best to experience empathy while retaining the ability to set it aside and act otherwise if necessary, rather than not to feel empathy at all.

    Furthermore, it might be partly constitutive of the syndrome of full-bodied friendship and loving-parenthood that one is prone to empathy. I am Aristotelian or Confucian enough to see the flourishing of such relationships as central to morality.


    3. Flanagan's Response.

    On Facebook, Flanagan also added nuance to his view, writing:

    There are varieties of anger. 1. Payback anger -- you hurt me, I hurt you. 2. Pain-passing -- I am hurting (not because of you), I pass pain to you. 3. Instrumental anger -- I aim to get you to do what is right (this might hurt your feelings etc., but that is not my aim). 4. Political anger -- I am outraged at racist or sexist etc. practices and want them to end. 5. Impersonal anger -- at the gods or heaven for awful states of affairs, the dying child. I am concerned about 1 & 2. I worry about 3-4 if and when the desire to pass pain or payback gets too much of a grip....

    This is helpful -- and also not entirely Buddhist or Stoic (which of course is fine, especially since Flanagan presented his earlier arguments against anger as only something worth exploring rather than his final view).

    In his thinking on this, Flanagan has partly been influenced by Myisha Cherry's and others' work on anger as a force for social change.

    I appreciate the defense of anger as a path toward social justice. But I also want to defend anger's intrinsic value, not just its instrumental value; and specifically I want to defend the intrinsic value of payback anger.

    The angry jerk is an ugly thing. Grumping around, feeling his time is being wasted by the incompetent fools around him, feeling he hasn't been properly respected, enraged when others' ends conflict with his own. He should settle down, maybe try some empathy! But consider, instead, the angry sweetheart.

    I see the "sweetheart" as the opposite of the jerk -- someone who is spontaneously and deeply attuned to the interests, values, and attitudes of other people, full of appreciation, happy to help, quick to believe that he rather than the other might be in the wrong, quick to apologize, and in extreme cases so attuned to others' perspectives that he risks losing track of his own interests, values, and attitudes. SpongeBob SquarePants, Forrest Gump, sweet subordinate sitcom mothers from the 1950s and 1960s. These people don't feel enough anger. We should, I think, cheer their anger when it finally rises. We should let them relish their anger, the sense that they have been harmed and that the wrongdoer should pay them back.

    I don't want sweethearts always to be bodhisattvas toward those who wrong them. Anger manifests the self-respect that they should claim, and it's part of the emotional range of experience that they might have too little of.


    4. More.

    Shoot, I've already gone on longer than intended, and I haven't got to all the comments by others that I'd wanted to address! Just quickly:

    Some people suggested that, in the right kind of sage, eliminating anger might open up other ranges of emotion. Interesting thought! I'd also add that there's a kind of between-person richness that I'd celebrate. If sages can eliminate anger as a great personal and moral accomplishment, I think that's wonderful. My concern is more with the ideal of blanket extirpation as general advice.

    Some people pointed out that the anger of the oppressed is particularly worth cultivating -- and that there may even be whole communities of oppressed people who feel too little anger. Yes!

    Others wondered whether I would favor adding brand-new, unheard-of negative emotions just to broaden our emotional range. This would make a fascinating speculative fiction thought experiment.

    More later, I hope. In addition to the comments section at The Splintered Mind, the public Facebook conversation was lively and fun.

    [image source]

    Friday, March 10, 2017

    Empathy, Anger, and the Richness of Life

    I've been reading books that advise us to try to eliminate whole classes of moral emotions.

    In Against Empathy, Paul Bloom describes empathy as the unhealthy "sugary soda" of morality, best purged from our diets. He argues that as a moral motivator, empathy is much more biased than rational compassion, and it can also motivate excessive aggression in revenge against the harming party. (See also Prinz 2011.)

    In The Geography of Morals, Owen Flanagan recommends that we try to entirely extirpate anger from our lives, as suggested by some of the great Buddhist and Stoic sages. (See also Nussbaum 2016.)

    Flanagan's and Bloom's cases against anger and empathy, respectively, are mainly practical or instrumental (and not quite as absolute as their summary statements might sound). The costs of these emotions, they suggest, outweigh the benefits. As responses to suffering and injustice, other emotional reactions are simply preferable, both personally and socially: rational compassion, serenity, hope, thoughtful intervention, reconciliation, cool-headed justice, a helping hand.

    I want to push back against the idea that we should narrow the emotional range of our lives by rejecting empathy and anger. My thought is this: Having a rich emotional range is intrinsically valuable.

    One way of thinking about intrinsic value is to consider what you would wish for, if you knew that there was a planet on the far side of the galaxy, beyond any hope of contact with us. (I discuss this thought experiment here and here.) Would you hope that it's a sterile rock? A planet with microbial life but not multi-cellular organisms? A planet with the equivalent of deer and cows but no creatures of human-like intelligence? Human-like intelligences, but all lying comatose, plugged into simple happiness stimulators?

    Here's what I'd hope for: a rich, complex, multi-layered society of loves and hates, victories and losses, art and philosophy, history, athletics, science, music, literature, feats of engineering, great achievements and great failures. When I think about a flourishing world, I want all of that. And negative emotions, destructive emotions, useless bad stuff, those are part of it. If I imagine a society with rational compassion, but no empathy, no anger -- a serene world populated exclusively by Buddhist and Stoic sages -- I have imagined a lesser world. I have imagined a step away from all the wonderful complexity and richness of life.

    I don't know how to argue for this idea. I can only invite you to consider whether you share it. There would be something flat about a world without empathy or anger.

    Whether individual lives without empathy or anger would be similarly flat is a different question. Maybe they wouldn't be -- especially in a world where extirpating such emotions is a rare achievement, adding to, rather than subtracting from, the diversity and complexity of our human forms of life. But interpreted as general advice, applicable to everyone, the advice to eliminate whole species of emotion is advice to uncolor the world.

    Flanagan comes close to addressing my point when he considers what he calls the "attachment" objection to the extirpation of anger (esp. p. 202-203). The objector says that part of loving someone is being disposed to respond with anger if they are unjustly harmed. Flanagan acknowledges that a readiness to feel some emotions -- sorrow, for example -- might be necessary for loving attachment. But he denies that anger is among those necessary emotions. A person who lacks any disposition to anger can still love. Bloom says something analogous about empathy.

    I'm not sure whether I'd say that one's love is flatter if one would never feel anger or empathy on behalf of one's beloved, but in any case my objection is simpler. It is that part of the glorious richness of life on Earth is our range of intense and varied emotions. To be against a whole class of emotions is to be against part of what makes the world the great and amazing whirlwind it is.

    [image source]

    Wednesday, March 01, 2017

    Why Wide Reflective Equilibrium in Ethics Might Fail

    "Reflective equilibrium" is sometimes treated as the method of ethics (Rawls 1971 is the classic source). In reflective equilibrium, one considers one's judgments, beliefs, or intuitions about particular individual cases (e.g., in such-and-such an emergency would it be bad to push someone in front of an oncoming trolley?). One then compares these judgments about cases with one's judgments about general principles (e.g., act to maximize total human happiness) and one's judgments about other related cases (e.g., in such-and-such a slightly different emergency, should one push the person?). Balance them all together, revising the general principles when that seems best in light of what you regard as secure judgments about the cases, and revising one's judgments about specific cases when that seems best in light of one's judgments about general principles and related cases. Repeat the process, tweaking your judgments about cases and principles until you reach an "equilibrium" in which your judgments about principles and a broad range of cases all fit together neatly. In "wide" equilibrium, you get to toss all other sources of knowledge into the process too -- scientific knowledge, reports of other people's judgments, knowledge about history, etc.

    How could anything be more sensible than that?

    I am inclined to agree that no approach is more sensible. It's the best way to do ethics. But, still, our best way of doing ethics might be irredeemably flawed.

    The crucial problem is this: The process won't bust you out of a bad enough starting point if you're deeply enough committed to that starting point. And we might have bad starting points to which we are deeply committed.

    Consider the Knobheads. This is a species of linguistic, rational, intelligent beings much like us, who live on a planet around a distant star. Babies are born without knobs on their foreheads, but knobs slowly grow starting at age five, and adults are very proud of their knobs. The knobs are simply bony structures, with no function other than what the Knobheads give them in virtue of their prizing of them. Sadly, 5% of children fail to start growing the knobs on their foreheads, despite being otherwise normal. Knobheads are so committed to the importance of the knobs, and the knobs are so central to their social structures, that they euthanize those children. Some Knobhead philosophers ask: Is it right to kill these knobless five-year-olds? They are aware of various ethical principles that suggest that they should not kill those children. And let's suppose that those ethical principles are in fact correct. The Knobheads should, really, ethically, let the knobless children live. But Knobheads are deeply enough committed to the judgment that killing those children is morally required that they are willing to tweak their judgments about general principles and other related cases. "It's just the essence of life as a Knobhead that one has a knob," some say. "It's too disruptive of our social practices to let them live. And if they live, they will consume resources and parental energy that could instead be given to proper Knobhead children." Etc.

    Also consider the Hedons. The Hedons also are much like us and live on a far-away planet. When they think about "experience machine" cases or "hedonium" cases -- cases in which one sacrifices "higher goods" such as knowledge, social interaction, accomplishment, and art for the sake of maximizing pleasure -- they initially react somewhat like most Earthly humans do. That is, their first reaction is that it's better for people to live real, rich lives with risk and suffering than to zap their brains into a constant state of dumb orgasmic pleasure. But unlike most of us, the Hedons give up that judgment after engaging in reflective equilibrium. After considerable reflection, they are captured by the theoretical elegance of simple hedonistic act utilitarianism. As a society, they arrive at the consensus that the best ethical goal would be to destroy themselves as a species in order to transform their planet into a paradise of happy cows. Let's assume that, like the Knobheads, they are in fact ethically wrong to reach this conclusion. (Yes, I am assuming moral realism.)

    It seems possible that wide reflective equilibrium, even ideally practiced, could fail the Knobheads and Hedons. All that needs to be the case is that they are too implacably committed to some judgments that really ought to change (the Knobheads) or insufficiently committed to judgments that ought not to change (the Hedons). To succeed as a method, reflective equilibrium requires that our reflective starting points be approximately well-ordered, in the sense that our stronger commitments are normally our better commitments. Otherwise, reflective tweaking might tend to move practitioners away from, rather than toward, the moral truth.
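    If it helps to see that requirement laid bare, here's a deliberately crude toy model of the tweaking process -- my own construction, of course, nothing like it appears in Rawls. Conflicts are resolved by giving up the weaker commitment, so whether the resulting equilibrium lands on the truth depends entirely on whether the stronger commitments were the better ones:

    def reflective_equilibrium(strengths, incompatible_pairs):
        # strengths: dict mapping each judgment to one's degree of commitment
        # (0 means the judgment has been given up). incompatible_pairs lists
        # judgments that can't both be held at equilibrium.
        changed = True
        while changed:
            changed = False
            for a, b in incompatible_pairs:
                if strengths[a] > 0 and strengths[b] > 0:
                    weaker = a if strengths[a] < strengths[b] else b
                    strengths[weaker] = 0  # tweak away the weaker commitment
                    changed = True
        return {j for j, s in strengths.items() if s > 0}

    # A Knobhead-style starting point: the strongest commitment is the bad one.
    knobhead = {
        "killing the knobless children is required": 0.95,  # deeply held, false
        "it is wrong to kill innocent children": 0.60,      # true, but held more weakly
    }
    print(reflective_equilibrium(knobhead,
        [("killing the knobless children is required",
          "it is wrong to kill innocent children")]))
    # Equilibrium is reached -- around the wrong judgment.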

    Biological and cultural evolution, it seems, could easily give rise to groups of intelligent beings whose starting points are not well-ordered in that way and for whom, therefore, reflective equilibrium fails.

    Of course, the crucial question is whether we are such beings. I worry that we might be.

    **********************************

    Related:

    How Robots and Monsters Might Break Human Moral Systems (Feb 3, 2015)

    How Weird Minds Might Destabilize Human Ethics (Aug 15, 2015)

    [image source]

    Friday, February 24, 2017

    Call for Papers: Introspection Sucks!

    Centre for Philosophical Psychology and European Network for Sensory Research

    Introspection sucks!

    Conference with Eric Schwitzgebel, May 30, 2017, in Antwerp

    This is a call for papers on any aspect of introspection (and not just papers critical of introspection, but also papers defending it).

    There are no parallel sections. Only blinded submissions are accepted.

    Length: 3000 words. Single spaced!

    Deadline: March 30, 2017. Papers should be sent to nanay@berkeley.edu

    [from Brains Blog]

    Thursday, February 23, 2017

    Belief Is Not a Norm of Assertion (but Knowledge Might Be)

    Many philosophers have argued that you should only assert what you know to be the case (e.g. Williamson 1996). If you don't know that P is true, you shouldn't go around saying that P is true. Furthermore, to assert what you don't know isn't just bad manners; it violates a constitutive norm, fundamental to what assertion is. To accept this view is to accept what's sometimes called the Knowledge Norm of Assertion.

    Most philosophers also accept the view, standard in epistemology, that you cannot know something that you don't believe. Knowing that P implies believing that P. This is sometimes called the Entailment Thesis. From the Knowledge Norm of Assertion and the Entailment Thesis, the Belief Norm of Assertion follows: You shouldn't go around asserting what you don't believe. Asserting what you don't believe violates one of the fundamental rules of the practice of assertion.
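    Schematically, in my own informal shorthand (nothing official about the notation):

    (1) One may properly assert P only if one knows P.        [Knowledge Norm]
    (2) One knows P only if one believes P.                   [Entailment Thesis]
    Therefore,
    (3) One may properly assert P only if one believes P.     [Belief Norm]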

    However, I reject the Entailment Thesis. This leaves me room to accept the Knowledge Norm of Assertion while rejecting the Belief Norm of Assertion.

    Here's a plausible case, I think.

    Juliet the implicit racist. Many White people in academia profess that all races are of equal intelligence. Juliet is one such person, a White philosophy professor. She has studied the matter more than most: She has critically examined the literature on racial differences in intelligence, and she finds the case for racial equality compelling. She is prepared to argue coherently, sincerely, and vehemently for equality of intelligence and has argued the point repeatedly in the past. When she considers the matter she feels entirely unambivalent. And yet Juliet is systematically racist in most of her spontaneous reactions, her unguarded behavior, and her judgments about particular cases. When she gazes out on class the first day of each term, she can’t help but think that some students look brighter than others – and to her, the Black students never look bright. When a Black student makes an insightful comment or submits an excellent essay, she feels more surprise than she would were a White or Asian student to do so, even though her Black students make insightful comments and submit excellent essays at the same rate as the others. This bias affects her grading and the way she guides class discussion. She is similarly biased against Black non-students. When Juliet is on the hiring committee for a new office manager, it won’t seem to her that the Black applicants are the most intellectually capable, even if they are; or if she does become convinced of the intelligence of a Black applicant, it will have taken more evidence than if the applicant had been White (adapted from Schwitzgebel 2010, p. 532).

    Does Juliet believe that all the races are equally intelligent? On my walk-the-walk view of belief, Juliet is at best an in-between case -- not quite accurately describable as believing it, not quite accurately describable as failing to believe it. (Compare: someone who is extraverted in most ways but introverted in a few ways might be neither quite accurately describable as an extravert nor quite accurately describable as failing to be one.) Juliet judges the races to be equally intelligent, but that type of intellectual assent or affirmation is only one piece of what it is to believe, and not the most important piece. More important is how you actually live your life, what you spontaneously assume, how you think and reason on the whole, including in your less reflective, unguarded moments. Imagine two Black students talking about Juliet behind her back: "For all her fine talk, she doesn't really believe that Black people are just as intelligent."

    But I do think that Juliet can and should assert that all the races are intellectually equal. She has ample justification for believing it, and indeed I'd say she knows it to be the case. If Timothy utters some racist nonsense, Juliet violates no important norm of assertion if she corrects Timothy by saying, "No, the races are intellectually equal. Here's the evidence...."

    Suppose Timothy responds by saying something like, "Hey, I know you don't really or fully believe that. I've seen how you react to your Black students and others." Juliet can rightly answer: "Those details of my particular psychology are irrelevant to the question. It is still the case that all the races are intellectually equal." Juliet has failed to shape herself into someone who generally lives and thinks and reasons, on the whole, as someone who believes it, but this shouldn't compel her to silence or compel her always to add a self-undermining confessional qualification to such statements ("P, but admittedly I don't live that way myself"). If she wants, she can just baldly assert it without violating any norm constitutive of good assertion practice. Her assertion has not gone wrong in a way that an assertion goes wrong if it is false or unjustified or intentionally misleading.

    Jennifer Lackey (2007) presents some related cases. One is her well-known creationist teacher case: a fourth-grade teacher who knows the good scientific evidence for human evolution and teaches it to her students, despite accepting the truth of creationism personally as a matter of religious faith. Lackey uses this case to argue against the Knowledge Norm of Assertion, as well as (in passing) against a Belief Norm of Assertion, in favor of a Reasonable-To-Believe Norm of Assertion.

    I like the creationist teacher case, but it's importantly different from the case of Juliet. Juliet feels unambivalently committed to the truth of what she asserts; she feels no doubt; she confidently judges it to be so. Lackey's creationist teacher is not naturally read as unambivalently committed to the evolutionary theory she asserts. (Similarly for Lackey's other related examples.)

    Also, in presenting the case, Lackey appears to commit to the Entailment Thesis (p. 598: "he does not believe, and hence does not know"). Although it is a minority opinion in the field, I think it's not outrageous to suggest that both Juliet and the creationist teacher do know the truth of what they assert (cf. the geocentrist in Murray, Sytsma & Livengood 2013). If the creationist teacher knows but does not believe, then her case is not a counterexample to the Knowledge Norm of Assertion.

    A related set of cases -- not quite the same, I think, and introducing further complications -- involves ethicists who espouse ethical views without being much motivated to try to govern their own behavior accordingly.

    [image from Helen De Cruz]