The three faces of philosophy

[Adapted from a post initially published at Talking Philosophy Magazine blog in 2014.]

Philosophy is a big-tent kind of thing. There is a world of difference between being philosophical, being a proper philosopher, and being a professional philosopher. The first is an action; the second, a kind of vocation; and the third, a description of an academic job.

As far as I can tell, the practice of doing philosophy is intimately related to the state of being philosophical. To do philosophy is to engage in the rational study of some characteristically general subjects (e.g., morality, existence, art, reasoning), for the purpose of increasing understanding and reducing confusion. In the ideal case, being philosophical involves manifesting certain virtues: you must have the right intentions (insightful belief, humble commitments), and you must proceed using a reflective skill-set (rationality in thought, cooperation in conversation). The bare requirement for being philosophical – even when you do it badly – is that you manifest at least some of the right intentions and at least some of the right ways of proceeding.

It is possible to do philosophy without being a proper philosopher or a professional philosopher. This is unusual, as these things go; to see why, compare philosophy with engineering. The requirements for doing actual philosophy are quite a bit lower than the requirements for doing actual engineering. To do philosophy, you have to approach some of the general questions while behaving philosophically; to do engineering, you have to be a proper engineer. So, while it is seldom claimed that Meno was a proper philosopher, we won’t hesitate to say that Meno was seriously doing philosophy with Socrates. In contrast, professional engineers would probably not say that a child playing with Lego has really done some serious engineering. (Not that there’s anything wrong with Lego. If it came to that, I’d be more inclined to say there’s something wrong with engineers.)

And yet, in the vocation of philosophy, there are unusually high barriers to success. A person who does philosophy in a middling way is not a proper philosopher; if you can describe her philosophizing with a cheap metaphor, it is a sign that things may have fallen short of the mark. Proper philosophers do productive work that is worthy of attention, however you would like to cash that out.

Moreover, I would argue that the merits of works in professional philosophy are only obliquely defined in terms of their vocational traits. Professional philosophers are judged according to various things, including their scholarly competence, their intelligence, their papers, peers, prudence, and pedigree. By and large, professional philosophers are not directly tested on whether or not they have philosophical acumen. Indeed, it is rarely stated outright what ‘being philosophical’ amounts to; its place is uneasily marked by opaque approbative terms (which, following Amy Olberding, we might dub ‘top-notchitude’). When you ask professional philosophers to articulate their conceptions of good philosophy, it is sometimes asserted that the professional desiderata overlap substantially with the philosophical traits. And I think there is something to that. But at their worst, professionals will float blissfully along from one encounter to the next, operating on the assumption that whatever they are up to is all aces, and good riddance to the rest of the profession. (Consider that in certain areas, professional citation practices are remarkably ad hoc; and consider that most articles are cited at most once, even when published.) Beneath the wandering skies of top-notchitude, we have the shifting sands of the documentary record, which ostensibly makes up the bulk of the field’s productive output. So there is at least some room for someone who is committed to philosophy as a vocation to look at the profession with a skeptical eye.

But despite the fact that philosophy can be discussed in any of these modes – as proper (the vocation), as professional (the job description), and as philosophizing (the act) – it is instructive to notice that they share certain commonalities. At one end of the spectrum, proper philosophers should be seen to hold the four virtues; at the other end, the worst professional philosophy is evaluated in terms of tropes that imply one or more of these virtues are out of sync. Whatever else we think about philosophy and its fate, we should not be lulled into an identity crisis. I say, again, that philosophy is best understood in terms of the kinds of projects and habits it encourages and cultivates in us, which make us better directed towards making sense of things. This is something to hold onto, something worth protecting, come what may.

Against warranted deference [tpm]

[Originally posted at Talking Philosophy Magazine blog]

There are two popular ways of responding to criticism you dislike. One is to smile serenely and say, “You’re entitled to your opinion.” This utterance often produces the sense that all parties are faultless in their disagreement, and that no one is rationally obligated to defer to anyone else. Another is to deny that your critic has any entitlement to their opinion, on the grounds that they are in the wrong social position to make a justifiable assertion about some matters of fact (either because they occupy a position of relative privilege or a position of relative deprivation). Strong versions of this approach teach us that it is rational to defer to people just by looking at their social position.

A third, more plausible view is that if we want to make for productive debate, then we should talk about what it generally takes to get along: perhaps we should obey norms of respect and kindness towards each other even when we disagree, or else run the risk of descending into Babel. But even this can’t be right, since mere disagreement with someone over their vital projects (that is, the things they identify with) will always count as disrespect. If someone has adopted belief in young-earth creationism as a vital life project, and I offer a decisive challenge to that view, and they do not regard this as disrespectful, then they have not understood what has been said. (I cannot say “I disrespect your belief, but respect you” when I understand full well that the belief is something the person has adopted as a volitional necessity.) Hence, while it is good to be kind and respectful, and I may even have a peculiar kind of duty to be kind and respectful to the extent that it is within my powers and purposes, people who have adopted vital life projects of that kind have no right to demand respect from me insofar as I offer a challenge to their beliefs, and hence to them as practical agents. So the norm of respectfulness can’t guide us, since it is unreasonable to defer in such cases. At least on a surface level, it looks like we need a theory of warranted deference in order to explain how that is.

For what it’s worth, I have experience with combative politics, both in the politics of a radically democratic academic union and as a participant-observer in the online skeptic community. These experiences have given me ample — and sometimes, intimate — reasons to believe that these norms have the effect of trivializing debate. I think that productive debate on serious issues is an important thing, and when done right it is both the friend and ally of morality and equity (albeit almost always the enemy of expedient decision-making, as reflected amusingly in the title of Francesca Polletta’s monograph on the subject).

***

A few months ago, one of TPM’s bloggers developed a theory which he referred to as a theory of warranted deference. The aim of the theory was to state the general conditions under which we are justified in believing that we are rationally obligated to defer to others. The central point of the original article was to argue that our rational norms ought to be governed by the principle of dignity. By the principle of dignity, the author meant the following Kant-inspired maxim: “Always treat your interlocutor as being worthy of consideration, and expect to be treated in the same way.” One might add that treating someone as worthy of consideration also entails treating them as worthy of compassion.

Without belaboring the details, the upshot of the theory is that you are rational in believing that you have a [general] obligation to defer to the opinions of a group as a whole only when you’re trying to understand the terms of their vocabulary. And one important term that the group gets to define for themselves is the membership of the group itself. According to the theory, you have to defer to the group as a whole when you’re trying to figure out who counts as an insider.

Here’s an example. Suppose Bob is a non-physicist. Bob understands the word ‘physicist’ to mean someone who has a positive relationship to the study of physics. Now Bob is introduced to Joe, who is a brilliant amateur who does physics, and who self-identifies as a physicist. The question is: what is Joe, and how can Bob tell? Well, the approach from dignity tells us that Bob is not well-placed to say that Joe is a physicist. Instead, the theory tells us that Bob should defer to the community of physicists to decide what Joe is and what to call him.

***

I wrote that essay. In subsequent months, a colleague suggested to me that the theory is subject to a mature and crippling challenge. It now seems to me that the reach of the theory has exceeded its grasp.

If you assume, as I did, that any theory of warranted deference must also provide guidance on when you ought to defer on moral grounds, then the theory forces you to consider the dignity of immoral persons. For example, if a restaurant refuses to serve potential customers of a certain ethnicity, then the theory says that the potential customer is rationally obligated to defer to the will of the restaurant.

But actually, it seems more plausible to say that nobody is rationally obligated to defer to the restaurant, for the following reason. If there is some sense in which you are compelled to defer in that situation, it is only because you are compelled to do so on non-moral grounds. In that situation, it is obvious that there are no moral obligations to defer to the restaurant owners on the relevant issue; if anything, there are moral obligations to defy them on that issue, and one cannot defer to someone on an issue while one is in a state of defiance on that very issue. Finally, if you think that moral duties provide overriding reasons for action in this case, then any deference to the restaurant is unwarranted.

Unfortunately, the principle of dignity tells you the opposite. Hence, the principle of dignity can be irrational. And hence, it is not a good candidate as a general theory of rational deference.

So perhaps, as some commenters (e.g., Ron Murphy) have suggested, the whole project is misguided.

It now occurs to me that, instead of trying to lay out the conditions under which people are warranted in deferring, I ought to have been thinking about the conditions under which it is unwarranted to do so. The cases I find most interesting all deal with unwarranted deference: we are not warranted in deferring to Joe about who counts as a physicist, and the young-earth creationist is not warranted in demanding that I defer to them about creationism.

What I Saw in Athens [Repost]

Note: This is a post I originally put up at TPM online.

The World Congress of Philosophy concluded this past Saturday in Athens. This year’s theme was Philosophy as Inquiry and Way of Life. It’s a theme that is tailored to the strengths of the event. For anyone interested in seeing how philosophy is a living and global practice, the Congress is essential. This year’s Congress also played host to a significant number of Big Name Philosophers, and hence was also an attraction for philosophers whose interests are more provincially minded.

While there were plenty of interesting talks that are worth reporting on (both good and not so good), I would prefer to take a moment to make a few personal remarks about what I saw in Athens. Hat tip to commenters at Feminist Philosophers for the idea and encouragement.

***

I arrive on Saturday. It is hot and arid. Looking out of my hotel window, I am at first startled by the view. The landscape looks like an overexposed photograph. The buildings are crumbling and saturated with graffiti.

Greek society is in turmoil, its government put under administration. An unhinged neo-Nazi party known as Golden Dawn is gaining power and popularity. One of my fellow speakers at the Congress tells me about Operation Zeus, a heavy-handed effort to jail ostensibly undocumented migrants at detention centres. Heavily armed officers are stationed near tourist havens and government buildings.

I decide to take a walk. It’s not until I am a few blocks away from my hotel that I notice the barking. I turn around and see that a dog has followed me all the way along my journey. The dog looks as though she is barking at any pedestrians who get too close to me. When I turn to go back to the hotel, the dog races back to reassume her place across the road, presumably to keep watch. My little protector.

The week is beautiful. The hotel is nice, and I feel reasonably safe. The people of Greece are down-to-earth, and Athens glows at night. I see the Acropolis and the temple of Apollo up close. I swallow salt water from the Aegean Sea and wash it down with iced coffee. I am genuinely happy.

Somewhere along the way, I overhear a little girl say, “It’s hot and like a dream.” I know what she means.

But even the best of dreams have a nightmarish quality to them. The people of Greece are understandably angry, and self-aware about their anger. Most cab drivers have harsh things to say about Germany and Angela Merkel. There is also no shortage of acrimony about their own Euro-imposed government, and plenty reserved for the socialist government that led them into the collapse. (As one cab driver who spoke virtually no English memorably repeated: “Boo to Papandreou”.) The people suffer and depend on tourists with Euros.

***

Friday evening. About forty professional philosophers were traipsing merrily around the ruins of the Lyceum. While moseying around, my eye caught on a black rock. I picked it up and cleaned off the grass and dirt. It was thin and long, with a concave blackened surface. The edge had the colour of clay. A shard of ancient pottery.


We should not have been allowed to walk in the pit. There should have been velvet ropes and armed guards and signs, but for whatever reason — and whatever the consequences — we were allowed to walk the grounds.

Standing there in the 34-degree heat, in the dust, listening to cicadas and sprinklers and the bustling of Athens in the background. Eventually, my new Greek friend forced me to return it to the dirt. But for a moment I was immobile, transfixed. It felt right to hold onto that little bit of history for as long as I could.

The sound of an exasperated voice over the speaker system was enough to break me from the reverie. “Please don’t step on the ancient wall,” a droll voice said to some naughty wanderer.

***

I get out of the cab into the heat, clad in a white Canadian hat, a World Congress of Philosophy lanyard around my neck. I look up at the impassive but modern-looking government building — the Kentrikou detention centre. It appears deserted. A few towels hang from the windows, but otherwise it is devoid of life.

Then I pull out my camera and start taking pictures of the empty exterior. At that point a policeman appears out of nowhere and asks me what I’m doing.

I tell him I’m interested in seeing the migrants in the facility. I say I’m writing a story about how Greece is handling the austerity crisis. The guard smiles. “Greece is on fire,” he says. I’m not sure he is referring to the weather.

He radios up and asks permission to let me in, and I am denied entry.

Just then, I look up and see some arms moving in one of the windows. I carefully step back into the street, onto public space, and snap some photos. In the first photo, it looks as though a detainee is showing me a card of some kind. Two faces emerge from behind bars, both visibly happy for my attention.

The fact that I have taken photos of actual detainees seems to have changed the parameters of the situation. At that point, the guard says: “Wait just a moment. Someone is coming to see you to take you upstairs.”

Sure enough, a burly Greek comes down. His hand is on the butt of his pistol. He exchanges words with the guard. Eventually they decide that I’m not a terrorist, and I’m told to follow the burly Greek. I’m led inside. I pull out my camera to take some interior shots, and am immediately told to put it away: “This is a military facility.”

Inside, I meet some bureaucrats who are watching television. I notice little things: a shitty photocopier, a pile of traffic cones. They ask me for my papers. I give them my Canadian driver’s license.

While they decide what to do with me, I’m led into a dirty white room. The room is bare, apart from a table, some benches, and a desk for the cop in charge. There is a measuring tape on one wall and handprints all over the wall behind me. I figure that it is the processing area where migrants have their fingerprints taken.

Not liking the direction in which matters are headed, I quietly remove the memory card from my digital camera and hide it in my pocket, just in case they decide to start confiscating my things.

Eventually I am led back to the bureaucrats. I am told that I need an appointment in order to interview any migrants. I am given a number to call to arrange an appointment. Then I am invited to leave.

I suppose I picked the right place to visit. Later that day, on the other side of the city, a riot broke out at the Amygdaleza detention centre.

***

I see my protector dog again that day, this time up close. Her eyes are bloodshot to the point where they look like they are bleeding. She lies in the street, baking in the hot sun. I pour some water for her, and she doesn’t move. I worry that she might be dying.

On warranted deference [tpm]

[Originally posted at Talking Philosophy Magazine blog]

By their nature, skeptics have a hard time deferring. And they should. One of the classic (currently undervalued) selling points for any course in critical thinking is that it grants people the ability to ratchet down the level of trust they place in others when necessary. However, conservative opinion to the contrary, critical thinkers like trust just fine. We only ask that our trust be grounded in good reasons offered in cooperative conversation.

Here are two maxims related to deference that are consistent with critical thinking:

(a) The meanings of words are fixed by authorities who are well informed about a subject. For example, we defer to the international community of astronomers to tell us what a particular nebula is called, and we defer to them if they should like to redefine their terms of art. On matters of definition, we owe authorities our deference.

(b) An individual’s membership in a group grants them prima facie authority to speak truthfully about the affairs of that group. For example, if I am speaking to physicists about their experiences as physicists, then, all other things being equal, I will provisionally assume that they are better placed to know about their subject than I am. The physicist may, for all I know, be a complete buffoon. (S)he is a physicist all the same.

These norms strike me as overwhelmingly reasonable. Both follow directly from the assumption that your interlocutor, whoever they are, deserves to be treated with dignity. People should be respected as much as is possible without doing violence to the facts.

Here is what I take to be a banal conclusion:

(c) Members of group (x) ought to defer to group (y) on matters relating to how group (y) is defined. For example, if a philosopher of science tells the scientist what counts as science, then it is time to stop trusting the philosopher.

It should be clear enough that (c) is a direct consequence of (a) and (b).
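Spelled out, the derivation runs something like this (my schematic reconstruction; the middle premise is implicit in the original wording):

$$
\begin{aligned}
&\text{by (b):} && \text{members of group } (y) \text{ are the prima facie authorities on the affairs of } (y);\\
&\text{implicitly:} && \text{the definition and membership of } (y) \text{ are among the affairs of } (y);\\
&\text{by (a):} && \text{on matters of definition, we owe the relevant authorities our deference;}\\
&\therefore\ \text{(c):} && \text{members of } (x) \text{ ought to defer to } (y) \text{ on how } (y) \text{ is defined.}
\end{aligned}
$$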

Here is a claim which is a logical instantiation of (c):

(c’) Members of privileged groups ought to defer to marginalized groups on matters relating to how the marginalized group is defined. For example, if a man gives a woman a lecture on what counts as being womanly, then the man is acting in an absurd way, and the conversation ought to end there.

As it turns out, (c’) is either controversial, or so close to being controversial that it will reliably provoke ire from some sorts of people.

But it should not be controversial when it is understood properly. The trouble, I think, is that (c) and (c’) are close to a different kind of claim, which is genuinely specious:

(d) Members of group (x) ought to defer to group (y) on any matters relating to group (y).

Plainly, (d) is a crap standard. I ought to trust a female doctor to tell me more about my health as a man than I trust myself, or my male barber. The difference between (d) and (c) is that (c) is about definitions (‘what counts as so-and-so’), while (d) is about any old claim whatsoever. Dignity has a central place when it comes to a discussion about what counts as what — but in a discussion of bare facts, there is no substitute for knowledge.

***

Hopefully you’ve agreed with me so far. If so, then maybe I can convince you of a few more things. There are ways that people (including skeptics) are liable to screw up the conversation about warranted deference.

First, unless you are in command of a small army, it is pointless to command silence from people who distrust you. For example, if Bob thinks I am a complete fool, then while I may say that “Bob should shut up and listen”, I should not expect Bob to listen. I might as well give orders to my cat for all the good it will do.

Second, if somebody is not listening to you, that does not necessarily mean you are being silenced. It only means you are not in a position to have a cooperative conversation with them at that time. To be silenced is to be prevented from speaking, or to be prevented from being heard on the basis of perverse non-reasons (e.g., prejudice and stereotyping).

Third, while intentionally shutting your ears to somebody else is not in itself silencing, it is not characteristically rational either. The strongest dogmatists are the quietest ones. So a critical thinker should still listen to their interlocutors whenever practically possible (except, of course, in cases where they face irrational abuse from the speaker).

Fourth, it is a bad move to reject the idea that other people have any claim to authority when you are only licensed to point out that their authority is narrowly circumscribed. For example, if Joe has a degree in organic chemistry and he makes claims about zoology, then it is fine to point out the limits of his credentials, but not fine to say “Joe has no expertise”. And if Petra is a member of a marginalized group, it is no good to say that Petra has no knowledge of what counts as being part of that group. As a critical thinker, it is better to defer.

To thine own self be [tpm]

Daniel Little leads a double life as one of the world’s most prolific philosophers of social science and the author of one of the snazziest blogs on my browser start-up menu. Recently, he wrote a very interesting post on the subject of authenticity and personhood.

In that post, Little argues that the very idea of authenticity is grounded in the idea of a ‘real self’. “When we talk about authenticity, we are presupposing that a person has a real, though unobservable, inner nature, and we are asserting that he/she acts authentically when actions derive from or reflect that inner nature.” For Little, without the assumption that people have “real selves” (i.e., a set of deep characteristics that are part of a person’s inner constitution), “the idea of authenticity doesn’t have traction”. In other words: Little is saying that if we have authentic actions, then those actions must issue from our real selves.

However, Little does not think that the real self is the sole source of a person’s actions: “…it is plausible that an actor’s choices derive both from features of the self and the situation of action and the interplay of the actions of others. So script, response, and self all seem to come into the situation of action.”

So, by modus tollens, Little must not think there is any such thing as an authentic action.
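Schematically (my reconstruction of the inference, not Little’s own wording), let $A$ stand for “some actions are authentic” and $R$ for “actions issue from the real self”:

$$
\begin{aligned}
(1)\quad & A \rightarrow R && \text{authentic actions must derive from the real self}\\
(2)\quad & \neg R && \text{actions derive from script, response, and self together, never the real self alone}\\
\therefore\ (3)\quad & \neg A && \text{so there are no authentic actions}
\end{aligned}
$$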

But — gaaah! That can’t be right! It sure looks like there is a difference between authentic and inauthentic actions. When a homophobic evangelical turns out to be a repressed homosexual, we are right to say that their homophobia was inauthentic. When someone pretends to be an expert on something they know nothing about, they are not being authentic. When a bad actor is just playing their part, Goffman-style: not authentic.

So one of the premises has to go. For my part, I would like to take issue with Little’s assertion that the idea of authenticity “doesn’t have traction” if there is no real self. I’d like to make a strong claim: the idea of a ‘real self’ is an absurdity, a non-starter, but all the same, there is a difference between authentic and inauthentic actions. Authenticity isn’t grounded in a ‘real (psychological) self’ — instead, it’s grounded in a core self, which is both social and psychological.


If you ever have a chance to wander into the Philosophy section at your local bookstore you’ll find no shortage of books that make claims about the Real Self. A whole subgenre of the philosophy of the ‘true self’ is influenced by the psychodynamic tradition in psychology, tracing back to the psychoanalyst D.W. Winnicott.

For the Freudians, the psyche is structured by the libido (id), which generates the self-centred ego and the sociable superego. When reading some of the works that were inspired by this tradition, I occasionally get the impression that the ‘real self’ is supposed to be a secret inner beast that lies within you, waiting to surface when the right moment comes. That ‘real self’ could be either the id, or the ego.

On one simplistic reading of Freud, the id was that inner monstrosity, and the ego was akin to the ‘false self’.* On many readings, Freud would like to reduce us all to a constellation of repressed urges. Needless to say (I hope), this reductionism is batty. You have to be cursed with a comically retrograde orientation to social life to think that people are ultimately just little Oedipal machines.

Other theorists (more plausibly) seem to want to say that the ego is hidden beneath the superego — as if the conscience were a polite mask, and the ego were your horrible true face. But I doubt that the ego counts as your ‘real self’, understood in that way. I don’t think that the selfish instincts operate in a quasi-autonomous way from the social ones, and I don’t think we have enough reason to believe that the selfish instincts are developmentally prior to the social ones. Recent research by Michael Tomasello suggests that our pro-social instincts are just as basic and natural as the selfish ones. If that is right, then we can’t say that the ego is the ‘real self’ and the superego is the facade.


All the same, we ought to think that there is such a thing as an ‘authentic self’. After all, it looks as though we all have characteristics that are relatively stable over time, and these characteristics reliably ground our actions in a predictable way. I think it can be useful, and commonsensical, to understand some of these personality traits as authentic parts of a person’s character.

On an intuitive level, there seem to be two criteria that distinguish authentic from inauthentic action. First, drawing on work by Harry Frankfurt, we expect that authenticity should involve wholeheartedness — a kind of settled contentment with certain kinds of actions, beliefs, and orientations towards states of affairs. Second, those traits should be presented honestly, in line with the actual beliefs that the actor has about the traits and where they come from. And notice that both of these ideas, wholeheartedness and honesty, make little or no allusion to Freudian psychology, or to a mysterious inner nature.

So the very idea of authenticity is both a social thing and a psychological thing, not either one in isolation. It makes no sense to talk about an authentic real self hidden in the miasma of the psyche. Being authentic involves doing justice to the way you put yourself forward in social presentation as much as it involves introspective meditation on what you want and what you like.

By assuming that the authentic self is robustly non-social (e.g., something set apart from “responses” to others), we actually lose a grip on the very idea of authenticity. The fact is, you can’t even try to show good faith in putting things forward at face value unless you first assume that there is somebody else around to see it. Robinson Crusoe, trapped on a desert island, cannot act ‘authentically’ or ‘inauthentically’. He can only act, period.

So when Little says that “script, response, and self all seem to come into the situation of action”, I think he is saying something true, but which does not bear on the question of whether or not some action is authentic. To act authentically is to engage in a kind of social cognition. Authenticity is a social gambit, an ongoing project of putting yourself forward as a truth-teller, which is both responsive to others and grounded in projects that are central to your concern.

In this sense, even scripted actions can be authentic. “I love you” is a trope, but it’s not necessarily pretence to say it. [This possibility is mentioned at the closing of Little’s essay, of course. I would like to say, though: it’s more than just possible, it’s how things really are.]

* This sentence was substantially altered after posting. Commenter JMRC, below, pointed out that it is probably not so easy to portray Freud in caricature.

Gun Rights and Tyranny [tpm]

[Originally posted at Talking Philosophy Magazine blog]

I’d like to present a quick little philosophical coda to Mike’s latest post on gun rights and tyranny by outlining a difficult puzzle.

Consider the following propositions:

1. A state is any organization that successfully upholds a monopoly on the legitimate use of force.

2. It is legitimate to defend against tyranny by the use of force.

Both premises look to be pretty plausible. The first is Max Weber’s definition of the state, which is widely influential. The second is a commonsense construal of the Second Amendment, once you formulate it in a way that is consistent with the Constitution [and other founding documents].

But what follows from these two premises? Well, anyone who makes a legitimate claim to the use of force, and who is not a part of the government or acting as a party to its laws, cannot help but be seeking to disrupt the state’s monopoly on the legitimate use of force. Hence, those who recognize the validity of this commonsense reading of the Second Amendment are de facto advocates of vigilantism. Even if you are a centrist or left-libertarian who advocates gun control, so long as you recognize that (2) is a plausible reading of the Constitution, you are stuck moonlighting as an advocate for vigilantism. This is remarkable.
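To display the structure of the puzzle (again, a schematic rendering of my own, not part of the original premises):

$$
\begin{aligned}
(1)\quad & \text{if } s \text{ is a state, then } s \text{ holds a monopoly on the legitimate use of force} && \text{(Weber)}\\
(2)\quad & \text{a private citizen } c \text{ may legitimately use force against tyranny} && \text{(Second Amendment)}\\
(3)\quad & c \text{ is neither the state nor a party acting under its laws}\\
\therefore\quad & \text{the legitimacy claimed in (2) stands outside the monopoly in (1), i.e., vigilantism.}
\end{aligned}
$$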

Obviously, many of us do not want to come to that conclusion. So there must be something wrong with one or both of these premises. Perhaps (1) is a vulgar statist formulation which pretends that ‘legitimacy’ equals moral rightness. So you might think that the argument trades on an ambiguity in the meaning of the term ‘legitimate’ between (1) and (2). But this critique does not seem destined for success. ‘Legitimacy’ seems to be a non-moral normative phrase, meaning something like ‘is commonly recognized to hold a certain status’.

It’s a distressing and difficult puzzle, made all the more frustrating by the fact that it is so easy to formulate. Needless to say, quite a bit rides on the answer to the question. But whatever the answer is, the first step in a good conversation is for everybody to recognize a problem as a problem.

Carpe Diem & the Longer Now [tpm]

[Originally posted at Talking Philosophy Magazine blog]

So here’s the thing: I like utilitarianism. No matter what I do, no matter what I read, I always find that I am stuck in a utility-shaped box. (Here’s one reason: it is hard for me to applaud moral convictions if they treat rights as inviolable even when the future of the right itself is at stake.) But trapped in this box as I am, sometimes I put my ear to the wall and hear what people outside the box are doing. And the voices outside tell me that utilitarianism is alienating and overly demanding.

I’m going to argue that act-utilitarianism is only guilty of these things if fatalism is incorrect. If fatalism is right, then integrity is nothing more than the capacity to make sense of a world in which we possess only limited information about the consequences of our actions. If I am right, then integrity does not have any other role in moral deliberation.

~

Supposedly, one of the selling points of act-utilitarianism is that it requires us to treat people impartially, by forcing us to examine a situation from some third-person standpoint and apply the principle of utility in a disinterested way. But if it were possible to do a definitive ‘moral calculus’, then we would be left with no legitimate moral choices to make. Independent judgment would be supplanted with each click of the moral abacus. It is almost as if one would need to be a Machiavellian psychopath in order to remain so impartial.

One consequence of being robbed of legitimate moral alternatives is that you might be forced to do a lot of stuff you don’t want to do. For instance, it looks as though detachment from our integrity could force us into the squalor of excessive altruism, where we must give away anything and everything we own and hold dear. Our mission would be to maximize utility by doing good works in a way that keeps our own happiness just above some subsistence minimum while improving the lot of people who are far away. Selfless asceticism would be the order of the day.

In short, it seems like act-utilitarianism is a sanctimonious schoolteacher that not only overrides our capacity for independent moral judgment, but also obliges us to sacrifice almost all of our more immediate interests for interests that are more remote — those of the people of the future and the people geographically distant.

Friedrich Nietzsche, Samuel Scheffler, Bernard Williams: here are some passionate critics who have argued against utility in the above-stated ways. And hey, they’re not wrong. The desire to protect oneself, one’s friends, and one’s family from harm cannot simply be laughed away. Nietzsche can always be called upon to provide a mendacious quote: “You utilitarians, you, too, love everything useful only as a vehicle for your inclinations; you, too, really find the noise of its wheels insufferable?”

Well, it’s pretty clear that at least one kind of act-utilitarianism has noisy wheels. One might argue that nearer goods must be considered to have the same value as farther goods; today is just as important as tomorrow. When stated as a piece of practical wisdom, this makes sense: grown-ups need to have what Bentham called a ‘firmness of mind’, meaning that they should be able to delay gratification in order to get the most happiness out of life. But a naive utilitarian might take this innocent piece of wisdom, twist it into a pure principle of moral conduct, and hence produce a major practical disaster.

Consider the sheer number of things you need to do in order to make far-away people happy. You need to clamp down on all possible unintended consequences of your actions, and spend the bulk of your time on altruistic projects. Now, consider the limited number of things you can do to make the small number of people closest to you happy. You can do your damnedest to seize the day, but presumably, you can only do so much to make your friends and loved ones happy without making yourself miserable in the process. So, all things considered, it would seem as though the naive utilitarian has to insist that we all turn into slaves to worlds other than our own — the table is tilted in favor of the greatest number. We would have to demand that we give up on today for the sake of the longer now.

But that’s not to say that the utilitarian has been reduced to such absurdities. Kurt Baier and Henry Sidgwick are two philosophers who have explicitly defended a form of short-term egoism, on the grounds that individuals are better judges of their own desires. Maybe utilitarianism isn’t such an abusive schoolteacher after all.

Why does act-utilitarianism seem so onerous? Well, if you’ve ever taken an introductory ethics class, you’ve heard some variation on the same story. First you’ll be presented with a variety of exotic and implausible scenarios, involving threats to the wellbeing of conspecifics caught in a deadly Rube Goldberg machine (trolleys, organ harvesting, fat potholers, ill-fated hobos, etc.). When the issue is act-utilitarianism, the choice will always come down to two options: either you kill one person, or a greater number of others will die. In the thought-experiment, you are possessed of the power to avert disaster, and are by hypothesis acquainted with perfect knowledge of the outcomes of your choices. You’ll then be asked about your intuitions about what counts as the right thing to do. Despite all the delicious variety of these philosophical horror stories, there is always one constant: they tell you that you are absolutely sure that certain consequences will follow if you perform this or that action. So, e.g., you know for sure that the trolley will kill the one and save the five, you know for sure that the forced transplant of the hobo’s organs will save the souls in the waiting room (and that the police will never charge you with murder), and so on.

This all sounds pretty ghoulish. And here’s the upshot: it is not intuitively obvious that the right answer in each case is to kill the one to save the five. It seems as though there is a genuine moral choice to be made.

Yet when confronted with such thought-experiments, squadrons of undergraduates have moaned: ‘Life is not like this. Choices are not so clear. We do not know the consequences.’ Sophomores are in a privileged position to see what has gone wrong with academic moralizing, since they are able to view the state of play with fresh eyes. For it is a morally important fact about the human condition that we don’t know much about the future. By imagining ourselves in a perfect state of information, we alienate ourselves from our own moral condition.

Once you see the essential disconnect between yourself and the hypothetical actor in the thought-experiment, the blinders ought to fall from your eyes. It is true that I may dislike pulling the switch to change the trolley’s track, but my moral feelings should not necessarily bear on the question of what my more perfect alternate would need to do. Our so-called ‘moral intuitions’ only make a difference to the actual morality of the matter on the assumption that our judgments can reliably track the intuitions of our theoretical alternate — an alternate who knows the things they know, right on down to the bone. But this assumption is a thing that needs to be argued for, not assumed.

While we know a lot about what human beings need, our most specific knowledge about what people want is limited to our friends and intimates. That knowledge makes the moral path all the more clear. When dealing with strangers, the range of moral options is much wider than the range of options at home; after all, people are diverse in temperament and knowledge, scowl and shoe size. Moral principles arise out of uncertainty about the best means of achieving the act-utilitarian goal. Strike out uncertainty about the situation, and the range of your moral choices whittles down to a nub.

So if we had perfect information, then there is no doubt that integrity would go by the board. But then, that’s not the fault of act-utilitarianism. After all, if we knew everything about the past and the future, then any sense of conscious volition would be impossible. This is just what fatalism tells us: free will and the angst of moral choice are byproducts of limited information, and without a sense of volition the very idea of integrity could not even arise.

Perhaps all this fatalism sounds depressing. But here’s the thing — our limited information has bizarrely romantic implications for us, understood as the creatures we are. For if we truly are modest in our ability to know and process information, and the rest of the above holds, then it is absurd to say, as Nietzsche does, that “whatever is done from love always occurs beyond good and evil”. It is hard to conceive of a statement that could be more false. For whatever is done from love, from trust and familiarity, is the clearest expression of both good and evil.

~

Look. Trolley-style thought-experiments do not show us that act-utilitarianism is demanding. Rather, they show us that increased knowledge entails increased responsibility. Since we are the limited sorts of creatures that we are, we need integrity, personal judgment, and moral rules to help us navigate the wide world of moral choice. If the consequentialist is supposed to be worried about anything, the argument against them ought to be that we need the above-stated things for reasons other than as a salve to heal the gaps in what we know.