Against warranted deference [tpm]

[Originally posted at Talking Philosophy Magazine blog]

There are two popular ways of responding to criticism you dislike. One is to smile serenely and say, “You’re entitled to your opinion.” This utterance often produces the sense that all parties are faultless in their disagreement, and that no-one is rationally obligated to defer to anyone else. Another is to deny that your critic has any entitlement to their opinion, since they are in the wrong social position to make a justifiable assertion about some matters of fact (either because they occupy a position of relative privilege or a position of relative deprivation). Strong versions of this approach teach us that it is rational to defer to people just by looking at their social position.

A third, more plausible view is that if we want to make for productive debate, then we should talk about what it generally takes to get along. e.g., perhaps we should obey norms of respect and kindness towards each other, even when we disagree (or else run the risk of descending into babel). But even this can’t be right, since mere disagreement with someone over their vital projects (that is, the things they identify with) will always count as disrespect. If someone has adopted a belief in young earth creationism as a vital life project, and I offer a decisive challenge to that view, and they do not regard this as disrespectful, then they have not understood what has been said. (I cannot say “I disrespect your belief, but respect you,” when I full well understand that the belief is something that the person has adopted as a volitional necessity.) Hence, while it is good to be kind and respectful, and I may even have a peculiar kind of duty to be kind and respectful to the extent that it is within my powers and purposes, people who have adopted vital life projects of that kind have no right to demand respect from me insofar as I offer a challenge to their beliefs, and hence to them as practical agents. So the norm of respectfulness can’t guide us, since it is unreasonable to defer in such cases. At least on a surface level, it looks like we have to have a theory of warranted deference in order to explain how that is.

For what it’s worth, I have experience with combative politics, both in the form of the politics of a radically democratic academic union and as a participant/observer of the online skeptic community. These experiences have given me ample — and sometimes, intimate — reasons to believe that these norms have the effect of trivializing debate. I think that productive debate on serious issues is an important thing, and when done right it is both the friend and ally of morality and equity (albeit almost always the enemy of expedient decision making, as reflected amusingly in the title of Francesca Polletta’s linked monograph).

***

A few months ago, one of TPM’s bloggers developed a theory which he referred to as a theory of warranted deference. The aim of the theory was to state the general conditions when we are justified in believing that we are rationally obligated to defer to others. The central point of the original article was to argue that our rational norms ought to be governed by the principle of dignity. By the principle of dignity, the author meant the following Kant-inspired maxim: “Always treat your interlocutor as being worthy of consideration, and expect to be treated in the same way.” One might add that treating someone as worthy of consideration also entails treating them as worthy of compassion.

Without belaboring the details, the upshot of the theory is that you are rational in believing that you have a [general] obligation to defer to the opinions of a group as a whole only when you’re trying to understand the terms of their vocabulary. And one important term that the group gets to define for themselves is the membership of the group itself. According to the theory, you have to defer to the group as a whole when you’re trying to figure out who counts as an insider.

Here’s an example. Suppose Bob is a non-physicist. Bob understands the word ‘physicist’ to mean someone who has a positive relationship to the study of physics. Now Bob is introduced to Joe, who is a brilliant amateur who does physics, and who self-identifies as a physicist. The question is: what is Joe, and how can Bob tell? Well, the approach from dignity tells us that Bob is not well-placed to say that Joe is a physicist. Instead, the theory tells us that Bob should defer to the community of physicists to decide what Joe is and what to call him.

***

I wrote that essay. In subsequent months, a colleague suggested to me that the theory is subject to a mature and crippling challenge. It now seems to me that the reach of the theory has exceeded its grasp.

If you assume, as I did, that any theory of warranted deference must also provide guidance on when you ought to defer on moral grounds, then the theory forces you to consider the dignity of immoral persons. e.g., if a restaurant refuses to serve potential customers who are of a certain ethnicity, then the theory says that the potential customer is rationally obligated to defer to the will of the restaurant.

But actually, it seems more plausible to say that nobody is rationally obligated to defer to the restaurant, for the following reason. If there is some sense in which you are compelled to defer in that situation, it is only because you’re compelled to do so on non-moral grounds. In that situation, it is obvious that there are no moral obligations to defer to the restaurant owners on the relevant issue; if anything, there are moral obligations to defy them on that issue, and one cannot defer to someone on an issue while one is in a state of defiance on that very issue. Finally, if you think that moral duties provide overriding reasons for action in this case, then any deference to the restaurant is unwarranted.

Unfortunately, the principle of dignity tells you the opposite. Hence, the principle of dignity can be irrational. And hence, it is not a good candidate as a general theory of rational deference.

So perhaps, as some commenters (e.g., Ron Murphy) have suggested, the whole project is misguided.

It now occurs to me that instead of trying to lay out the conditions where people are warranted to defer, I ought to have been thinking about the conditions under which it is unwarranted to do so. It seems that the cases I find most interesting all deal with unwarranted deference: we are not warranted in deferring to Joe about who counts as a physicist, and the Young Earth Creationist is not warranted in demanding that I defer to them about Creationism.

On warranted deference [tpm]

[Originally posted at Talking Philosophy Magazine blog]

By their nature, skeptics have a hard time deferring. And they should. One of the classic (currently undervalued) selling points for any course in critical thinking is that it grants people an ability to ratchet down the level of trust that they place in others when it is necessary. However, conservative opinion to the contrary, critical thinkers like trust just fine. We only ask that our trust should be grounded in good reasons in cooperative conversation.

Here are two maxims related to deference that are consistent with critical thinking:

(a) The meanings of words are fixed by authorities who are well informed about a subject. e.g., we defer to the international community of astronomers to tell us what a particular nebula is called, and we defer to them if they should like to redefine their terms of art. On matters of definition, we owe authorities our deference.

(b) An individual’s membership in a group grants them prima facie authority to speak truthfully about the affairs of that group. e.g., if I am speaking to physicists about their experiences as physicists, then all other things equal I will provisionally assume that they are better placed to know about their subject than I am. The physicist may, for all I know, be a complete buffoon. (S)he is a physicist all the same.

These norms strike me as overwhelmingly reasonable. Both follow directly from the assumption that your interlocutor, whoever they are, deserves to be treated with dignity. People should be respected as much as is possible without doing violence to the facts.

Here is what I take to be a banal conclusion:

(c) Members of group (x) ought to defer to group (y) on matters relating to how group (y) is defined. For example, if a philosopher of science tells the scientist what counts as science, then it is time to stop trusting the philosopher.

It should be clear enough that (c) is a direct consequence of (a) and (b).

Here is a claim which is a logical instantiation of (c):

(c’) Members of privileged groups ought to defer to marginalized groups on matters relating to how the marginalized group is defined. For example, if a man gives a woman a lecture on what counts as being womanly, then the man is acting in an absurd way, and the conversation ought to end there.

As it turns out, (c’) is either a controversial claim, or is a claim that is so close to being controversial that it will reliably provoke ire from some sorts of people.

But it should not be controversial when it is understood properly. The trouble, I think, is that (c) and (c’) are close to a different kind of claim, which is genuinely specious:

(d) Members of group (x) ought to defer to group (y) on any matters relating to group (y).

Plainly, (d) is a crap standard. I ought to trust a female doctor to tell me more about my health as a man than I trust myself, or my male barber. The difference between (d) and (c) is that (c) is about definitions (‘what counts as so-and-so’), while (d) is about any old claim whatsoever. Dignity has a central place when it comes to a discussion about what counts as what — but in a discussion of bare facts, there is no substitute for knowledge.

**

Hopefully you’ve agreed with me so far. If so, then maybe I can convince you of a few more things. There are ways that people (including skeptics) are liable to screw up the conversation about warranted deference.

First, unless you are in command of a small army, it is pointless to command silence from people who distrust you. e.g., if Bob thinks I am a complete fool, then while I may say that “Bob should shut up and listen”, I should not expect Bob to listen. I might as well give orders to my cat for all the good it will do.

Second, if somebody is not listening to you, that does not necessarily mean you are being silenced. It only means you are not in a position to have a cooperative conversation with them at that time. To be silenced is to be prevented from speaking, or to be prevented from being heard on the basis of perverse non-reasons (e.g., prejudice and stereotyping).

Third, while intentionally shutting your ears to somebody else is not in itself silencing, it is not characteristically rational either. The strongest dogmatists are the quietest ones. So a critical thinker should still listen to their interlocutors whenever practically possible (except, of course, in cases where they face irrational abuse from the speaker).

Fourth, it is a bad move to reject the idea that other people have any claim to authority, when you are only licensed to point out that their authority is narrowly circumscribed. e.g., if Joe has a degree in organic chemistry, and he makes claims about zoology, then it is fine to point out the limits of his credentials, and not fine to say “Joe has no expertise”. And if Petra is a member of a marginalized group, it is no good to say that Petra has no knowledge of what counts as being part of that group. As a critical thinker, it is better to defer.

Richard Rorty on truth, deference, and assertion

Rorty, Richard. “Putnam and the Relativist Menace,” Journal of Philosophy, vol. XC, no. 9 (1993), p. 443.

One of the more frenetic topics in contemporary epistemology is warranted assertability — i.e., what it is rational to put forward as an assertion. Much of the issue depends on what the whole point of an assertoric speech act is supposed to be, and whether or not the point of assertion can be articulated in terms of constitutive norms. Some folks like Timothy Williamson argue that you are only warranted in asserting things you know, since the whole point of assertion is to transfer knowledge. If you assert something you don’t know, then the listener is entitled to resent the assertion, and (presumably) it is also rational for you to be ashamed of having made the assertion. Others argue that this is a very high bar, and that it makes more sense to say that you are warranted in asserting any reasonable belief. If you assert something as true, without actually knowing it is true, then it might not be rational for you to be ashamed of yourself, nor does it follow that others are entitled to resent you for what you’ve said.

What does Richard Rorty think? Rorty argues that you are only warranted in asserting something so long as what you say is acceptable in a linguistic community. “So all ‘a fact of the matter about whether p is a warranted assertion’ can mean is ‘a fact of the matter about our ability to feel solidarity with a community that views p as warranted.’” (pp. 452–453) Rorty argues that the conditions where it is warranted to assert are relative to how we feel about the views that would be held by an idealized version of our own community. That is the sense in which he’s a relativist. What you say in one speech community might be assertable, and what you say in another would be totally verboten. As far as Rorty is concerned, assertability is a concept that belongs to sociology and not epistemology.

For Rorty, the meaning of “our community” or “our society” is determined by common ground. For example, he uses the term “wet liberalism” to describe the community that Rorty and Putnam share, as if the fact that they both belonged to the liberal tradition was what set them into the same community. (p. 452) (I don’t think that it’s necessary for us to make reference to political ideology when we talk about “our linguistic community”, but it’s at least one candidate.) Whatever criterion you use to pick out the relevant linguistic community, there is a sense that you have got to be in solidarity with that community. (pp. 453–54) The upshot: for the purposes of making a rational assertion, you’ve first got to assume you’re part of a common trust.

Now for the weird, relativist twist: Rorty thinks truth is all about deference to the idealized community of future knowers. If you say, “Rutabagas are red”, then that claim is true just in case a future idealized version of yourself would say it too. Insofar as Rorty is concerned with the notion of truth, he thinks we are interested in whether or not an idealized future society of knowers would affirm or deny what we’ve said. (p. 450) Truth is just a vague term of approbation; and, evidently, trust is the ultimate truth-maker.

The advice model of moral truth and meaning

I recently wrote the first (and, given the lack of interest, the only) instalment of a children’s story. The tale is meant to illustrate some basic ideas in meta-ethical theory in a fun and accessible way. However, on the face of it, I only make allusions to meta-ethics, and don’t really get explicit about what model of meta-ethical theory I am advocating.

But if you’re not satisfied by mere allusions, you can always hover over the images in the story. In this way, you get a few more details on what theories are being illustrated. Here (with some editing for clarity) is what it says, along with some references to who I’m drawing on. I’m advocating what you might think of as an ‘advice model’ of moral semantics and truth.

1. Cognitivism, not emotivism. The meaning of a moral sentence, like “Stealing is wrong”, is not as obvious as it looks on first glance. Let’s assume that some moral sentences are true, and hence that ‘error theory’ is wrong. What is it that makes them so?
2. Existentialism, not realism. There are no spooky moral properties in the world. Hence, moral sentences do not directly refer.
3. Deference, not reference. It is plausible to believe that moral sentences are true or false depending on whether they are spoken with the right authority. Moral sentences are true or false depending upon whether or not the sentences felicitously defer to a moral authority.  [The irreducible sense of authority attached to moral claims is something I learned from H. Sidgwick’s Method of Ethics, though come to think of it, it probably owes more to Aristotle’s Nicomachean Ethics.]
4. Epistemically objective, not subjective. However, it is not always obvious who that authority might be. In case of uncertainty, we might be tempted to say that a moral sentence is true just in case it is uttered by an authority who is giving good advice. The authority of the speaker is determined, in part, by whether or not we have a justified sense that the authority making the claim knows that the advice shall lead to good consequences.  
5. The problem of egoism. But even if moral sentences were about giving good advice to achieve the best outcomes, it isn’t obvious what outcomes count as good ones. For example, a thief can always claim that the maxim, “Stealing is wrong”, does not lead to good consequences for him.  [The argument of the moral knave, of course, belongs to D. Hume.]
6. Grounded in orientation, not psychology. It is obvious that the victims of stealing are the ones who are suffering the consequences. The question is, how can you convince the thief that the suffering of his victims should be a reason not to steal? The answer, I think, is just that moral advice is addressed to a certain kind of audience: namely, people who have a pro-social orientation towards others. Anyone who lacks a pro-social orientation will not have the ability to understand what is said in a moral claim. And at this point, the thief faces a dilemma. If he thinks the moral claim is true, then you might say one of two things to him. a) On the one hand, you might say that the thief implicitly recognizes that the moral claim entails that there is a reason for action. b) And if he persists in recognizing that the moral claim is correct, but disagrees that this entails he has a reason for action, then you might use that as grounds to say that he is unable to understand the point of the moral claim after all. [The use of ‘orientation’ as a technical term is borrowed from R. Geuss, though I don’t know that ‘orientation’ has ever been used as a sui generis category used to block a reduction to beliefs and desires.]
7. Generated by psychology: desire. Admittedly, the connection between morality and the motivations of pro-social people is still pretty obscure. Even if everybody agrees that genuine moral claims provide a reason for action for pro-social people, that says almost nothing about what it takes for moral claims to be effective in bringing about an actual intention to act. If moral claims nominally provide a reason for action, but rarely if ever compel actions among pro-social people, then we might have reason to question whether they provide a reason after all. We need, in other words, to acknowledge the role of sentiments.
8. Generated by psychology: reasons. The role of the sentiments should not be overstated. For while all must agree that reasons aren’t sufficient to bring about intentions to act in pro-social people, sentiments aren’t sufficient to explain the distinctively sane and practical quality of moral claims. If the only thing behind moral claims were expressions of ‘boo’ and ‘hooray’, then you couldn’t make arguments which appeal to evidence, or have rational conversations about what ought to be done. But that is clearly false: not all moral blame is piacular. [The non-cognitive position of ’emotivism’ probably owes the most to the formulation given by A.J. Ayer.]
9. So neither reasons nor passions are individually sufficient to account for the distinctiveness of moral claims, and their efficacy in producing intentions to act. However, they may be jointly sufficient. (Some may argue that reason or passion necessarily precedes the other; but it is more likely that they are mutually supporting. If desires produce an intention to act, we call it eudaimonia; and if the desire comes after the action, then we call it eleutheronomia.) [The division between those forms of moral cognition owes to I. Kant, though I have modified the categories for my own purposes.]
10. Prerequisite of authority. What makes moral claims true or false, we remember, is the degree to which we think they are trustworthy, conferred by the right kind of authority. And when people base their advice on little more than intuition or feeling, untempered by deliberation, we all have a basic sense that this advice is not to be trusted if there are alternatives. The reason it cannot be trusted is that the authority has no integrity; and if they lack integrity, then all other things equal, you don’t have a reason for believing what they say. [The importance of integrity was a trenchant theme in the work of B. Williams]
11. Requirements of integrity. Integrity implies two things. First, it implies that the advice given or accepted has been, in some sense, voluntarily adopted — that it is not enacted by rote. Second, integrity implies sincerity; and sincerity implies non-arbitrariness in one’s convictions. By improving the coherence of your beliefs, you become more distinctive as a person, and any moral claims that you assert begin to take on a veneer of plausibility.  [The idea of wholeheartedness owes to H. Frankfurt.]
12. One of the potential downsides of looking at moral claims as advice is that it raises a difficult question: “Who can you trust, and when?” We can gesture towards a few characteristics, like “reasonable” or “social”, but it seems as though it is a fact of the matter that people tend to trust their familiars and close associates more than they trust strangers. And if that is true, then it is very hard to see how morality could be the sort of thing that applies between strangers. This is a genuine dilemma, since it cannot be taken as an item of faith that the human race shall continue along the track toward a heightened sense of maturity and enlarged sympathies. [J.S. Mill, in Utilitarianism, articulated a kind of optimism about the moral capacity that I’m skeptical of in this passage.]