It is our lot to reason why

Abstract. An account of the nature of inference should satisfy the following demands. First, it should not be grounded in unarticulated stipulations about the proprieties of judgment; second, it should explain anomalous inferences, like borderline cases and the Moorean phenomenon; and third, it should explain why Carroll’s parable and tonk are not inferences. The aim of this paper is to demonstrate that the goodness-making approach to inference can make sense of anomalous inferences just in case we assume the proper functioning of two specific kinds of background capacities: the integration of information during categorization, and norms of disclosure which govern the conditions for assent. To the extent that inference depends on these background capacities, its normativity is best seen as partly deriving from facts about our cognitive lives, not from mere stipulation.


Notes on J.J. Thomson’s “Normativity”

Reading Thomson’s ‘Normativity’. I have questions.

Consider the following statement:

[1.] I prefer watching ‘Game of Thrones’ over watching ‘Penny Dreadful’.

Many preferences originate in intuitions, and in that sense, begin their life as [1] or something like it. That’s fine, sort of, but there are better ways of talking about my preference. e.g.:

[2.] I prefer watching ‘Game of Thrones’ over watching ‘Penny Dreadful’, insofar as these are both good dark fantasy shows.

[3.] I prefer… insofar as these are the only good things on television right now.

[4.] I prefer… insofar as they are good dark fantasy shows considering what is on right now.

[5.] I prefer… insofar as their opening themes are good to dance to.

(Each of these involves a different kind of standard for evaluation, ranked roughly from least surprising to most surprising.) [2-5] are reflective preferences, not just intuited ones. Reflective preferences differ from intuited preferences in that they are tagged with some evaluative standard by which one thing is to be ranked in relation to another. That standard is laid out after the phrase ‘insofar as’. Which is more useful for a practical theory of decision-making, intuitive or reflective preferences? Well, taken at face value, the reflective preferences are more informative and in a sense more rational than [1].

But — not so fast: [2-5] are also potentially inconsistent with each other. So, e.g., maybe I dislike dark fantasy shows, but hate everything else on TV: in that case, [3] would be a false statement, but [2] and [4] would be true ones. And even if [2-4] were true, [5] could be false, just in case Penny Dreadful’s opening theme is actually better to dance to than Game of Thrones’s.

One way to resolve the question in favor of the reflective mode of articulating preferences is to say that each time you introduce a new standard of evaluation (i.e., whatever follows the “…insofar as…”), you’re actually making a brand new list, which is itself eventually broken down into intuited preferences. e.g.:


1. Penny Dreadful

2. Game of Thrones

3. Shakira


1. Game of Thrones

2. Penny Dreadful

3. True Blood

And then you can put these very lists on a global list of preferences that looks like this:





At that point, you will seemingly have given up on the intuited sense of [1], since Game of Thrones doesn’t just exist in some amorphous BETTER THAN relation to Penny Dreadful. Instead you’ll have replaced the intuited preference with something more rational and informative. Indeed, at this point, once we have all these standards of evaluation under our belt, it is tempting to say that [1] is not a rational claim about the state of my preferences at all. But that’s too quick, because [1] could just be an elliptical version of one of [2-5]. And so, one might say, it is only correct to say that [1] is not a rational claim about the state of my preferences so long as there is no rational formula for picking out a more precise context of evaluation.
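This bookkeeping can be put in toy-formal terms. The sketch below is entirely my own illustration (the standard labels are assumptions; the rankings borrow from the lists above): an intuited preference like [1] is a bare ordered pair, while a reflective preference tags a ranking with the standard that follows “insofar as” — and different standards can disagree.

```python
# Toy sketch (illustrative, not from the post): intuited vs. reflective
# preferences.

# An intuited preference, like [1], is just a bare ordered pair.
intuited = ("Game of Thrones", "Penny Dreadful")

# A reflective preference, like [2]-[5], tags a ranking with the
# evaluative standard that follows "insofar as". The labels here are
# my own assumptions.
reflective = {
    "good dark fantasy shows": ["Game of Thrones", "Penny Dreadful"],
    "opening themes good to dance to": ["Penny Dreadful", "Game of Thrones"],
}

def prefers(a, b, standard):
    """True iff a outranks b on the list tagged with the given standard."""
    ranking = reflective[standard]
    return ranking.index(a) < ranking.index(b)

# The standards can pull in opposite directions, which is the potential
# inconsistency among [2-5] noted above:
print(prefers("Game of Thrones", "Penny Dreadful", "good dark fantasy shows"))
# prints True
print(prefers("Game of Thrones", "Penny Dreadful", "opening themes good to dance to"))
# prints False
```

The point of the sketch is just that once every preference is indexed to a standard, the bare BETTER THAN relation of [1] has no obvious place left in the bookkeeping.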

So, one might say that there is always a strict default context, in which we have to interpret sentences like [1] in their least surprising form, e.g., [2]. Will this work? I have my doubts. That is, intuitively, I doubt that charity alone will help us identify some context-invariant mechanical *formula* that tells us what the most rational default interpretation should look like. I suspect that, at best, when confronted with intuitive preference-statements like [1], we only have *defeasible strategies for interpreting* them in terms of one or more of [2-5].

Moving on to Chapter IV: “Suppose a person asks us, “Was St. Francis better than chocolate?” What can he mean? In what respect ‘better than’ does he mean to be asking whether St. Francis was better in that respect than chocolate? If he says, “No, no, what I am asking is not for a certain respect R whether St. Francis was better in respect R than chocolate. What I am asking is just whether St. Francis was (simply) better than chocolate,” then he hasn’t asked us any question, so it is no wonder we can’t answer it.” She suggests that there is a bifurcation between attributing simple superiority to something and attributing superiority in some respect.

I do not know why these are incompatible modes of evaluation. Saying that something is simply better than another thing does not seem to imply that there is no respect R in which an evaluation can be made. There are obviously salient respects in which one might compare the value of chocolate and St. Francis (say, when comparing the value of vulgar hedonism vs. asceticism). But when one says one is simply better than the other, they aren’t wiping these respects off the map, denying them up and down and sideways. Instead, it seems to imply that it is not rationally necessary to specify the respect R in the course of answering the question — that the respects are being left implicit and uncommitted. I think what she means to say is that it is meaningless to ask, “Is chocolate worse than St. Francis, given that there is no respect in which they might be compared?” But that’s kind of obvious.

Sophiboles: or, cases of cooperative misleading

I am still thinking about misleading and truth after an interesting and thought-provoking talk by Jennifer Saul last week. Many of my intuitions have gained form and structure from her presentation. In it, she argued that misleading and lying are not (all other things equal) morally different. Importantly, Saul suggested that misleading can be different from lying in one special subset of cases — effectively, in those contexts where the listener can be reasonably expected to have special duties to scrutinize the testimony before them, owing to the adversariality of the context and the capacity of the listener to engage in critical inquiry.

I have long had reservations about academics and the subject of truth-telling. So, here’s an essay from 2006: (…/who-needs-sophistry-…/) In it, I argued that the public assertion of certain kinds of exaggeration is sometimes both faultless and laudable. Over the past decade I have had plenty of occasion to have that thesis challenged, but am generally unpersuaded by those challenges.

In that essay I argue that philosophers and scientists frequently engage in a kind of wise exaggeration, which I have mentally given the label of “sophiboles”. That is, we faultlessly assert things in a black-and-white bivalent fashion, when the closest justified belief is much more complex. For example: according to his critics, Galileo was guilty of asserting a sophibole when he decided to cast aside fictionalist and probabilist readings of the evidence; and for what it’s worth, I’m inclined to say that he is guilty of doing right. (Anyway, this is my simplistic conception of the history, and reminds me I really ought to read Alice Dreger’s 2015 book, “Galileo’s Middle Finger”. But for now it’ll suffice as a toy case.)

Are sophiboles cases of misleading? Much depends on how you define “misleading”. To me, “misleading” involves distracting someone away from apprehending a true proposition that is worth caring about in a conversational context, and hence to cue belief in a falsehood, or distract away from a truth, without explicitly thereby asserting a falsehood. (It is hard not to include reference to what conversation partners care about if we are to assess them in terms of the cooperative maxims.)

Unlike most cases of misleading, sophiboles constructively focus our attention upon *true* beliefs worth caring about, and are not directed towards the malicious creation of false beliefs. For Galileo, the point was the truth of heliocentrism as a model of the solar system, not the inculcation of a false belief about the solar system. Suffice it to say, Galileo did not lie in any of this; he did not assert a falsehood. Moreover, his intention was to lead us to a truth about the world, not to lead us to a falsehood.

But that will not save his sophibole from being a case of misleading, since people in a cooperative conversation can be concerned with different things, and they can disagree about the truths worth caring about in such contexts, so long as those cross-purposes are jointly acknowledged. So the Church — wanting Galileo to tone down his rhetoric — encouraged him to adopt a probabilist or fictionalist vernacular. Those little qualifiers (i.e., “In all probability, p…”) mattered to them. For them, Galileo was misleading away from the epistemic, or second-order, status of his claims. Galileo’s actual heliocentric claims were true, but (according to his critics) the realist statement of those claims misled people about their form of justification, and in that sense distracted people from an important truth about the limits of our knowledge. Galileo was misleading about something worth caring about.

To be sure, Galileo’s highly politicized insistence on realist rhetoric soon evoked an adversarial context. And, FWIW, I would even argue that he was right to be adversarial: while neither party departed from intellectual good faith, the Church’s epistemic concerns were not as worth caring about as the realist ones. (There’s that famous middle finger of his.)

But that’s a historical contingency. My point is that we should be able to see the two parties continuing to accuse each other of misleading even if they had been able to maintain a cooperative dialogue. And so misleading, at least in the form of sophiboles, is generally not so bad as lying.

“Ought implies can, if ought implies must”

Here is an argument. If ought implies can, then cannot implies not ought. Not ought implies either permission or ought not. But permission implies can, since one cannot permit what is impossible; so cannot also rules out permission. The only disjunct left standing is ought not: cannot implies ought not. If can is a descriptive term, then this shows how to derive an ought from an is.
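Laid out schematically (my own reconstruction of the steps, writing $O$ for ought, $P$ for permitted, $C$ for can, and $A$ for an arbitrary act):

```latex
\begin{align*}
&\text{(1)}\quad O(A) \rightarrow C(A) &&\text{ought implies can (premise)}\\
&\text{(2)}\quad \neg C(A) \rightarrow \neg O(A) &&\text{contraposing (1)}\\
&\text{(3)}\quad \neg O(A) \rightarrow \bigl(P(A) \lor O(\neg A)\bigr) &&\text{not-ought is permission or ought-not}\\
&\text{(4)}\quad P(A) \rightarrow C(A) &&\text{one cannot permit the impossible}\\
&\text{(5)}\quad \neg C(A) \rightarrow \neg P(A) &&\text{contraposing (4)}\\
&\text{(6)}\quad \neg C(A) \rightarrow O(\neg A) &&\text{from (2), (3), and (5)}
\end{align*}
```

Since a descriptive premise of the form $\neg C(A)$ delivers the normative conclusion $O(\neg A)$ at step (6), the is-ought bridge rests entirely on accepting premises (1), (3), and (4).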

[Update from August 21]

Wesley Buckwalter brought it to my attention that, actually, very few subjects will agree that moral ought implies moral can. This appears to be a reason to dispute the argument above.

I think my conviction that ‘ought implies can’ has been undermined in part because there is an ambiguity in the word ‘ought’. It seems to me that ‘ought’ implies either ‘should’ or ‘must’. While ‘should’ only implies ‘sometimes can’, ‘must’ implies ‘can’ in some broader sense. And the moralist tends to be more interested in imperative-like statements (musts over shoulds).

Intuition and faultless assertion

Abstract. Here are three compelling proposals. a) Social ontology of speech acts: since the constitutive norms that give structure to our basic linguistic institutions are self-imposed restrictions upon the practice of participants, those linguistic institutions must be highly sensitive to the attribution of fault. b) Intuition-skepticism: intuitions do not count as evidence of the truth of an associated proposition. c) The knowledge norm: knowledge is constitutive of assertion. In what follows I demonstrate that the three theses are incompatible. For my part, I think we should abandon (c).

Biographical note: This short (2013) paper met a sudden death after I was advised by an editor at a flagship journal in professional philosophy that I could not take conclusions published by well-known and respected philosophers, Timothy Williamson and Margaret Gilbert, as premises in my own argument. The editor’s judgment strikes me as perverse and unhelpful, and it was dispiriting (which was presumably the intended effect). As a result I shelved the draft, and it quickly became obsolete as other authors published on similar themes in other venues.


Trust as a truth-maker [tpm]

Daniel Everett entered Brazil as a Christian missionary. Then he encountered the Piraha people, a community that is indigenous to Brazil, and lived among them for a while. And as a result of encountering the Piraha, he lost his faith.

The Piraha are interesting for a great many reasons, foremost among them being that their culture is based on immediate experience. Everett describes them as “the ultimate empiricists”, because they have no respect for explanations of remote facts. For example, when Everett attempted to convey stories of Jesus and the sermon on the mount, his efforts were laughed off as credulous or delusional, since Everett had not witnessed the sermon firsthand.

This is just to say that, for all intents and purposes, the Piraha endorse a kind of evidentialism. Evidentialism is the idea that we have a responsibility to only believe things in proportion to the evidence. Compare that to the missionary Everett, who was a fideist — meaning, he believed certain religious claims were true on the basis of choice, commitment, and faith.

In a sense, the difference between the missionary Everett and the Piraha echoes an argument in epistemology. W.K. Clifford, a sabre-rattling epistemologist from yesteryear, argued that it is a sin against humankind to believe something on insufficient evidence: to be deluded is to be irrational, and worse. Pragmatist philosophers like William James bemoaned Clifford’s hellfire, and defended the idea that an ethical belief can be supported by force of will. Contemporary evidentialists like Richard Feldman and Earl Conee have goals that are slightly more modest than those Clifford had. Feldman and Conee argue that it is epistemically mistaken to believe out of proportion to the evidence.

I am an evidentialist, in the sense that I think evidentialism is platitudinous — it is surely correct to say that all objective knowers ought to apportion their beliefs to the evidence. But I also think that evidentialism is relatively trivial, because evidence and volition are not mutually exclusive. Following the constructionism of John Searle, it turns out that sometimes you can will yourself into believing a proposition, and — bizarrely — trust counts as strong evidence in favor of the truth of that belief.


A pastor stands before his assembled flock at Mass. The pastor has noticed that over the past few weeks donations in the collection plate have been diminishing. For a brief moment, he suspects there may be a thief around. On this particular day, the pastor has privately observed that a particular teenage boy has snatched some donations from the plate as it makes its rounds. A calm immediately passes over the pastor’s mind. For though the pastor knows that the boy is prone to mischief, the pastor also knows that he is otherwise impressionable and pious. Now suppose the pastor, in his sermon, mentions the mystery of the diminishing funds. In the midst of his speech, he sincerely endorses this proposition:

  1. I know that no-one who is part of this congregation is a thief in their heart.

The pastor says this with all appropriate showmanship – credulous intonations, sweeping gestures – in order to convey his belief that the congregation is made up of virtuous souls. But since the pastor has observed the boy taking the money, we should say that the pastor has made an utterance that is contrary to the external evidence, and is unjustified.

Let (t-1) be the belief in (1) prior to the utterance, and let (t-2) be the belief in (1) after the utterance.

Insofar as we think that (1) is the expression of the pastor’s own sincere beliefs, we might think that the utterance is faulty. Strictly speaking, his prior belief (t-1) is a delusion, since it is a belief that is directly contrary to the external evidence.

Yet the effect of the pastor’s words and bearing is as if it had conveyed a secret message to the boy: I know what you have done, and now you know that I know. As a result of the pastor’s utterance, the boy quietly defers to the pastor. Ashamed at his petty crime, the boy resolves to never steal again, and immediately returns the funds to the plate.

What is remarkable about this case is that simply by uttering (1), the pastor has at the very same moment (with the cooperation of the intended audience) brought about the state of affairs described by (1). The pastor’s prior delusion (t-1) suddenly transformed into an objective fact of the matter after it had been expressed (t-2). The utterance (1) is very much like what John Searle called a status function declaration. The assertion is true because the pastor represented it as true, and it was taken as true by the boy.

In short, the pastor made up the facts — and he got away with it. And “getting away with it” for the right sorts of reasons is all that is required to make the claim true.


In the above example, trust is the thing that makes (1) true. But of course, this is not a feature of all — or even most — evidential claims. No matter how much you trust a homeopath, trust alone will not make their snake oil work.

I think there is quite a lot to recommend the idea that trust can make some claims true. For one thing, it makes sense of the tenaciousness of systematic illusions — the illusions involved in organized religion, for instance — in such a way that we are capable of attributing rationality to them at some level. (Since the presumption of rationality is essential to social scientific explanations, this is only bad news for the cynic.) For another thing, it gives an account of how effective threats to those institutions pose a rational existential crisis in those who buy into them. As the Catholic Church has learned in Ireland, breaches of trust can be both morally outrageous and world-breaking.

(And to their credit, some ancient institutions will occasionally recognize the theoretical limits of their supposed magisteria. For instance, according to Catholic dogma, even the Pope’s infallibility is limited to its use ex cathedra. So if Mr. Ratzinger were to declare that the Earth has sixteen moons, then he would not be speaking from the chair of Peter, and hence not thereby saying something true.)

So there’s no need to worry that recognizing trust as a truth-maker will lead to an epistemic disaster, and there are some good reasons to think that it makes sense of how the social world works. But even so, this is still a disturbing line of argument. For any free-thinking person who is not dead from the neck down, the idea that authorities can just make facts up from out of nowhere is a complete and utter scandal. And the above argument confounds the initial motivation for evidentialism, which is to reject the idea that wishful thinking can be conducive to rationality.

So the disturbed evidentialist might explain the pastor’s story by saying that at any particular moment in time, trust is never a part of the evidence. The idea is that the prior belief (t-1) and the subsequent belief (t-2) can only be judged on their own terms, and not compared to one another. As such, it would turn out that (t-1) is just the pastor’s delusion, and (t-2) is made true by the decision of the boy — in both cases, trust is not the truth-maker. In other words, the account would have to be synchronic (at one time), not diachronic (across time). This is consistent with what Feldman suggests in his essay “The Ethics of Belief”, when he claims that evidentialism is best seen as a synchronic theory of rationality, not a diachronic one.

If we don’t believe that trust counts as evidence at the level of the diachronic, then we’d have to say that trust is (at worst) a merely sociological event that is of no philosophical interest, and (at best) involves a non-epistemic sense of justification (e.g., as Feldman suggests, a prudential one).

And while I agree that trust is a prudential notion about how we ought to pursue our personal projects as human beings, it seems that trust is also a conception of how we ought to conduct ourselves as responsible knowers. Trust is the causal link between (t-1) and (t-2) that made the boy acquiesce; furthermore, trust is the boy’s evidence for accepting the testimony of the pastor as true, and not just as the pastor’s interesting opinion; and trust is the reason why (1) really is true, since (1) is only true through deference, and there cannot be any genuine deference without trust. And, finally, if either the pastor or the boy had lacked trust, but all other events had remained the same, then we would have grounds to think that the pastor simply was not warranted in asserting (1).


In antiquity, the word “truth” (derived from “troth”) meant faithfulness, good faith, or loyalty. I’ve suggested here that there is one special context in which truth has retained its initial connotations.

I only worry that the Piraha would not approve.


(Corrected Feb 20: it’s the “chair of Peter”, not the “chair of David”. Apologies.)

Morality — whether you want it or not

Originally published at Talking Philosophy Magazine blog.

Abstract. There are some good reasons for us to use the concept of “moral realism”, in the following sense: moral realism asks us to think of morality as independent of the will. It entails moral optimism — that, all other things equal, the interests of the right will triumph. Moreover, it suggests that some interests are objective because we didn’t choose them. If moral claims are “real”, it’s because they have a force whether we want them or not. Yet if moral regularities are “real”, it is because they derive from instincts (sympathy and resentment) that are independent of the will. And, perhaps, instinctive sympathy and resentment are more important than the other parts of our psychology. If so, then moral realism is defensible because moral norms hold (for morally competent observers) whether we want them or not.


Brief remarks on lying

Jean Kazez asks about lying (in the context of some internet drama):

It’s an interesting thing how offensive it is to be called, or even thought, a liar.  Liars don’t break anyone’s bones, but to be a liar is a really, really bad thing.  Why?

To be a liar is to betray, to do violence to a person’s projects by breaking their trust. Betrayals are the lowest form of devilry because they exploit the weaknesses of innocence in order to perform wrongs.

It’s offensive to be thought a liar for the same reason that liars are offensive. To be thought a liar, when one has not broken trust, is to have violence done to your projects even while you have kept trust. However, to be wrongly thought a liar also involves a degree of irony: you are having your projects violated and your trustworthiness questioned, for the reason that you are thought to have violated other projects and broken trust. That adds to the sting.

Brandon (in that thread) emphasizes the bad social consequences, as well. That’s true, of course. But that’s just to say that one’s projects, which involve cooperation with others, are going to be frustrated.

Lies of omission are more difficult to deal with. Unlike Jean, I’m not sure the protection of a source counts as a lie of omission, because the journalist surely acknowledges that there is a source from the start. For journalists, there is a representation of the anonymous source, X, and there is the actual person, (a); the journalist (tacitly) reveals that there is X, but does not reveal that it is (a). A lie of omission would involve failure to mention X, just in case that information were both not available to people and it would make a difference.

But that’s only the one case. A more general formulation of a lie, which captures both lies of omission and commission, might turn on whether or not X is relevant to the inferences that have been made in the discussion. We might call something “relevant” just in case it would change our verdicts concerning the truth-conditions of a great many statements raised in the discussion. (And let’s assume that truth-conditions of propositions have content that is knowable, finite, and at least to a reasonable degree reflective of the surface content of our utterances.)

Two relevant features there: the lie has to affect a “great many” statements, not just one, and it has to be about prior inferences in conversation, not future ones.

  • A great many statements. If I say, “The cat is on the mat”, it makes no difference whether I specify something about the color of the cat or the mat — unless the rest of the prior conversation involves a great many inferences about colors. If my leaving out the color of the cat and of the mat leads people to infer wrongly, and then continue on to conduct a conversation that is premised on those wrong inferences, then it is a lie of omission for me to participate in the conversation without flagging the counter-factual and speculative nature of what’s being said. Sadly, that has the consequence of making most contemporary puzzle-solving in philosophy the activity of liars.
  • On prior inferences. This seems like a necessary constraint, because relevance needs to be something we can talk about empirically, while future subjects of a conversation are not things we can arrive at by divination.

These features distinguish lies from other activities, including secrets. A mere secret makes no difference to the inferences in play, or only makes a difference to some irrelevant inference, or makes a difference to some possible inferences that nobody has expressed an interest in.
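The relevance criterion can be mashed into a toy formalization. Everything below is my own illustration, not anything from the post: the function names, the numeric threshold standing in for “a great many”, and the toy verdict function are all assumptions. The idea is just that omitted information X is relevant iff supplying it would flip the truth-verdicts of enough prior statements.

```python
# Toy formalization (illustrative assumptions throughout): omitted
# information x is relevant iff it flips the truth-verdicts of at
# least `many` statements already made in the discussion.

def relevant(x, prior_statements, verdict, many=2):
    """x is relevant iff supplying it flips verdicts on >= `many` prior statements."""
    flips = [s for s in prior_statements
             if verdict(s, background=None) != verdict(s, background=x)]
    return len(flips) >= many

# A toy discussion in which verdicts on color-talk depend on whether
# the cat's color has been disclosed:
def verdict(statement, background):
    if "color" in statement:
        return background == "the cat is black"
    return True  # color-independent statements keep their verdict

prior = ["the cat is on the mat",
         "the cat's color matches the mat",
         "its color is dark"]

print(relevant("the cat is black", prior, verdict))
# prints True: two prior verdicts flip, so omitting the color would
# (on this toy model) count as a lie of omission
```

A mere secret, on this model, is an x that flips no verdicts (or fewer than the threshold), which tracks the distinction drawn above.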