Innate games and constitutive norms

It’s absurd to say that the game of soccer is innate. Why? Because it’s silly to think that the information encoded in our genes gives expression, on minimal triggering, to phylogenetic traits that track the complex set of rules that make up ‘soccer’. Similarly, it is absurd to talk about most games as innate — chess, badminton, Uno, and so on.

Indeed, you’d expect this point to apply to all games. But maybe it doesn’t. For, here’s a proposal: a game isn’t much more than a set of playable tricks. And some tricks are, plausibly, innate under some general description. Example. When my dog plays catch, the ‘catch-and-return’ instinct seems like an innate trick, because it comes too quickly and too easily to too many dogs with a similar genetic makeup. Furthermore, the trick itself is pretty much all there is to say about the rules of the game.

I’m cheating a little. Granted, the particular manifestation of the game that my dog (Sammy) plays cannot be reduced to its natural components. Typically, the game he plays is best captured under a richer description — “Catch the Monkeyman”, owing to the fact that his chew toy was (in better days) vaguely monkey-man-shaped. And of course it would be weird to attribute to him a monkey-man-toy-responsive trait, given that I’ve seen other dogs play a similar game of catch without the need for monkeymen. Still, if you fudge the edges of the example, it looks like catch-and-return is a case of a game that is innate for the species.

That doesn’t mean that all games are innate. Presumably, few are. What is interesting to me is that there is a predictable structure to games, as many of our games correspond to assemblages of these favorite natural tricks. Moreover, the rich description of a game probably far exceeds what you would get if you cobbled together all the natural tricks it takes to play it, in the same way that the “Monkeyman” description exceeds the catch-and-return game.

That said, if you could describe the essential or enduring structure of a game in terms of its natural tricks, you might have a stronger basis for talking about which norms are truly constitutive of the game. So, e.g., despite its name, “Catch the Monkeyman” is not really about the Monkeyman. Similarly — shifting examples to one that is more philosophically interesting — if we want to talk about truth as the constitutive norm of the game of assertion, we should be ready to talk about a truth-directed representational trick in our minds that provides structure to the activity.

On public assertion

Since 2006 or so, I have thought that the idea of knowledge as the constitutive norm of assertion is a mistake, and have at various points offered various reasons for saying so. Some depend on my views about the nature of ‘truth’, ‘belief’, and ‘intuition’, on philosophical pedagogy, and on other things. The upshot, I guess, is that Moore’s paradox — “P, but I don’t know that P” — is indeed permissible to assert when the contents of P are apt without being truth-apt (e.g., indefinite predicates and other forms of factually defective discourse). Since critiques of the knowledge norm have been explored capably by others, there is no point in my continuing to grind that axe here.

Recently, though, part of me has worried that our current epistemic crisis in politics is a real-world consequence of denying that knowledge is constitutive of assertion. It would be an awful shame if any of these points somehow blessed the hearts of populist liars and career-long bullshitters. Nor is the worry confined to the sphere of politics: some have wondered whether published works in philosophy should obey something like a knowledge or sincere-belief norm.

So, it might help to make a crucial distinction. Indeed, I do think knowledge constitutes something: namely, it constitutes the context of *public assertion* — i.e., following Arendt, the context where people are treated as provisional equals, where interlocutors have presumptive reasons to take each other seriously as givers and takers of reasons (e.g., during peer disagreement). That gives rise to our deep conviction that Moore’s paradox is intolerable in Orwellian spaces.

The diagnosis, then, is that our epistemic crisis can’t properly be seen as coming out of a disagreement about a rarefied paradox. It comes out of the fact that public discourse has collapsed, and there are no institutions that incentivize us to look at each other as if we share a common cause. And that seems not only far more plausible than a worry about philosophy of language; it also connects much more directly and obviously with the facts about material class inequalities which are so obviously central to our current slide into fascism.

It is our lot to reason why

Abstract. An account of the nature of inference should satisfy the following demands. First, it should not be grounded in unarticulated stipulations about the proprieties of judgment; second, it should explain anomalous inferences, like borderline cases and the Moorean phenomenon; and third, it should explain why Carroll’s parable and tonk are not inferences. The aim of this paper is to demonstrate that the goodness-making approach to inference can make sense of anomalous inferences just in case we assume the proper functioning of two specific kinds of background capacities, related to the integration of information during categorization, and norms of disclosure which govern the conditions for assent. To the extent that inference depends on these background capacities, its normativity is best seen as partly deriving from facts about our cognitive lives, not from mere stipulation.


Who killed the Agrippan trilemma?

Are most logical fallacies defective? Below, I will argue that the answer is ‘yes’. That is, I shall argue that a great many logical fallacies do not themselves provide even prima facie grounds for rational doubt, even when they are applied in their standard, appropriate contexts.

Skeptics in the ancient world referred to the “Agrippan trilemma”, which is a set of three fallacies — infinite regress, arbitrary stipulation, and circularity — that are supposedly shared by all (or virtually all) arguments. However, the state of the Agrippan trilemma isn’t what it used to be. A complaint about infinite regress is wholly uninteresting to the defender of infinitism; complaints about arbitrary assumptions are ultimately of no consequence to the foundationalist; the complaint about circularity has no traction for the coherentist. None of these views are absurd (though some are late bloomers). (Even the idea that all contradictions are false is now suspect, if you’re a dialetheist, though this is far more controversial.)

These are, admittedly, signs that the relevant fallacies are weakened by substantive modern-day philosophical positions, not a reason to think the fallacies have lost their prima facie luster. But this post is not just about those fallacies. It is, instead, a worry (or observation) about a broader tendency — which is that a war of attrition is being fought against skepticism, and that skepticism appears to be on the losing side.

Let’s consider some other examples.

The “strawperson fallacy” is hard to take seriously when many quite good articles in philosophy engage in a refutation of ideal-types of a cluster or syndrome of related arguments in a corpus. On one very plausible reading of the concept of intuition, the “appeal to incredulity” is simply an appeal to intuition under a guise, which has a non-trivial (though limited) role in legitimate inquiry. Since intuitions are an intellectually complex form of feeling, “appeals to emotion” must also be valid on occasion: in particular, when pointing to the difference between inferences and mere associations.

The “slippery slope fallacy” is hard to reconcile with a standard worry issued in critical theory, which is that inquiry has to take into consideration the consequences of the thing being posited. If my conception of “racism” or “sexism” has pernicious consequences, then that would seem to count as a reason against that conception, irrespective of its empirical plausibility. The reason is *not* because we think justice trumps truth, but because we acknowledge that social groups are interactive kinds.

If Kuhn is right, then “special pleading” is routine in the natural sciences. When you are confronted with a surprising and seemingly unnatural result, the right heuristic is to assume you did the experiment wrong. Potential falsifiers show up all the time, and nobody cares, because these would-be falsifications are probably just mistakes. See, e.g., cold fusion.

If we are to have any respect at all for the dignity of other groups and their right to define their own self-conceptions, then we end up having to concede that “ad hominems” are legitimate when they are levelled against speakers who have crossed epistemological jurisdictions, and the assertion of what counts as a “true Scotsman” is legitimate when asserted within the scope of those jurisdictions. “Bandwagons” and the “genetic fallacy” are legitimate under the same conditions.

On the face of it, “appeal to authority” would make nonsense of legal positivism (and, in my opinion, the entirety of moral discourse), which if true would be pretty good reason to think it is a hasty accusation. Also, the accusation latent in the “tu quoque fallacy” seems to undermine a vital presupposition of moral claims, which is that the person who asserts a moral claim has some kind of shared access to the conditions that make the claim rationally authoritative. Hence if I say “stealing is wrong,” and I am a thief, then not only can you accuse me of hypocrisy — you can also infer that I provide no justification for believing that stealing is wrong. Since the burden of proof is on me to provide that justification, then all other things equal, you can forbear from deferring to what I have said: i.e., that stealing is wrong. But maybe, when it comes to some subjects, the burden of proof does not lie with the one who asserts, but instead with any interested party. In that case, “tu quoque” remains a fallacy, though the idea of “burden of proof” looks like it has some holes in it.

They say that “the plural of anecdote is not data”. Taken literally, this is nonsense: if anecdotes were not even data, they would be so fully uninstructive as to be unintelligible. Data is the informational equivalent of garbage. But what people mean is that anecdotes are not evidence — that is, they are not, on the face of it, public reasons for belief in the truth of some proposition. Or, alternatively: they may be data, but they are not good data.

Still, while anecdotes are not public reasons for belief, they surely are private reasons for belief insofar as the stories we tell ourselves about our experiences are involved in the production and reproduction of accurate memories. The plural of “anecdote” is not “evidence”, but rather, “narrative”. And since my identity is partly made up of my honest narrative, anecdotes had better count for something in a factual story about who I am.

In the above, I presented a litany of arguments against many fallacies. According to the arguments I introduced, the fallacies themselves are defective — either they apply in a narrower [or otherwise qualitatively different] range of contexts than is usually advertised, or they apply across those contexts with null [or diminished] force. Each is grounded in arguments offered in the philosophical literature, and I find most of them persuasive. In any case, if they reflect any limitations or qualifications related to the ordinary ‘introduction to critical thinking’ toolkit, then it seems like it must be pedagogically important for students to learn that relatively early on.

Of course, I should stress that the list is incomplete. I have ignored other fallacies which I do not really have occasion to doubt — the gambler’s fallacy, false dichotomy, loaded question, begging the question, false cause, appeal to nature, composition/division, Texas sharpshooter, and the middle ground. I do not exempt them from criticism because there is nothing to say against them — to the contrary, e.g., I might say that current citation practices in philosophy are less about rigorous meta-analysis and more about “Texas sharpshootin'” and then construct an immanent critique. I exempt them only because the arguments against them are, in my view, rather flimsy, and I don’t want to waste anyone’s time with critiques that depart from good taste.

[Note, 2019 — slight change in the wording for the sake of precision. Any substantive changes in meaning have been put in square brackets.]

Sophiboles: or, cases of cooperative misleading

I am still thinking about misleading and truth, prompted by an interesting and thought-provoking talk by Jennifer Saul last week. Many of my intuitions have gained form and structure from her presentation. In it, she argued that misleading and lying are not (all other things equal) morally different. Importantly, Saul suggested that misleading can be different from lying in one special subset of cases — effectively, in those contexts where the listener can be reasonably expected to have special duties to scrutinize the testimony before them, owing to the adversariality of the context and the capacity of the listener to engage in critical inquiry.

I have long had reservations about academics and the subject of truth-telling. So, here’s an essay from 2006: (http://www.butterfliesandwheels.org/…/who-needs-sophistry-…/) In it, I argued that the public assertion of certain kinds of exaggeration is sometimes both faultless and laudable. Over the past decade I have had plenty of occasion to have that thesis challenged, but am generally unpersuaded by those challenges.

In that essay I argue that philosophers and scientists frequently engage in a kind of wise exaggeration, which I have mentally given the label of “sophiboles”. That is, we faultlessly assert things in a black-and-white bivalent fashion, when the closest justified belief is much more complex. Example. According to his critics, Galileo was guilty of asserting a sophibole when he decided to cast aside fictionalist and probabilist readings of the evidence; and for what it’s worth, I’m inclined to say that he is guilty of doing right. (Anyway, this is my simplistic conception of the history, and it reminds me that I really ought to read Alice Dreger’s 2015 book, “Galileo’s Middle Finger”. But for now it’ll suffice as a toy case.)

Are sophiboles cases of misleading? Much depends on how you define “misleading”. To me, “misleading” involves distracting someone from apprehending a true proposition that is worth caring about in a conversational context: it cues belief in a falsehood, or draws attention away from a truth, without explicitly asserting a falsehood. (It is hard not to include reference to what conversation partners care about if we are to assess them in terms of the cooperative maxims.)

Unlike most cases of misleading, sophiboles constructively focus our attention upon *true* beliefs worth caring about, and are not directed towards the malicious creation of false beliefs. For Galileo, e.g., the aim was the truth of heliocentrism as a model of the solar system, not the inculcation of a false belief about the solar system. Suffice it to say, Galileo did not lie in any of this; he did not assert a falsehood. Moreover, his intention was to lead us to a truth about the world, not to lead us to a falsehood.

But that will not save his sophibole from being a case of misleading, since people in a cooperative conversation can be concerned with different things, and they can disagree about the truths worth caring about in such contexts, so long as those cross-purposes are jointly acknowledged. So, the Church — wanting Galileo to tone down his rhetoric — encouraged him to adopt a probabilist or fictionalist vernacular. Those little qualifiers (i.e., “In all probability, p…”) mattered to them. For them, Galileo was attempting to mislead his audience about the epistemic, or second-order, status of his claims. Galileo’s actual heliocentric claims were true, but (according to his critics) the realist statement of his claims misled people about the form of justification, and in that sense distracted people from an important truth about the limits of our knowledge. Galileo was misleading about something worth caring about.

To be sure, Galileo’s highly politicized insistence on realist rhetoric soon evoked an adversarial context. And, FWIW, I would even argue that he was right to be adversarial, because while neither party departed from intellectual good faith, the Church’s epistemic concerns were simply not as worth caring about as the realist ones. (There’s that famous middle finger of his.)

But that’s a historical contingency. My point is that we should be able to see the two parties continuing to accuse each other of misleading even if they had been able to maintain a cooperative dialogue. And so misleading, at least in the form of sophiboles, is generally not so bad as lying.

Richard Rorty on truth, deference, and assertion

Rorty, Richard. “Putnam and the Relativist Menace.” Journal of Philosophy, vol. XC, no. 9 (1993), p. 443.

One of the more frenetic topics in contemporary epistemology is warranted assertability — i.e., what it is rational to put forward as an assertion. Much of the issue depends on what the whole point of an assertoric speech act is supposed to be, and whether or not the point of assertion can be articulated in terms of constitutive norms. Some folks, like Timothy Williamson, argue that you are only warranted in asserting things you know, since the whole point of assertion is to transfer knowledge. If you assert something you don’t know, then the listener is entitled to resent the assertion, and (presumably) it is also rational for you to be ashamed of having made the assertion. Others argue that this is a very high bar, and that it makes more sense to say that you might be warranted in asserting a merely reasonable belief. If you assert something as true, without actually knowing it is true, then it might not be rational for you to be ashamed of yourself, nor does it follow that others are entitled to resent you for what you’ve said.

What does Richard Rorty think? Rorty argues that you are only warranted in asserting something so long as what you say is acceptable in a linguistic community. “So all ‘a fact of the matter about whether p is a warranted assertion’ can mean is ‘a fact of the matter about our ability to feel solidarity with a community that views p as warranted’.” (pp. 452-453) Rorty argues that the conditions under which an assertion is warranted are relative to how we feel about the views that would be held by an idealized version of our own community. That is the sense in which he’s a relativist. What is assertable in one speech community might be totally verboten in another. As far as Rorty is concerned, assertability is a concept that belongs to sociology and not epistemology.

For Rorty, the meaning of “our community” or “our society” is determined by common ground. For example, he uses the term “wet liberalism” to describe the community that Rorty and Putnam share, as if the fact that they both belonged to the liberal tradition was what placed them in the same community. (p. 452) (I don’t think that it’s necessary for us to make reference to political ideology when we talk about “our linguistic community”, but it’s at least one candidate.) Whatever criterion you use to pick out the relevant linguistic community, there is a sense in which you have got to be in solidarity with that community. (pp. 453-454) The upshot: for the purposes of making a rational assertion, you’ve first got to assume you’re part of a common trust.

Now for the weird, relativist twist: Rorty thinks truth is all about deference to the idealized community of future knowers. If you say, “Rutabagas are red”, then that claim is true just in case a future idealized version of yourself would say it too. Insofar as Rorty is concerned with the notion of truth, he thinks we are interested in whether or not an idealized future society of knowers would affirm or deny what we’ve said. (p. 450) Truth is just a vague term of approbation, more or less synonymous with trust; and, evidently, trust is the ultimate truth-maker.

Trust as a truth-maker [tpm]

Daniel Everett entered Brazil as a Christian missionary. Then he encountered the Piraha people, a community that is indigenous to Brazil, and lived among them for a while. And as a result of encountering the Piraha, he lost his faith.

The Piraha are interesting for a great many reasons, foremost among them being that their culture is based on immediate experience. Everett describes them as “the ultimate empiricists”, because they have no respect for explanations of remote facts. For example, when Everett attempted to convey stories of Jesus and the sermon on the mount, his efforts were laughed off; to the Piraha, he was either credulous or delusional, since he had not witnessed the sermon firsthand.

This is just to say that, for all intents and purposes, the Piraha endorse a kind of evidentialism. Evidentialism is the idea that we have a responsibility to only believe things in proportion to the evidence. Compare that to the missionary Everett, who was a fideist — meaning, he believed certain religious claims were true on the basis of choice, commitment, and faith.

In a sense, the difference between the missionary Everett and the Piraha echoes an argument in epistemology. W.K. Clifford, a sabre-rattling epistemologist from yesteryear, argued that it is a sin against humankind to believe something on insufficient evidence: to be deluded is to be irrational, and worse. Pragmatist philosophers like William James bemoaned Clifford’s hellfire, and defended the idea that an ethical belief can be supported by force of will. Contemporary evidentialists like Richard Feldman and Earl Conee have goals that are slightly more modest than those Clifford had. Feldman and Conee argue that it is epistemically mistaken to believe out of proportion to the evidence.

I am an evidentialist, in the sense that I think evidentialism is platitudinous — it is surely correct to say that all objective knowers ought to apportion their beliefs to the evidence. But I also think that evidentialism is relatively trivial — evidence and volition are not mutually exclusive. Following the constructionism of John Searle, it turns out that sometimes you can believe a proposition at will, and — bizarrely — trust counts as strong evidence in favor of the truth of the belief.

~

A pastor stands before his assembled flock at mass. The pastor has noticed that over the past few weeks donations in the collection plate have been diminishing. For a brief moment, he suspects there may be a thief around. On this particular day, the pastor has privately observed that a particular teenage boy has snatched some donations from the plate as it makes its rounds. A calm immediately passes over the pastor’s mind. For though the pastor knows that the boy is prone to mischief, the pastor also knows that they are otherwise impressionable and pious. Now suppose the pastor, in his sermon, mentions the mystery of the diminishing funds. In the midst of his speech, he sincerely endorses this proposition:

  1. I know that no-one who is part of this congregation is a thief in their heart.

The pastor says this with all appropriate showmanship – credulous intonations, sweeping gestures – in order to convey his belief that the congregation is made up of virtuous souls. But since the pastor has observed the boy taking the money, we should say that the pastor has made an utterance that is contrary to the external evidence, and is unjustified.

Let (t-1) be the belief in (1) prior to the utterance, and let (t-2) be the belief in (1) after the utterance.

Insofar as we think that (1) is the expression of the pastor’s own sincere beliefs, we might think that the utterance is faulty. Strictly speaking, his prior belief (t-1) is a delusion, since it is a belief that is directly contrary to the external evidence.

Yet the effect of the pastor’s words and bearing is as if it had conveyed a secret message to the boy: I know what you have done, and now you know that I know. As a result of the pastor’s utterance, the boy quietly defers to the pastor. Ashamed at his petty crime, the boy resolves to never steal again, and immediately returns the funds to the plate.

What is remarkable about this case is that simply by uttering (1), the pastor has at the very same moment (with the cooperation of the intended audience) brought about the state of affairs described by (1). The pastor’s prior delusion (t-1) suddenly transformed into an objective fact of the matter after it had been expressed (t-2). The utterance (1) is very much like what John Searle called a status function declaration. The assertion is true because the pastor represented it as true, and it was taken as true by the boy.

In short, the pastor made up the facts — and he got away with it. And “getting away with it” for the right sorts of reasons is all that is required to make the claim true.

~

In the above example, trust is the thing that makes (1) true. But of course, this is not a feature of all — or even most — evidential claims. No matter how much you trust a homeopath, trust alone will not make their snake oil work.

I think there is quite a lot to recommend the idea that trust can make some claims true. For one thing, it makes sense of the tenaciousness of systematic illusions — the illusions involved in organized religion, for instance — in such a way that we are capable of attributing rationality to them at some level. (Since the presumption of rationality is essential to social scientific explanations, this is only bad news for the cynic.) For another thing, it gives an account of how effective threats to those institutions pose a rational existential crisis in those who buy into them. As the Catholic Church has learned in Ireland, breaches of trust can be both morally outrageous and world-breaking.

(And to their credit, some ancient institutions will occasionally recognize the theoretical limits of their supposed magisteria. For instance, according to Catholic dogma, even the Pope’s infallibility is limited to pronouncements made ex cathedra. So if Mr. Ratzinger were to declare that the Earth has sixteen moons, then he would not be speaking from the chair of Peter, and hence not saying something true.)

So there’s no need to worry that recognizing trust as a truth-maker will lead to an epistemic disaster, and there are some good reasons to think that it makes sense of how the social world works. But even so, this is still a disturbing line of argument. For any free-thinking person who is not dead from the neck up, the idea that authorities can just make facts up from out of nowhere is a complete and utter scandal. And the above argument confounds the initial motivation for evidentialism, which is to reject the idea that wishful thinking can be conducive to rationality.

So the disturbed evidentialist might explain the pastor’s story by saying that at any particular moment in time, trust is never a part of the evidence. The idea is that the prior belief (t-1) and the subsequent belief (t-2) can only be judged on their own terms, and not compared to one another. As such, it would turn out that (t-1) is just the pastor’s delusion, and (t-2) is made true by the decision of the boy — in both cases, trust is not the truth-maker. In other words, the account would have to be synchronic (at one time), not diachronic (across time). This is consistent with what Feldman suggests in his essay “The Ethics of Belief”, when he claims that evidentialism is best seen as a synchronic theory of rationality, not a diachronic one.

If we don’t believe that trust counts as evidence at the level of the diachronic, then we’d have to say that trust is (at worst) a merely sociological event that is of no philosophical interest, and (at best) involves a non-epistemic sense of justification (e.g., as Feldman suggests, a prudential one).

And while I agree that trust is a prudential notion about how we ought to pursue our personal projects as human beings, it seems that trust is also a conception of how we ought to conduct ourselves as responsible knowers. Trust is the causal link between (t-1) and (t-2) that made the boy acquiesce; furthermore, trust is the boy’s evidence for accepting the testimony of the pastor as true, and not just as the pastor’s interesting opinion; and trust is the reason why (1) really is true, since (1) is only true through deference, and there cannot be any genuine deference without trust. And, finally, if either the pastor or the boy had lacked trust, but all other events had remained the same, then we would have grounds to think that the pastor simply was not warranted in asserting (1).

~

Originally, the word “truth” (cognate with “troth”) meant faithfulness, good faith, or loyalty. I’ve suggested here that there is one special context in which truth has retained its initial connotations.

I only worry that the Piraha would not approve.

—–

(Corrected Feb 20: it’s the “chair of Peter”, not the “chair of David”. Apologies.)

Realisms [tpm]

[Originally posted at Talking Philosophy Magazine blog]

Abstract: Sometimes it is thought that the fate of philosophy itself is tied to the debate between realism and anti-realism. According to one plausible rendering of the difference between realism and anti-realism in metaphysics, due to Crispin Wright, “realism” is a modest doctrine, while “idealism” is immodest. If anyone was an idealist, Bishop George Berkeley certainly was one. I argue that, by most lights, Berkeley’s metaphysics was modest, which (surprisingly) makes him a realist. The upshot is either that Wright’s articulation of the realist/anti-realist distinction is off-base, or there is less to the realist/anti-realist distinction than meets the eye. I suspect the latter.
