Abstract. An account of the nature of inference should satisfy the following demands. First, it should not be grounded in unarticulated stipulations about the proprieties of judgment; second, it should explain anomalous inferences, like those involving borderline cases and the Moorean phenomenon; and third, it should explain why Carroll's parable and tonk are not inferences. The aim of this paper is to demonstrate that the goodness-making approach to inference can make sense of anomalous inferences just in case we assume the proper functioning of two specific kinds of background capacities: the integration of information during categorization, and norms of disclosure which govern the conditions for assent. To the extent that inference depends on these background capacities, its normativity is best seen as partly deriving from facts about our cognitive lives, not from mere stipulation.
Recently, it has been lamented that we lack a clear understanding of the nature of inference as such. In particular, we seem to have no compelling account of the basing (or binding) relation between premises and conclusions. (Boghossian 2014) We do know that this basing relation involves the background assumption that the relevant passages of judgment involve some mechanism in reasoning that functions to mitigate errors during the process: e.g., truth-preservation in non-defeasible formal inference. But I see no reason to believe that the presumption of error mitigation is unique to inference, as opposed to more general cognitive processes like perception or volition.
The absence of a full account should be keenly felt. For it is easy to anticipate a shift in the wider intellectual culture towards the view that notions of validity, fallacy, and entailment are illegitimate, i.e., imposed by fiat instead of discovered through the diagnosis of relevant connections between judgments. (Fricker 2007, 3) Any philosopher battle-hardened by the Science Wars of the '90s, or who is concerned with the rise of irrational demagoguery in the late '10s, might think it quite sensible to identify existing developments in the study of inference and to acknowledge areas of remaining concern.
To be sure, there are at least some sensible things we might be able to say about inference. On first approximation, we might say that the study of inference is more than just inquiry into implication or entailment in a deductive argument, but is instead about reasoned changes in view. (Harman 1986, 3-10) An investigation into inference can fruitfully expand on this formulation in three ways. First, by committing explicitly to the proposal that inference is a noun-like abstraction from the doing-word, ‘inferring’, which is an epistemically valuable process. Inferring is, at least facially, a way of knowing — or, at any rate, a way of managing what we know, broadly conceived. (Koziolek 2017) Second, by acknowledging that the prototypical function of inference is to manage truths in the process of inquiry — and so to accept that the study of bare implication plays an important role in the study of inference, however one feels about the status of deductive logic as an exemplar. (Harman 1986, 3, 11-20) Third, by acknowledging that this process of bringing about beliefs in conclusions by way of beliefs in premises is a normative relationship. (Boghossian 2014; Broome 2014) These are all points that I take on board without further comment.
On this last point, however, it is sometimes held that the naturalist has no tenable account of the normativity of inference. (Boghossian 2014; Wright 2014) ‘Naturalism’, in the relevant sense, is the view that normative facts are either identical to, or issue from, natural facts. The allegation that naturalism cannot account for the normative is part of an established tradition in philosophy which casts doubt on the potential for causes to provide an adequate account of reasons.
However, the case against naturalism has mixed merits. Sure enough, a reasonable person might cast doubt on naturalistic reductions of reasons to causes. Moreover, there is plenty of room to doubt that an account of inference should be fully psychological, since our capacities considered in isolation seem to face the challenge of making sense of sharable mental contents, and hence threaten solipsism or relativism. Yet these plausible points exaggerate the case against naturalism. For it seems plausible to say that normativity between premises and conclusions emerges in part out of naturalistic processes which bind the inputs and outputs of judgment. And these natural processes are, ultimately, redescriptions of causal systems.
With few exceptions — e.g., Ralph Wedgwood’s ‘The Nature of Normativity’ (2007) — it sometimes seems as though naturalism has been given short shrift. At the time of this writing, much of the literature in analytic philosophy on the nature of inference is conspicuously silent about informational-semantic and neuroscientific accounts of inference. On the semantic side, Ruth Millikan (1989) and Fred Dretske (1986) have yet to be given a proper treatment by the anti-naturalist strain of that literature. In neuroscience, information integration theory holds that inference is a valuation operation within a functionalist cognitive architecture (Anderson 2014, 104-107), and is thus normative in the thin sense at issue. Meanwhile, Eliasmith et al. (2012) treat inference as the binding of neural representations in competition — again, treating inference naturalistically in terms of causal processes. One sometimes gets the impression that naturalism has been rejected unfairly.
So, in this essay, I will come to an indirect defense of naturalism. I shall make the case that we ought to reject a particular kind of anti-naturalism — the kind which attempts to account for inference through stipulation, e.g., by appealing to exotic explanatory primitives as unexplained explainers. Since my dialectical focus is to argue against the stipulative orientation, I will have to forgo any elaborate comparison or contrast with other naturalisms (e.g., Wedgwood’s).
Strategy
For the sake of clarity, we should commit to a treatment of at least two issues. First, a suitable characterization of inference should enumerate which cognitive processes or abstract logical structures are, even on the face of it, distinctive of the phenomenon in question: error-discriminating epistemic processes of deliberation like inductive, deductive, analogical, and abductive reasoning, as opposed to epistemically defective processes like free association, guessing, or tonk. If such an account is able to shed light on canonical philosophical puzzles concerning inference — e.g., Carroll’s parable, or Heraclitus’s river — then all the better. Second, we might ask what criteria make it the case that the given processes manage those judgments in an appropriate fashion, yielding the sought-after basing relation between premises and conclusions. One might refer to the first issue as the ‘kinds problem’, and the second as the ‘sortal problem’.
The best overall means of addressing the sortal problem is to explain the fact that inferences are always good inferences under some description. One way to demonstrate the force of this principle is by turning one’s attention to the Moorean paradox of theoretical inference. It has been argued that the face-value property of goodness (or correctness, appropriateness, or legitimacy) that belongs to all inferential passages from premises to conclusions ought to be regarded as a constraint on adequate theories of inference. (Hlobil 2014) I think that is a promising view, and call it the ‘quasi-goodness’ theory. The current study offers some friendly amendments to the suggestion that all inferences are legitimate passages of judgment, by showing that the quasi-goodness account can only be defended successfully if we can explain the successful features of anomalous inferences while acknowledging their defects. We can, in other words, go about doing philosophical logic by doing a kind of philosophical etiology. During the course of finding what makes anomalous inferences genuinely inferential, I hope to persuade you that the ineradicable goodness of inference issues in part from cognitive successes in managing processes of categorization.
Be that as it may, an account of the goodness of inference must be powerful enough to make sense of anomalous inferences, i.e., inferences that are (un)worthy of assent despite appearances to the contrary. The first class of anomalous representations is the set of those inferences that concern borderline cases — say, the rational consequences of classifying a hue of turquoise as green or non-green. Paradoxes form the second class of anomalous representations, e.g., the rational consequences of asserting “It’s raining, but I don’t believe it”. Despite being anomalous, both kinds of phenomena involve passages of judgment which are rightly called inference — or so I assume. In other words, at least some cases of anomalous inference are fallacious but are expressions of a capacity to infer all the same, pace Koziolek (2017). So, when we reason with borderline cases, we are not engaging in mere associations between propositions — we are engaging in bona fide inference.
Strategically, it makes sense to tackle borderline cases and paradoxes together, since the two forms of anomalous representation are sometimes intertwined. The sorites paradox has necessary interconnections with worries over vagueness and borderline cases, for instance. (Graff 2000) More to the point, though, some inferences which concern propositions over borderline cases can be characterized as Moorean paradoxes, at least by one formulation. (Hlobil 2014, 420)
Even though paradoxes and borderline cases are two different syndromes, I think they can be given the same diagnosis. For I hold that what is distinctive of both paradoxes and borderline cases is that they involve evaluations of cases with mixed proprieties, i.e., where the inference seems both good and bad, or where the inference seems just mostly bad. Yet anomalous inferences remain bona fide inferences. So, the idea that all inference is good inference needs to be complicated through interrogation.
I will assume, in this essay, that the relevant classes of paradoxes have mixed proprieties. I take it that this assumption is one that is defensible under scrutiny. Recall that Quine distinguished between veridical paradoxes (those which can be resolved under analysis into strange truths), falsidical ones (those which can be resolved under analysis into fallacies), and antinomies (genuine crises of thought, like contradictions that force our assent). We can redescribe them in our favored vocabulary, where each is not apparently worthy of assent on balance, but where the settled truth-value and actual worth of assent differs. (Quine 1976)
The aim of this paper is to demonstrate that the quasi-goodness-making approach to inference can make sense of two sorts of anomalous inferences — borderline cases and the Moorean paradoxes — just in case we assume the proper functioning of two specific kinds of background capacities, related to the integration of information during categorization, and norms of disclosure which govern the conditions for assent in public and private contexts.
Critical Exegesis
I think the quasi-goodness account is able to make sense of anomalous inferences, but only so long as we adopt a few additional provisos about the cognitive environments in which passages of judgment can be properly considered to be inferences. The argument proceeds as follows.
I believe that inference is a cognitive phenomenon, in a suitably robust sense — what I shall call a ‘cognitive theory’ of inference. By ‘cognitive theories’, I mean the proposal that ‘inference’ is best explained in terms of the psychological proprieties of passing from one judgment to another. As I am using the term, a cognitive theory of inference holds that there really is such a phenomenon as inference, understood as an ineradicable part of epistemic practices that are productive in inquiry, and that the phenomenon is located “in the head”. On this view, for inferences to be real cognitive activities, there must be a sense in which all inference is worthy of conditional assent, and that worthiness must be articulated in terms of faculties of mind. Cognitive theories partly contrast with, and partly complement, metaphysical theories of inference, which hold that the appropriateness of inference adheres to facts about publicly accessible states in the world, be they Platonic or Fregean. For my purposes, cognitive theories are antagonistic towards those accounts which regard inferential propriety in terms of purely arbitrary postulates. And, most importantly for present purposes, cognitive theories actively resist a certain kind of formalist theory of inference, according to which there is nothing to inference apart from rules that are stipulated.
One way of establishing the productive role of inference in the process of inquiry would be to suggest that inferring invokes rules which guide the appropriate passage of judgments to conclusions. Such a recipe would involve two ingredients: first, the assumption of conceptual models which specify the rules that articulate our powers to pass from some sentences to others; and second, a means of sorting better conceptual models from worse ones. We could then say that those conceptual models which are not good are not empowered to confer conditional entitlements.
For an example of a near-miss case, consider the ‘tonk’ rule. According to the tonk rule, the following would count as legitimate passages of thought: (1) ‘P; therefore, P tonk Q’, and (2) ‘P tonk Q; therefore, Q’. If we permitted the tonk rule into our everyday conceptual model, it would have the unhappy consequence of permitting an inference from any P to any Q, rendering the scheme trivial. Suffice it to say, few of us think that ‘tonk’ is a good or legitimate inference rule. (Prior 1960; Belnap 1962) So on the assumption that every inference is a good inference, ‘tonk’ must not be a rule of inference at all, strictly speaking — not if you are a cognitive theorist.
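To make the defect vivid, the two rules can be displayed in a standard natural-deduction format (a conventional rendering of Prior’s rules; the layout is mine):

$$\frac{P}{P \;\text{tonk}\; Q}\;(\text{tonk-I}) \qquad\qquad \frac{P \;\text{tonk}\; Q}{Q}\;(\text{tonk-E})$$

Chaining tonk-I into tonk-E licenses a passage from any premise P to any conclusion Q whatsoever, which is precisely the trivializing result noted above.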
This view will not be obvious to everyone. ‘Post-inferentialism’ is a phrase I shall use to refer to a species of view according to which all rationality is, at base, a matter of mere legislation. Adherents of this view assert that criteria for logical validity specifically (and proprieties of inference more generally) rest on fiat, either as a function of social norms or formal stipulation. We should also include under this heading anyone who denies that putative answers to external questions are relevant to the status of rules within a linguistic framework as rules of inference. In consequence, since tonk can occur as a well-formed formula in some conceptual model or other, it can be considered an inference (in that model).
In his classic (1895) paper, Lewis Carroll recounts the tale of Achilles and the Tortoise. In it, the Tortoise challenges Achilles to demonstrate and explain the logical power of modus ponens. But in every case where conjoined and commonly accepted premises lead to a conclusion by way of modus ponens, the Tortoise still asks: why should we be forced to accept the truth of the conclusion, given the truth of the premises? Ever helpful, Achilles invokes another iteration of modus ponens as a premise, meant to explain the grounds of the pattern of reasoning and to bring about the conclusion. But the Tortoise ostensibly accepts the new invocation of the rule, yet still doubts that he is obliged to accept the conclusion. So Achilles provides yet another modus ponens, and the conversation descends into an infinite regress.
Here is where the post-inferentialist may intervene. Perhaps the whole problem with the dialectic is that Achilles bothered to provide the Tortoise with any reply in the first place. What the Tortoise failed to understand, on this view, is that once modus ponens has been invoked, and held as true, then there is nothing left for us to explain about its capacity to bring about the conclusion. For the force of a rule of inference is nothing other than a decision imposed by fiat. Attempts at further explanation or argument in favor of those decisions are feckless. Hence, by trying to motivate the idea that inference has a rational force, the Tortoise has shown himself to be confused about the nature of inference. And since the cognitive view of inference asks for further arguments, it must share Achilles’s fate.
Paul Boghossian argues that inference necessarily involves taking one’s premises as good reasons to come to a conclusion. On his view, every movement from premises to conclusion is conceived as adequately supporting that conclusion, involving a kind of contentful attitude towards the movement from premise to conclusion. So, if I infer Q from P, then that owes to the fact that my judgment that P is true provides rational support for believing that Q is true. Boghossian’s idea is that inference from premise to conclusion implies that the inferring agent believes that the presumed correctness of the premises provides at least prima facie epistemic support for belief in the conclusion. That is the “taking condition”, and it is his attempt at characterizing the core constituent features of inferring. (Boghossian 2014, 4-5) Consistent with this view, Boghossian believes theoretical inference involves voluntary cognition. (Boghossian 2014, 2)
For the most part, Boghossian self-consciously limits his account of inference to an account of theoretical reasoning, whose point is to develop, extend, and limit prior beliefs. During the course of his essay, Boghossian intentionally avoids treatment of the project of understanding practical reasoning, i.e., inferences which lead to a change in plans or intentions. (Boghossian 2014, 2) However, his account of theoretical inference is in fact grounded in an assumption that we have a sufficiently robust prior understanding of practical rule-following, which must be treated as an unanalyzable primitive. (Boghossian 2014, 17) While he aims only to explain theoretical inference and tables the question of practical inference, the phenomena are related.
Fundamentally, Boghossian’s proposal is that we should attempt to explain inference on the model of rule-following. (Boghossian 2014, 11-16) The idea is that every instance of following a rule is also an instance of taking some triggering conditions as a reason to execute some behavior — if that is right, then we might say that all inference is rule-following of some kind. (Boghossian 2014, 13-14) Note that the reliance on rule-following as an explanation of inference has significant intermediate steps. For Boghossian only arrives at the rule-following model of inference because it seems to him that it is the strongest candidate explanation for the taking condition; his main aim is to show that the taking condition is a necessary condition for accounts of inference. So, if it were to turn out that some inferences did not involve rule-following, then while we might find doubts cast on the explanation of theoretical inferences to rule-following, we would not thereby have reason to abandon the taking condition.
I am dissatisfied with the taking condition as an account of inference. We might worry that the decision to give an account of theoretical reasoning, as opposed to an account of inference as such, artificially restricts our attention in a way that disappoints our hope of being given an account of inference wholesale. My suspicion is that any decision to investigate inference merely as the management of beliefs would distort our description of a unified cognitive phenomenon. Indeed, we might have those worries even if we are otherwise sympathetic to the idea that the taking condition should be part of an account of inference. (Broome 2014, 19)
Another counter-argument to Boghossian’s proposal is that he focuses on taking or registering inference prospectively, to the detriment of retroactive inference assignment. For example, suppose that reasoning in natural deduction is a form of theoretical inference, involving something quite like a reasoned change in view. When constructing a proof in natural deduction, we must artfully select a good strategy for applying formal rules. Yet we might doubt that there must always be verbalizable rules behind the selection process while it is happening. At best, at least from the initial point of view, there are intuitive and educated policies adopted in accordance with stable heuristics, which are not believed in advance to satisfy the taking condition. Yet these moves might count as inferences retroactively, as we reflect on how those policies yielded success. While the later apprehension of the success of some strategy might count as ‘taking’ — at the risk of falling into the trap of over-intellectualization (McHugh et al. 2016) — the initial passage of thought would still qualify as inference despite not being taken as such at the time.
Crispin Wright sees related faults with the taking condition as an account of theoretical inference. First, he worries that the taking condition presumes that one cannot infer “from a supposition to its consequences” (e.g., drawing out the logical consequences of claims that one takes to be false), and second, that it makes no sense of cases of “reasoning to the judgement of a conclusion in a way that discharges all other premises, judged true or no” (e.g., during an assumption for conditional proof, or a reductio). (Wright 2014, 28-9) On his view, these faults are significant enough to oblige us to consider an alternative account.
Perhaps, Wright argues, theoretical inference does not always involve your taking your premises as reasons – perhaps, instead, the movement from premises to conclusions can be satisfied by the mere success of the reasons you entertain in bringing about the conclusion properly, regardless of whether you take them to count as reasons. For Wright, the distinctiveness of the character of the movement from premises to conclusion is just the fact that there is a correct kind of reason-bearing relationship between premises and conclusions, regardless of whether the thinker registers the inference to be sure-footed. (Wright 2014, 33-36)
For Wright, the criterion for inference is something like action success, not the taking condition. As an example, he provides the “anti-akrasia” norm: that, provided there is no countervailing consideration, one shall do what one believes will satisfy one’s desires. (Wright 2014, 33) Ostensibly, we do not need to take the anti-akrasia rule as a reason to act on suitable belief-desire pairs. But we normally function in accordance with these belief-desire pairs all the same, as that norm is a formal requirement for being an agent. And there is little doubt that, normally, these pairs dispose people to assent to the practical conclusion.
Wright claims that there is no analogue between the anti-akrasia norm and the taking condition since there is no mental registration of the belief-desire pair as giving sufficient reason for the action. So, the anti-akrasia norm seems to be a case of rule-following in a very special sense: it involves assent to a conclusion without registering assent to the rule.
Still, that is enough for us to consider the form of the anti-akrasia norm, and to ask whether a suitable analogue can be found. For any (B)eliefs and (D)esires with respect to some possible (a)ction, let @ be a connective describing the anti-akrasia norm, and let superscript-T represent truth. The following, sketched formally below, would be proper inferences in light of that norm. In ordinary prose, the introduction rule reads: “Suppose there is a belief and there is a desire with respect to some action; therefore, it follows that the belief and desire pair defeasibly recommend that action as the thing to be done.” The elimination rule reads: “Suppose some belief and desire pair defeasibly recommend an action; therefore, it follows that that action is the thing to be done.” The introduction rule follows directly from Wright’s characterization. The elimination rule follows from a correct understanding of anti-akrasia: that, for any obligation, it normally shall reflect a prior belief-desire pair, at least insofar as it reflects a strong will.
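Rendered in the same natural-deduction format as above (this is my own reconstruction from the prose glosses, not Wright’s notation):

$$\frac{B \qquad D}{B \,@\, D}\;(@\text{-I}) \qquad\qquad \frac{B \,@\, D}{a^{T}}\;(@\text{-E})$$

Here ‘B @ D’ says that the belief-desire pair defeasibly recommends the action a as the thing to be done, and ‘a^T’ says that a is, in fact, the thing to be done.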
Thus, a follower of Wright is free to depart from Boghossian’s approach to the kinds problem by including other non-voluntary cognitive processes: e.g., those of unsophisticated agents, children, clever dogs, and the like. (Wright 2014, 34) There is no need to claim that the taking of premises as reasons sufficient to arrive at a conclusion is necessary for inference. There is, at best, a sense that thoughts are in an orderly sequence, but this is no help. (Wright 2014, 29)
Wright seems to think that the structure of the anti-akrasia norm arises from our prior assumption of agency, which already implies a certain formal structure working in the background of cognition. It is part of the nature of agency that it takes a belief-desire pair and outputs the thing to be done. The norm specifies a binding relation between premises and conclusions without reference to the taking condition and substitutes a normal capacity for agency in its stead.
Unfortunately, modeling inference on action success may exacerbate disagreements over the kinds problem beyond tolerable levels by resulting in a situation where we are obliged to claim that some bad passages of judgment are legitimate. Consider, for instance, the fact that the analogue of the anti-akrasia norm to theoretical reasoning would not be an inference, so much as it would be a version of the I- and E-rules for tonk. Famously, tonk is thought to be an illegitimate logical operator because its E- and I-rules are not in a state of harmony since the elimination rule cannot be grounded in the introduction rule. (Dummett 1991, 246-247, 286-290) Yet the anti-akrasia norm shares that defect. Moreover, tonk is even more worthy of assent than the anti-akrasia norm, since tonk is at least truth-preserving. So it is unclear why the anti-akrasia norm should fare any better.
One might object that the anti-akrasia norm was never meant to be a logical connective in the first place, and so the analogy between @ and tonk is irrelevant to the question of whether the anti-akrasia norm qualifies as inference. Indeed, one might insist on a radical separation of the study of formal implication and the study of inference. (Harman 1986, 3-10) A fair point. Still, it seems to me that formal inference ought to be treated as an ideal site for the study of inference in general. If that is not a fool’s errand, we should expect to find assumptions underlying formal reasoning that are structurally analogous to those deployed in informal reasoning.
Wright agrees that the success-account potentially overgeneralizes, in that it apparently permits fallacious but attractive patterns of reasoning (e.g., denying the antecedent), but contends that this is not a decisive counter-consideration. (Wright 2014, 37) Unfortunately, he does not give us much reason to block the analogy between the anti-akrasia norm and seductive fallacies. Worse, given the analogy between tonk and @, one might even conclude that the anti-akrasia norm is not inference.
In short, Wright’s analysis of the anti-akrasia norm has not yet provided a sufficient explanation of what makes a successful inference. It just is what it is: from the beliefs and desires of agents, you normally get the thing to be done for free. By analogy to formal models in theoretical reason, from the truth of P, you get Q for free. And that seems insufficient, since it conflates regularities with rules. To the extent that the analogy between tonk and the anti-akrasia norm gives grounds for doubt, it is also reason to doubt that Wright’s account differs in any important sense from fiat.
To defend the success condition, we would have to make good on Wright’s assertion that the success theory could be bolstered “if we knew what to say about what it is to act for certain specific reasons”. (Wright 2014, 37) I am not sure what specificity requires, except perhaps a theory of reasons-individuation in practical action. If that is a subject of a general theory of inference, then it must involve reference to the assumptions that underlie our competence in generating such minute specifications, where competence is grounded in some non-voluntary capacities related to the constitution of agency.
So here is, perhaps, a strategy for making good on his proposal. Take it as read that some very silly passages of thought (e.g., tonk) and seductive fallacies (e.g., denying the antecedent) are normally successful in reaching a conclusion from some premises, but add that their normal success fails to pass some minimal threshold of significance which is required to secure a binding relation. The maneuver, here, would be only to observe that the passage of thought in tonk might be good in some sense, in that it participates in a rule, while not being good enough to qualify as inference. And then a story could be told about how the anti-akrasia norm is dissimilar because it is normally successful in a way that does pass that threshold of significance, e.g., owing to the background capacity.
To summarize. It is not especially clear to me that inferences are very well explained in terms of ‘taking’ a conclusion from premises because sometimes the registration of a passage of judgment as good only happens in retrospect. And it is not obvious to me that we can appeal to action-success without appealing to prior capacities in a more robust way than ‘orderly passages of judgment’. We need an account in terms of cognitive processes which are at least sometimes successful because they register success in retrospect or from an outsider’s point of view.
However, both accounts do get something right, in that both hold that inference involves the correct passage from one representation to another according to some potentially entitlement-conferring normative property which can be accessible to someone or other. In that sense, every rational inference is a good inference under some description or point of view. The question is whether the agent must consciously and antecedently register that sense of propriety for it to hold. The key to knowing how to proceed will depend on how we are meant to understand any attribution of ‘goodness’ to inference at all.
Consider a version of Moore’s paradox: “P; but I don’t know that P”. That expression seems odd. Now, consider that there is evidently an air of paradox around what we might call a Moorean meta-inference: “Q from P, but I don’t think the inference from P to Q is a good one”. That expression seems even odder.
Ulf Hlobil argues that Moore’s meta-inference and its analogs are paradigms of failed inference, and for that reason need to be explained by an account of the practice of inference. For Hlobil, the sheer obviousness of the Moorean moratorium on gainsaying one’s own assertions (and beliefs in thought) is a thing that stands in need of explanation. Whatever salient cognitive processes we baptize as ‘inferring’ should be in some way sensitive to the paradoxical features of analogs of Moore’s paradox.
Hlobil argues that this prohibition seems to pervade the process of inferring in general; I will call it the “Moorean Moratorium” (or “Moorean Constraint”):
It is either impossible or seriously irrational to infer P from Q and to judge, at the same time, that the inference from Q to P is not a good inference. (Hlobil 2014, 420)
Moore’s moratorium is supposed to be a constraint on inference wholesale. The lesson we are meant to draw from the Moorean meta-inference is that the practice of inferring inherently implies an attribution of propriety to the passage of the relevant semantic information. Left out of the question are considerations like whether the assignment of propriety is voluntary or involuntary, first-personal, etc. But what such attributions have in common seems to be a sense that, in asserting that you infer Q from P, and also asserting that you do not believe that Q follows from P, you have both assented to and dissented from one and the same inference. And this is intolerable, since both the first-order assertion and the second-order assertion of a belief share the same assent conditions. (Shoemaker 1994, 215)
Hlobil argues that the Moorean meta-inference demonstrates something constitutively important about the nature of inference, namely that someone cannot infer Q from P without also judging that the inference is appropriate to make. When you try, you end up saying intuitively strange things, like: “Q from P, but I don’t believe that Q from P”. And, Hlobil argues, this makes a difference to the dialectic between the taking condition and the success theory, because neither account explains the perniciousness of analogs of the Moorean meta-inference.
For Hlobil, the advocate of the taking condition can only explain the centrality of Moore’s meta-inference to accounts of inference if they accept that the notion of rule-following implies that it is irrational or impossible to follow a rule in a way that one takes to be incorrect. However, it is crucial to Boghossian’s account that rule-following be treated as an unanalyzable primitive (i.e., not analyzable in terms of inference), else it risks descending into an infinite regress. (Boghossian 2014, 14) Yet, if we think it is rationally impossible to follow a rule in a way that we take to be incorrect, then we are not treating rule-following as an unanalyzable primitive – instead, we are noticing that propriety is a constituent feature of rule-following. Hence, Hlobil argues, the advocate of the taking condition account is stuck in a situation where the Moorean meta-inference has been left unexplained, and by implication, where inference has not been explained. (Hlobil 2014, 425)
The advocate of the success account argues that inference satisfies the right kind of reasons-relation to action. But in his analysis of Wright, Hlobil argues that Wright’s account must posit that it is irrational or impossible for an agent to act for defective reasons: that, anyway, is implied by the anti-akrasia norm. But if that is the case, then it must either leave the origins of Moore’s constraint unexplained or must explain it in terms of an analog of Moore’s paradox in acting-for-reasons. (Hlobil 2014, 428-9)
The Moorean proposal is an interesting one, and there is a great deal that it can teach us about the nature of inference. It also helps in clearly focusing our attention on what is perhaps the most salient feature of inference: the fact that it is thinly normative, in the sense that it involves the registration of some passage of thought as appropriate. And sure enough, in the final analysis, Moore’s paradoxes usually seem to be paradigms of failed inference. What I would like to establish is that the force of these paradoxes must be shown through argument, not taken as a given. For paradoxes are anomalous representations under some description, i.e., cases of inferences (un)worthy of assent which nevertheless compel the contrary, thus demonstrating that some good inference is also bad.
Argument
How does the quasi-goodness account of inference handle anomalous inferences? I suggest there are two potential drawbacks to the suggestion that all inference is good inference under a description. I call them ‘the worry about proportions’ (directed towards inference in general) and ‘the worry about contextual entitlements’ (towards theoretical inference in particular, i.e., from belief to belief). I also propose potential solutions in terms of the cognitive processes that might be involved in the passages of premises to conclusions.
Two depictions of goodness
In the first instance, we might worry about whether the propriety (goodness) of inference implies the non-existence of any sense of impropriety. When one demands that we diagnose all inferences as potentially good, that may be read in two ways: either as a rather ambitious claim that says:

(1) we are obligated to diagnose the inferences as, on the face of it, not bad at all; that is, that inference is in no way unworthy of assent,

or as a more austere claim that says:

(2) we are obligated to extend some attribution of goodness to the inference (read: inference is at least somewhat worthy of assent).

The first view reads inference as exclusively good, according to the relevant (and apparently elusive) measure; the second, as all-other-things-equal good.
Borderline cases and the Room Conundrum
My first inclination is to say that (2) is a plausible observation. However, I also think we should decisively reject (1), since it seems clear that we can make rational inferences that are both worthy of assent and unworthy of assent at once: namely, inferences concerning borderline cases. Borderline cases just are those cases where we are doggedly unable to decide how a term applies in some set of cases, and as a result, where we are tempted to both affirm and deny a particular description of those cases. (Horwich 1998, 78-82) For example, when a man is standing on the threshold of a room, we may suppose that “The man is in the room” and “The man is not in the room” are independently apt descriptions which refer to the very same state of affairs, and suppose that conjunction introduction is a perfectly sensible rule of inference, though we might consider it rationally inappropriate to assert any inference that conjoins these two propositions with imprecise descriptions. In short, let Q be the conjunction of “The man is in the room” and its negation, and let P be the pair of premises: “The man is in the room” and its negation. Now notice that such cases are clear analogs to the Moorean meta-inference under the proffered description: the diagnostician may say ‘Q from P, but I don’t think the inference of Q from P is a good one’. “The man is in the room” and “The man is not in the room” correctly entail that “The man is in the room and not in the room”, but this inference is in some important sense not a good one, at least for those of us who take bivalence seriously. Call this the ‘Room Conundrum’.
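Schematically, letting R abbreviate “The man is in the room” (the notation is mine, for perspicuity):

$$\frac{R \qquad \neg R}{R \wedge \neg R}\;(\wedge\text{-I})$$

while the diagnostician simultaneously judges that the inference from {R, ¬R} to R ∧ ¬R is not a good one. The passage is licensed by an uncontroversial rule, yet its conclusion resists assent: exactly the mixed-propriety profile at issue.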
To be sure, on some accounts of vagueness, it may seem as though this thought-experiment is untenable. So, e.g., one might claim that all borderline cases are false or unassertable; or one might assert that they demand a shift in context. Advocates of the latter, contextualist claim might find something of use in the next section, in the discussion of one particular kind of contextual ascription. To those who think that borderline cases cannot be rationally assented to or asserted, I have little to say, except to confess that their position seems both wrong and badly motivated. I will only say that, if they would like to avoid a stipulative theory of inference, then the burden is on them to make the argument through some combination of ethnographic evidence and critical analysis.
Now, we can read that meta-inference in either sense (1) or sense (2). If we followed the Moorean moratorium in sense (1), then the Room Conundrum clearly cannot be an inference, since it involves the attribution of some impropriety, which is barred. The process of arriving at Q from P is at least somewhat bad, but it antecedently appears to be an inference all the same. Thus, (1) does not look very attractive as a constraint on inference wholesale, because the existence of the Room Conundrum falsifies that proposal. If there is a clear case of inference that is both good and bad, then we cannot expect all inferences to be not bad; because, indeed, this inference is somewhat bad, in that it partly resists assent. (1) fails to save the phenomenon at issue.
Alternately, if we followed the Moorean moratorium in sense (2), we would be left haggling over whether the sense of rational propriety that holds for the conjunction has passed some threshold of significance, such that the passage from premises to conclusions really can be thought to be appropriate in the final analysis. For we know that a merely good passage of thought is not thereby good enough to qualify as an inference. Instead, we would have to be comfortable in saying that, insofar as the Room Conundrum involves inference, it is on balance more good than bad. But that would mean Moore’s moratorium rests on contentious interpretations of propriety – which would mean that the attribution of goodness to inference is not an obvious truism, but a controversial one which requires further intellectual motivation. It would require some elaboration of the capacity which allows us to see that propriety outweighs impropriety in the passage from one statement, or class of statements, to another, within some context.
Intuition is not enough
Perhaps we might evaluate the right balance between the goodness and badness of an inference by appealing to an intuitive sense of its absurdity. For Quine, the status of a claim as a paradox is related to a kind of negative intuition we register about the statement, in the form of its surprising features. (Quine 1976) Alternately, one might read rational intuitions as apprehensions of immediate implication. (Harman 1986, 19-20) Yet the immediate problem with appeals to intuition is that they are both ‘regress-stoppers’ and opaque in their contents. (Bealer 1998; Nagel 2012) Take those two things together, and we have replaced fiat with mysticism, which is even worse.
Still, these difficulties might be resolvable. For instance, we can tell a philosophical story about how the relevant kind of rational ‘intuition’ is best understood as an occurrent state that is the product of a reason-generating process, resting on a capacity for synthesis of disparate non-rational information into discrete judgments. (e.g., Anderson 2014) This account would substitute for Boghossian’s attempt to carve out a space for “System 1.5” processing. (Boghossian 2014; based on Kahneman 2011) Though in that case, we might very well conclude that ‘intuition’ is not doing the explanatory work.
Inference in general aims for reliable categorization
A more promising approach is to directly model inference on that assumed capacity for information integration during categorization. The essential insight is that, when we infer Q from P, the goodness of that inference is fully dependent upon what we say counts as a P, and what we say counts as a Q. And the process of ‘counting as’ is, at base, the process of categorization. Inference simpliciter involves the passage of one judgment to another, on the assumption of apt categorization of constituent parts, and aims at establishing stable relationships between judgments (while implication is a special kind of inference that is truth-preserving and explains strict entailments). On this view, inference emerges as a natural consequence of facts about how we interpret information by assigning objects into categories.
This might sound like a series of non-sequiturs at the outset, but it should not. Consider the syllogism: “Socrates is a man; all men are mortal; therefore, Socrates is mortal.” What grounds the conclusion? Limited to just this special case, a full story will need to make reference to various facts which concern truth-preservation, soundness, and logical validity. But we might take a step further back. To me, intuitively, it seems as though the thing that makes each judgment adequate in isolation is the very same thing that makes the conclusion adequate in light of the premises that precede it — namely, the adequacy of categorization. The successful inference to the conclusion emerges as the product of a chain of judgments, which are in turn non-trivially dependent upon a cataloguing exercise. Inference is thus partly grounded in satisfaction, which is a cognitive process — the process of categorization.
At the outset, it must be stressed that categorization is not in itself necessarily stable or reliable. Categorization is, after all, a form of interpretation. The Room Conundrum — much like Heraclitus’s river and Goodman’s grue — is a case of the categorization of objects and their relations in judgment. This is just to say that categorization is a dynamic process which can be done ‘on the fly’. On the dynamic construal account of categorization, advocated by Croft and Cruse (2004), categorization is dynamic in the sense that objects are placed under categories on an ‘on-line’ basis, resulting in boundaries that are potentially open to non-trivial revision. And it is integrative, in the sense that lexical and propositional meaning is defeasibly but reliably primed to promote unification and stability of senses, despite the fact that conceptual boundaries are interpreted ‘on-line’. (Cruse 2000) Their account is situated in the research tradition that treats categorization as a process. (e.g., see Bruner et al. 1956)
Since the dynamic construal account treats categorization as a process, it implies at least three stages (a toy sketch in code follows the list):
* First, the reasoner considers the nature of their categorization task. Using the language of Croft and Cruse, this involves priming the raw objects of potential judgment into working memory, known as the purport of a judgment. Part of this stage demands a tacit comprehension of the demands of the categorization task, if there be any explicit task in mind — for example, whether the aim of judgment is to maximize the qualitative intelligibility of an object in terms of applicable concepts, or the aim of judgment is to achieve certainty when placing an object under its concept. The purport acts as the input into the process of categorization. (By default, the task of categorization is to match attributes of potential objects of judgment in order to recognize the salient pattern, on the presumption that unsophisticated reasoners can engage in this sort of pattern-matching exercise.)
* Second, we impose constraints on the selection criteria for these objects of potential judgment, in light of the tacit or explicit aims of judgment. The constraints can be ‘soft’ or ‘hard’, yielding relatively strict boundaries or porous ones, depending on the demands of the task and the immediate salience of the attributes. The cognitive and epistemic values that are involved in the judgment task are crucial at this stage — e.g., if simplicity or perspicuity are prized highly, then it will be necessary to construe the concept in terms of hard constraints. Some of the tradeoffs at issue depend on the degree of cognitive effort one is willing to expend on maintaining stable category boundaries: the harder the constraints, the higher the cognitive costs, and the stricter the boundaries. (Croft et al. 2004, 92-105) This stage is at the core of the categorization process.
* Third, once categorization is complete, the concept is assigned particular kinds of rules for implication, and hence for inference, depending on the strength of the chosen conditions for satisfaction. Tight-and-tidy concepts take on classical or formal conceptual structures, while messy ‘family resemblance’ style concepts lend themselves towards graded conceptual structure. The provisional conceptual structure is then reflectively reconsidered, or validated. At this stage, the aim is to show that the provisional categorization task has met with some kind of epistemically productive success, canonically expressed in terms of formal schema for implications. (This is sometimes done through a choice of formal rules for implication, depending on which logical system makes the most sense of the categorization.) After this third stage, the reasoner can reconsider their category judgments on an ongoing basis in order to determine whether or not they have achieved the demands of the task, or they may reconsider whether the values they chose are fitting to the requirements of the context that gave rise to the task, knowing full well that the results of categorization will serve as inputs into future categorizations.
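To fix ideas, here is a minimal sketch of the three stages in code. Everything here (the names, the data structures, the thresholds) is my own illustrative invention, not Croft and Cruse’s formalism; it is meant only to display the shape of the process.

```python
# A toy model of the three-stage dynamic construal process sketched above.
# All names, structures, and thresholds are illustrative inventions.

from dataclasses import dataclass

@dataclass
class Purport:
    """Stage 1: the raw object of potential judgment, primed into working memory."""
    attributes: frozenset

@dataclass
class Category:
    """Stage 2: selection criteria plus a 'hard' or 'soft' boundary constraint."""
    name: str
    criteria: frozenset
    hard_boundary: bool = True  # harder constraints: stricter boundaries, higher cost

def categorize(p: Purport, c: Category) -> bool:
    """Match attributes against criteria; soft boundaries tolerate partial fit."""
    if c.hard_boundary:
        return c.criteria <= p.attributes            # every criterion must be met
    shared = len(p.attributes & c.criteria)
    return shared / max(len(c.criteria), 1) >= 0.5   # graded: partial match suffices

def validate(judgments: list, task_demand: str) -> bool:
    """Stage 3: check provisional judgments against the demands of the task.
    The results feed back as inputs into future rounds of categorization."""
    return all(judgments) if task_demand == "certainty" else any(judgments)

# The threshold case: one and the same purport flips verdicts depending on
# how hard the category boundary is construed at stage two.
man = Purport(frozenset({"feet inside", "torso outside"}))
strict = Category("in the room", frozenset({"feet inside", "torso inside"}))
loose = Category("in the room", frozenset({"feet inside", "torso inside"}),
                 hard_boundary=False)
print(categorize(man, strict))  # False under hard constraints
print(categorize(man, loose))   # True under soft constraints
```

The point of the toy is only that one and the same purport can be categorized differently depending on the constraints selected at stage two, which is how the account makes room for borderline verdicts like the Room Conundrum.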
Explicating the kinds problem in terms of the sortal problem, we can then model different kinds of inference as idealizations of this process:
| 1.a. Determine task demands | 1.b. Set cognitive / epistemic values (given 1.a.) | 2.a. Set allowances and liabilities (given 1) | 2.b. Determine the minimal categorization requirements (given 2.a.) | 3.a. Choose validation procedure (given 2) | 3.b. Set contextual presumptions (given 3.a.) |
| --- | --- | --- | --- | --- | --- |
| Qualitative intelligibility of P(s) | Similarity; (directed) relevance | Infer common properties from common membership in graded categories | Graded or neoclassical categorization (depending on direction) | Analogy | P(a,b,c) |
| Likelihood of future P(s) | Predictability | Infer sufficiency of conclusion from sufficiency of premises | Strict singleton | Induction | Pa(t^1) |
| Plausibility of P(s) | Explanatory success; familiarity; ease of processing | Infer sufficiency from premises that are insufficiently necessary parts of an unnecessary but cogently sufficient model | Plural category with neoclassical boundaries | Abduction | E suggests P(a) |
| Certainty of P(s) | Simplicity/perspicuity; completeness | Jointly necessary and sufficient premises yield a necessary and sufficient conclusion | Plural category with strict boundaries | Classical deduction | S(a) -> All M(x) |
Two points are worth making about the process. First, every step of the process is governed by evaluation functions, meaning that categorization occurs in a way that is sensitive to the requirements of the task. As a result, categorization of initial inputs can result in wildly different outputs, depending on a diverse plurality of cognitive, pragmatic, and epistemic values (or strategies): e.g., informativeness, certainty, economy, and error mitigation. (Bruner et al. 1956) In a manner of speaking, this process involves a kind of quality-assurance checking. Second, the process is iterated, so the results of categorization are inevitably fed back in as inputs into the process, potentially producing revisions of category boundaries.
Good inference has inductive potential
That is the gist of the stepwise process, but it is still worth making a few additional higher-order observations. On this picture, the goodness of inference is dynamic (because categorization has variable boundaries at the margins), procedural (because it proceeds step-wise), and prone towards equilibrium (since the results of previous stages enter into subsequent ones).
The dynamism of the process permits some latitude in how we understand the Room Conundrum, and what the proper philosophical treatment ought to be. On the one hand, if the Room Conundrum were thought to be a case of ultimate epistemological indefiniteness — that is, a case where we are tempted to say that both expressions are simultaneously true and false — it might owe to a relative lack of investment in the concepts required for strict categorization. (Horwich 1998; Millgram 2009; Croft et al. 2004, 82-89) On the other hand, we could think that there are unusual evaluation functions in play when thinking about the conjuncts (e.g., if we were to treat the values of economy and simplicity with indifference), resulting in different contextual attributions at the validation stage. (Quine 1976)
This model of categorization is procedural, in the sense that it involves stages which occur in a series. However, close scrutiny will not yield the kind of clear predictions and hypotheses concerning causal mechanisms that you might expect from a theory of conceptual processing in cognitive psychology. So, for instance, the theory-theory, exemplar theory, and prototype theory are all viable instantiations of the dynamic construal account. (Machery 2009; Margolis et al. 1999)
Through progressive application of the process over multiple iterations, a subject can be expected to come to some studied convictions on how to properly handle the categorization of objects in their assigned categories, along with accompanying ideas about how the categorization practices are meant to fit with neighboring categories. In cases of expected boundary instability (or “hysteresis”), as in the Room Conundrum, one strategy will be to associate categories only with selected contexts of application — at least, so long as there is a reason to do so. In short, regardless of our one-off strategic choices, in the long run we come to learn through iteration how to manage categorizations in a way that maximizes stability and minimizes the need for unexpected revisions.
It is from these facts about the process of categorization that we finally see the beginnings of an account of the goodness of inference. For in the long run, we are able to develop strong linkages between the objects of some categories and others, insofar as they are applied in our judgments. And when these linkages ground passages from one thought to another, we call the associations ‘inferences’; when this occurs, it accounts for the apparent ‘theory-like’ character of some of our abstract categorization practices. They are good, more or less, insofar as they are good enough to be presumed to have staying power, a quality we might call their ‘inductive potential’. And when they are truth-preserving, we call our inferences ‘implications’, the subject of rigorous argument and proof.
Where does the apparent ‘inductive potential’ of inference come from? The legitimacy of some process of integration depends on whether the steps have been followed, and in that respect is situated in time. So, in steps (1-2), the process of integration attends to objects under some evaluation. Then we observe that the very same evaluation function may operate over a single referent with unambiguous objective perceptual features, and yet still create facially incoherent judgments. Thus we observe that the facially good sentence, “The man is in the room”, and the facially good sentence, “The man is not in the room”, are both inferred from the same stimuli; but then, by step (3), the two sentences become apparently not-so-good when united in reference to a single state of affairs. In this case, the “backwards-looking” aspect of the process involves our revisitation of the initial categorization-judgment, where we may feel obliged to retroactively invent or discover some sense in which the predicates differ over the case which was not facially at issue at the outset; and yet, nevertheless, we feel an enduring trace of the initial, objectively appropriate judgment. The result is a lingering sense of mixed proprieties.
Given these remarks, I would like to conclude that all inference must pretend to be good inference, and that the goodness of the inference lies in its inductive potential. That potential arises through the process of categorization, which is natively disposed to conform to certain epistemic values: economy, acceptability, and inductive integrity. The upshot is that, when we think about anomalous inferences — e.g., inferences concerning a Heraclitean landscape, where one cannot step in the same river twice, because nothing counts as the same river — we run the risk of losing our capacity to infer positive judgments. Where categorization fails for the purpose of integration that lasts, inference runs into rocky shoals.
Consequences for theories of inference
If that is essentially the right story to tell about inference, then we should expect to see it reflected in our cognitive habits. And, indeed, we do. At the first stage, we should see a preference for enduring patterns which can be held in working memory. At the second stage, we should see priming effects on choices in categorization during inference over ambiguous cases, rather than selection at chance. (Higgins 1985) And, at the third stage, we should expect to observe some pre-theoretic attraction to the policy of only admitting conservative extensions of a language — putting tonk to rest. (Belnap 1962)
Because this account of inference is procedural and backwards-looking, it allows for retroactive assignment of goodness to a passage from premises to conclusions as inference, even when that passage was merely associative on first glance. For that reason, it differs from the ‘taking condition’ account, though it accommodates that account as an imperfect or limiting case.
While all this talk about vagueness is geared towards inference in language, I do not mean to preclude the success condition or to exclude cases of unsophisticated inference. For I do not mean anything especially voluntaristic by the notion of a “practice”. I would just as well think of a practice as a set of interconnected natural tricks which occur at the sub-personal level. The important thing is to trace the all-other-things-equal legitimacy of inference to the integrity of a practice working in the background, social or not. For example, some neuroscientists explain the basic sense of agency through the temporal binding of prior and passing states. (Frith 2005) If some of those states are ‘belief-like’ and others ‘desire-like’, then this might satisfy Wright’s anti-akrasia norm, while also leaving room for unsophisticated cases of practical inference.
The more we appreciate that the integrity inherent in our capacity for categorization is a distinctively important mechanism which directly moderates our interpretation of the passage from one judgment to another, the less surprised we will be by the mixed proprieties of anomalous inferences. How we react instinctively to the Room Conundrum will depend on our comfort with the use of vague descriptions in a context where brute facts are apparently not at issue, and this may motivate different philosophical treatments.
Limits
The account has limits. For the capacity for integration would also seem to permit the inclusion of seductive fallacies as varieties of inference, and in that sense it is no better off than the success theory. So the conclusions are fairly modest: the capacity for integration is a necessary, but not sufficient, background condition when considering the registration of mixed proprieties. Still, at least some headway has been made in discovering one of the background capacities that presumptively lies behind the success of good inference.
Shared judgment and entitlement equality
Let us return to theoretical inference, and in particular, theories of implication. As creatures who participate in a shared linguistic community, we are capable of inference with propositions whose contents are in some sense public. In theoretical inference, unlike practical inference, the practice of “giving and taking of reasons” must be shareable. Hence, even when deliberating in good faith about questions concerning truth and matters of fact, we face a special burden: our reasoning must rest on informational common ground. (Clark 1996, 65-67) It follows that theoretical inference cannot require us to flout the trust between knowing agents, if that trust is understood as a risk-laden assumption of common ground. (Origgi 2004) We are required, in short, to adopt strategies for shared judgment.
One strategy would be to hold that we are obliged to use the same rulebook when ordering a passage of belief as the one we use in passages of assertion; that the standards of trustworthiness in deliberation with others are the same set used in introspection. We now refer to that view as entitlement equality: the thesis that the standards of evaluation that govern entitlements in assertion are identical to those that govern belief. (Hawthorne et al. 2016; Williamson 2000; Lackey 2007) On that view, the right theory of inference ought to apply to the goodness-making qualities of inferences in belief as well as in assertion. It is of special interest to us because, if entitlement equality is correct, and there is a prohibition on asserting ‘If P then Q, but I don’t believe it’, then that would be sufficient reason to suppose that the Moorean moratorium works as a general constraint.
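Schematically, and in an ad hoc operator notation of my own (nothing in the cited literature turns on it), the prohibited performance is

\[ \mathrm{Assert}\big(\varphi \wedge \neg\mathrm{Bel}(\varphi)\big), \qquad \text{where } \varphi = (P \rightarrow Q). \]

Entitlement equality says that whatever standards make the second conjunct a defect of assertion make it a defect of belief as well.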
The Ironic Intuitionist and the context problem
As a matter of fact, Hlobil explicitly holds that the mode of representation for a passage of judgment mediates our sense of the propriety of inference. (Hlobil 2014, 421) As an example, he notes that an intuitionist logician speaking to a classical logician may assert “Q therefore P” even while privately questioning the quality of the inference from Q to P. Call this the case of the “Ironic Intuitionist”. Here we might say that the case straddles two contexts of assessment — a context that evaluates the assertion in light of rules of entitlement that govern assertions, and a context that evaluates the assertion in light of rules that govern beliefs — and go on to say that, while the speech-context allows that the inference might be worthy of assent, the belief-context does not.
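For a concrete instance of the disputed passage (an illustration of my own choosing, not necessarily Hlobil’s), take double negation elimination, which the classical logician accepts and the intuitionist rejects:

\[ \neg\neg P \vdash_{C} P, \qquad\qquad \neg\neg P \nvdash_{I} P. \]

An intuitionist who asserts ‘¬¬P, therefore P’ for the benefit of a classical interlocutor, while privately withholding assent to the step, exhibits just this straddling of contexts.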
Still, there are two serious worries with appealing to context when carving up the debate. The first difficulty is that the notion of ‘context’ is hopelessly ambiguous, functioning as a kind of placeholder concept which lacks clear prior parameters of application. Left in such shape, it will look like an ad hoc stipulation which allows a defender of the Moorean prohibition to potentially evade attempts at falsification in practice.
The apparent ambiguity might be excusable if the Moorean phenomenon were so utterly immune to rational doubt as to make falsification pointless; perhaps if pre-theoretic intuitions universally abhorred all analogues of the Moorean paradox, and there were full agreement about what even counts as a Moorean paradox. But if there is a consensus of that kind, it does not extend to the published literature: for it is claimed that, on occasions like the Ironic Intuitionist, you can assert P without assenting to the corresponding belief. (Frances 2016) And, for what it is worth, I am not compelled to say that the Ironic Intuitionist case involves a falsidical paradox as opposed to a merely odd truth. So the moratorium might sometimes be less plausible than it first appeared, though it should not be inferred that it is any less paradoxical for all that.
The second problem is that, so long as context is treated as a placeholder concept, we might reasonably suspect that the interpretation of the context is doing most of the work when we put ourselves to the task of deciding what counts as an inference that is worthy of assent. In other words, it just may turn out that the very nature of inference is contextual: that to infer just is to infer in a context of assessment, whatever that means. And so the goodness of inference could be grounded in fiat.
Special theories of context as a solution
That worry could be mooted in this case if we were to articulate a special theory of context: that is, an instruction manual which would tell us when it is appropriate to interpret some situation in terms of belief or assertion. By this, I mean a studied elaboration of the kinds of principles that would map a conceptual model onto a context as a function of what kind of common ground is worthy of being assumed, along with an articulation of what salient cues in a situation would lend credit to some contextual interpretation or other. This would be a special and not general theory, since it would not try to explain all context choices, but only to account for (or dispense with) potential differences between contexts of assertion and belief in the Ironic Intuitionist case and its analogues.
Three kinds of disclosure rules
For present purposes, the task of a special theory of context is to articulate the nature of the background assumption(s) that relate successful inference in speech to successful inference in belief, such that the two do at least sometimes share conditions of assent. Here, we are not just looking to settle the question of entitlement (in)equality, but also to give an account which discloses when assent to inference in speech follows or departs from the rules that govern assent to inference in belief. Theories of context that have explanatory potential in those respects involve what we shall call ‘disclosure rules’.
At first blush, disclosure rules come in at least three varieties. The first two — the “assertion-first” and “belief-first” theories — are both inclined to support entitlement equality. A third, the content theory, is inclined to treat the question of entitlement equality on a case-by-case basis.
The assertion-first theory holds that inference is essentially directed towards the management of speech-performances. On that view, linguistic assent governs mental assent, so if classical logic turned out to be the norm of asserted inference, then it follows that any worthy passages of belief will have to follow those same rules. I expect such approaches could be described as theories of linguistic meaning, in some broad sense, in that they are tasked with giving a right or best answer about what counts as proper rules of inference in speech owing to cooperative norms in communication. On this view, good theoretical inference is assertable inference.
Most plausibly, this view would involve the corresponding view that beliefs are constitutively tailor-made to track assertions and linguistic judgments of that kind. Of course, whenever one talks about ‘constitutive norms’, there is a reasonable threat of stipulation working in the background. But so long as such theories wear their explanatory aspirations on their sleeves, there are no grounds for us to worry about arbitrariness. For instance, in my view, if the assertion-first theorist were to emphasize the fact that our repository of felt attitudes towards judgments goes far beyond beliefs, and that these are neglected by an obsessive focus on folk states like belief, then that would lend the assertion-first theorist some explanatory power.
By way of contrast, the belief-first theory holds that inference is essentially directed towards the management of beliefs in a way that is primarily sensitive to the orientation that the thinker has towards passages of thought. On this view, there are rules for appropriate rational judgment, and norms of assertion are either justified in terms of those norms of belief or are irrelevant to the question. In the Ironic Intuitionist’s case, the belief-first theorist countenances a level of complexity in their beliefs which might override conventional proprieties of assertion. For them, the best case scenario would be that the Moorean meta-inference at issue in the case is a falsidical paradox, owing to the intuitionist’s decision to assert the inference ironically. On this view, good theoretical inference is credence-preserving inference.
In the early modern era, this view was dominant; its point was to preserve warranted certitude. The view is less congenial to the modern ear. Still, in my view, the belief-first theory is at its best when it bases its convictions on the apparent primacy of first-person intentionality to representations, and argues that public rules of inference are a species of derived intentionality. Coordination and convention apply over representations, but the first bearer of representation is in cognition, so there is an understandable temptation to put belief first in an account.
A third theory is rooted in concerns over propositional content, and treats entitlement equality as a contingent matter. I will use my own felt intuitions as an illustration. It seems to me that, in natural language, the pro-sentential predicate (“truth”) is ambiguous between judgments directed towards the objective truth of a proposal and judgments geared towards the redundant expression of approbation towards that proposal. It is reasonable enough to think, with Tarski, that the two construals of the pro-sentential predicate involve different, incommensurable inferential models; e.g., treating acceptability in terms of paraconsistency, and objective truth in terms of classical logic. On those grounds, when I interpret an inference in accordance with one predicate, my sense of ‘what follows’, and ‘what inference is good’, will be different from when I interpret it in the guise of the other predicate. In other words, I permit myself to reason with different kinds of expressions by appealing to different inferential models, depending on whether I judge it more appropriate to interpret the pro-sentential predicate as objective truth or as mere approval. And, finally, assuming we do not demand the enforcement of an equality between belief and assertion (the assertion-first theory), we can tell a story about how the proper function of a belief is to track the inferential model implied in the approbationary predicate, while the proper function of assertion is generally to track the objective truth predicate. On this view, good theoretical inference is inference that is modulated in light of its venue.
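A minimal sketch of that incommensurability, for illustration only: classical logic validates explosion, while paraconsistent logics are characterized precisely by rejecting it,

\[ P, \neg P \vdash_{C} Q, \qquad\qquad P, \neg P \nvdash_{P} Q, \]

so an inference from a contradictory pair to an arbitrary conclusion counts as ‘good’ under the objective-truth construal of the predicate and as ‘bad’ under the approbationary construal, if the latter is modeled paraconsistently.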
(These are ideal-typical differences in theory, of course, and individual philosophers can mix and blend them together according to their own sense of the best story to tell. For instance, it is sometimes thought that classical logic is the logic of reality, as conceived by metaphysical realists, which is a contentful conviction; but it is equally often used to marshal support for an assertion-first theory.)
Now, on most occasions, the contents of the sentence imply conditions for assertion that are identical to the conditions for belief, preserving entitlement equality. But on some occasions, the appropriate management of beliefs looks as though it is different from the management of assertions, e.g., in Hlobil’s case of the Ironic Intuitionist, or Frances’s Ironic Nihilist. In these peculiar cases, the difference in management of such inferences will owe to different interpretations of the pro-sentential predicate in light of the contents of the proposal at issue, and whether or not the predicate suits the job of its vehicle for representation. So, on this view, whether I treat a token instance of the Moorean phenomenon as a strange truth or a falsidical paradox will depend on the ways that propositional contents interact with the vehicle in which the inference is represented.
Disclosure rules as strategies for social cognition
Those are only three possible candidates for ‘disclosure rules’. Each seems plausible to me, and I shall not argue for any particular rule-set on this occasion. Instead, my concern in this section has only been to illustrate the sorts of theories that purportedly make contexts of interpretation themselves worthy of assent, given that reasonable people can disagree about the best ways of going about contextual ascriptions, and hence hold differing opinions about entitlement (in)equality between belief and assertion.
What is conspicuous about all these accounts is that they rely on strategies for coordinating judgment, which is an especially sophisticated meta-cognitive feat. In the assertion-first theory, rationality is found in the giving and taking of reasons in a public space, which is projected upon our lives as individual knowers. In the belief-first theory, rationality is found in beliefs that properly follow from beliefs, and the proprieties of assertion depend upon the verdicts of the best mind in the room. In the content theory, the proprieties of assertion and belief depend upon the stakes attached to venues for inferring. Whether one story or another is correct is less relevant than the higher-order observation that each of them attempts to organize cognition using social coordination as a constraint. None of the theories operates in isolation from either social engagement or cognitive processing; there are no lurking mysteries, no unexplained explainers. The conditions for assent and the associated context description do not come for free, unless we mean to proceed by stipulation. And I do not see why we would want to, given the presence of other options.
The preceding remarks have two related consequences. First, assuming there are correct disclosure rules, those rules would assure us that it is at least sometimes irrational to assent to the assertion that an inference holds while you do not believe it holds. Second, any tacit endorsement of a contextual interpretation which is not motivated by disclosure rules is essentially defective in its treatment of inference, as demonstrated by the fact that many tokens of the Moorean paradox really are falsidical when taken out of the context that motivates them and supplies the relevant background information that we grasp as reasonable communicators.
It is important, for my purposes here, to note that my claim is relatively modest. I only claim that contextual ascriptions are partly mediated by disclosure rules which govern the relation between thought and talk. Indeed, one anonymous reviewer (rightly) argued, in response to this line of argument, that it is not clear why ‘thought’ and ‘talk’ contextual ascriptions are particularly relevant, since the urge to stipulate a difference in contexts will recur in any case where we would like to think something like “Not-not-P. Therefore, P. But double negation elimination is not valid,” irrespective of the mode of representation. It is my hope that all those other cases will either be explicable by appeal to cognitive capacities of some other sort, or be consigned to the dustbin. But my conclusions fall far short of a project with such a grand scope. What I have tried to do is provide a special theory of context, not a general one.
Objections
But perhaps I have it wrong.
Recall the tale of Achilles and the Tortoise, which we encountered earlier. The post-inferentialist could reasonably point out that the whole problem with the dialectic between the two protagonists is that Achilles bothered to provide the Tortoise with any reply in the first place. What the Tortoise failed to understand, on this view, is that once modus ponens has been invoked, and held as true, there is nothing left for us to explain about its capacity to bring about the conclusion. For the force of a rule of inference is nothing other than a decision imposed by fiat. Attempts at further explanation or argument in favor of those decisions are feckless. Hence, by trying to motivate the idea that inference has a rational force, the Tortoise has shown himself to be confused about the nature of inference. And since I am asking for further arguments, I must share Achilles’s fate.
Moreover, the critic might point out that the answer we give to the Tortoise is, and must be, very much like tonk. I can imagine them reasoning as follows. One cannot explain the success of modus ponens by appealing to still more iterations of modus ponens; that way lies disaster. What one must do is put one’s foot down and say that, at the most basic level of abstraction, nothing further can be said about the force of “(A) If P then Q; (B) P; therefore, (C) Q” apart from a stipulation that certain patterns of expression shall take the form of “A, therefore C”. On that view, failure to abide by the stipulation implies a failure to comprehend that, at the base level, inference is a raw imperative. As with the drill sergeant, in matters of logic, ours is not to question why.
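The regress the critic has in mind runs, in Carroll’s own compressed form, as follows: the Tortoise grants (A) and (B), but demands that the licensing conditional be added as a further premise before accepting (C), and then again, without end:

\[ (D)\ (A \wedge B) \rightarrow C, \qquad (E)\ (A \wedge B \wedge D) \rightarrow C, \qquad (F)\ (A \wedge B \wedge D \wedge E) \rightarrow C, \ \ldots \]

At no finite stage does the growing stock of premises compel assent to (C), which is why the critic concludes that the force of ‘therefore’ must be stipulated rather than derived.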
Now, I agree that this critical response probably looks attractive at first blush in the case of Achilles and the Tortoise, since the parable concerned commonplace inferences, not anomalous ones. But I caution against using this explanatory strategy for a wholesale account of the basing relation. For while I consent to the view that the Tortoise is fundamentally guilty of misunderstanding something important about the nature of inference, and that the misunderstanding involves a failure to take ‘therefore’ as a decision or commitment to allow a passage from premises to conclusions, I do not agree that his misunderstanding amounts to a mere failure to apprehend the underlying stipulation. For, if we consider the anomalous inferences discussed above, we notice that while they fit the stipulated form of “A, therefore C”, it is still entirely rational to ask why.
I expect that everyone I have discussed above would agree on this point. For all hands seem to think that inference functions against background presuppositions about the nature of the basing relation in the relevant cases. (Boghossian 2014, 7) Fine enough. My constructive suggestion has been that, in the discussion of inferences in general, and anomalous inferences in particular, this basing relation requires background standards of judgment concerning the proportions of propriety in a passage of thought, along with background standards of judgment concerning the features of appropriate context attribution. Only after holding to these presuppositions is it rational for us to apprehend some passages of thought as instances of “A, therefore C”, and as conditionally worthy of assent.
To be sure, the post-inferentialist could artificially force the issue by asking what accounts for the relationship between integrity and disclosure, on the one hand, and the basing relation between passages of judgment, on the other. Their bet is that, in asking such questions, we will never arrive at answers that are good enough to be worthy of assent. But I would offer a different wager: when these Tortoisian questions are asked in intellectual bad faith, the cumulative effect of successive iterations of the same misunderstanding is not to disprove that the presuppositions of integrity and disclosure are operating to establish the conclusion, but only to suggest that the Tortoise has decided to stop inferring. The regress ends, not because it is impossible or irrational to ask further questions, but because we can no longer take advantage of the assumption that we have means of discovering the conditions of assent through charitable mental state attribution. Thus inference takes a holiday.
Conclusion
A successful theory of the nature of inference should resolve the sortal problem by way of a strategic examination of the kinds problem. Anomalous inferences — reasoning with borderline cases, reasoning with the Moorean phenomenon — are and ought to be central to our examination of the kinds problem. For we might expect a controversial area of inference to generate relevant falsifiers for the goodness-making theory, and it is part of our job to interrogate the most promising account. A theory of inference should also be able to sharply distinguish anomalous inference from fully bad passages of thought. So, in the course of analysis, it seems antecedently reasonable to suppose that ‘tonk’, Heraclitus’s river, and Carroll’s parable should be regarded as involving processes so inappropriate as not to qualify as inferences at all. Ideally, a theory of the nature of inference should also explain what is bad about the seductive fallacies, though I have not done so in the preceding argument.
Whether you agree with my examination or not, I would like to press the following methodological point. In carrying out a study of what makes inference worthy of assent, we cannot treat the “goodness” of inference as a mere platitude to be explained — that is not a reasonable way to proceed. Rather, if the account is to be substantial, then the goodness of inference must itself be put through its paces. Hence, if we take it for granted that Moorean paradoxes are essentially defective, we might miss the fact that anomalous inferences have mixed proprieties, which is a face-value challenge to the goodness-making account. My aim in the above essay has not been to deny the existence of the various paradoxes. My task has been only to deepen our understanding of what inferences with these paradoxes tell us about the nature of inference itself, with particular accent on the ways they function in cognition.
In this argument I relied on explanations of the apparent goodness of inference which bottom out in cognitive processes. I appealed to two kinds of background capacities which give rise to the assumptions that make it possible to evaluate some complex passage of judgments. Those capacities were the capacity for integrated categorization and the capacity for strategic orientation towards shared judgment. Together, these skills mediate and complicate our sense of assent by generating a weighted judgment of proportionality in cases of divergent assent, i.e., cases of mixed proprieties. To be sure, there could be other explanations based on non-normative metaphysical postulates, cashing out the proprieties of inference in terms of reliable causal chains in information systems. I do not wish to exclude those theories; indeed, as I stated at the outset, I sympathize with them, and think they have been unfairly dismissed. But I have assumed they are not correct for the purposes of this essay, in order to sharpen the focus of an already lengthy and knotted analysis.
Here is the point, whittled down to a nub. At the outset, I assumed that inference aims at error-mitigation. So, at the end of the day, we have a pat formula. Three conditions — error-mitigation, integrative categorization, and strategic placement in common ground — are necessary for an explanation of theoretical inference; error-mitigation and integrative categorization are necessary for a general account of inference. Given these facts, there is sufficient reason to think that we might arrive at a genuine explanation of the nature of inference beyond appeals to stipulation. Still, more work needs to be done to complete that project, and this account has only gone part of the way.
References

Bealer, G. (1998). “Intuition and the Autonomy of Philosophy.” In M. DePaul & W. Ramsey (eds.), Rethinking Intuition: The Psychology of Intuition and its Role in Philosophical Inquiry. New York: Rowman & Littlefield.
Belnap, N. (1962). “Tonk, Plonk and Plink.” Analysis. 22(6).
Boghossian, P.A. (2014). “What is inference?” Philosophical Studies. 169(1). doi:10.1007/s11098-012-9903-x.
Broome, J. (2014). “Comments on Boghossian.” Philosophical Studies. 169(1). doi:10.1007/s11098-012-9894-7.
Bruner, J., J. Goodnow, and G. Austin. (1956). “The Process of Concept Attainment.” In A Study of Thinking. Transaction Publishers.
Carroll, L. (1895). “What the Tortoise said to Achilles.” Mind. 4(14).
Clark, H.H. (1996). Using Language. Cambridge: Cambridge University Press.
Cook, R.T. (2005). “What’s Wrong With Tonk?” Journal of Philosophical Logic. 34(2).
Croft, W. & D.A. Cruse. (2004). Cognitive Linguistics. Cambridge: Cambridge University Press.
Cruse, D.A. (2000). “Aspects of the micro-structure of word meaning.” In Y. Ravin & C. Leacock (eds.), Polysemy: Theoretical and Computational Approaches. Oxford: Oxford University Press. pp. 30–51.
Dretske, F. (1986). “Misrepresentation.” In R. Bogdan (ed.), Belief: Form, Content, and Function. Oxford: Oxford University Press.
Dummett, M. (1991). The Logical Basis of Metaphysics. Cambridge: Harvard University Press.
Dutilh Novaes, C. (2015). “A Dialogical, Multi-Agent Account of the Normativity of Logic.” Dialectica. 69(4).
Eliasmith, C., T.C. Stewart, X. Choo, T. Bekolay, T. DeWolf, Y. Tang, and D. Rasmussen. (2012). “A Large-Scale Model of the Functioning Brain.” Science. 338(6111).
Frances, B. (2016). “Rationally held ‘P, but I fully believe ~P and I am not equivocating’.” Philosophical Studies. 173(2).
Fricker, M. (2007). Epistemic Injustice: Power & the Ethics of Knowing. New York: Oxford University Press.
Frith, C. (2005). “The self in action: Lessons from delusions of control.” Consciousness and Cognition. 14.
Goodman, N. (1979). Fact, Fiction, and Forecast. Cambridge: Harvard University Press.
Graff, D. (2000). “Shifting sands: an interest-relative theory of vagueness.” Philosophical Topics. 28: 45–81.
Grice, H. (1975). “Logic and conversation.” In P. Cole et al. (eds.), Syntax and Semantics 3: Speech Acts. New York: Academic Press.
Haack, S. (1996). “Concern for Truth: What it Means, Why it Matters.” In P.R. Gross, N. Levitt, and M.W. Lewis (eds.), The Flight from Science and Reason. New York: New York Academy of Sciences.
Harman, G. (1986). Change in View. Cambridge: MIT Press.
Hawthorne, J., D. Rothschild, and L. Spectre. (2016). “Belief is weak.” Philosophical Studies. 173(5). doi:10.1007/s11098-015-0553-7.
Heraclitus. (2013). The Fragments of Heraclitus. Trans. G.T.W. Patrick. Digireads (Kindle edition).
Higgins, E.T., J.A. Bargh, and W. Lombardi. (1985). “Nature of Priming Effects on Categorization.” Journal of Experimental Psychology: Learning, Memory, and Cognition. 11(1).
Hlobil, U. (2014). “Against Boghossian, Wright and Broome on inference.” Philosophical Studies. 167(2). doi:10.1007/s11098-013-0104-z.
Horwich, P. (1998). Truth. Oxford: Oxford University Press.
Kahneman, D. (2013). Thinking, Fast and Slow. New York: Anchor Books.
Keefner, A. (2016). “Corvids infer the mental states of conspecifics.” Biology & Philosophy. 31(2).
Koziolek, N. (2017). “Inferring as a Way of Knowing.” Synthese. Pre-print.
Lackey, J. (2007). “Norms of assertion.” Noûs. 41: 594–626.
Machery, E. (2009). Doing Without Concepts. Oxford: Oxford University Press.
Margolis, E. & S. Laurence. (1999). Concepts: Core Readings. Cambridge: MIT Press.
McHugh, C. & J. Way. (2016). “Against the Taking Condition.” Philosophical Issues. 26.
Millgram, E. (2009). Hard Truths. Malden: John Wiley.
Millikan, R.G. (1989). “Biosemantics.” Journal of Philosophy. 86.
Moore, G.E. (1962). Commonplace Book 1919–1953. London: Routledge.
Nagel, J. (2012). “Intuitions and Experiments: A Defense of Case-Method in Epistemology.” Philosophy and Phenomenological Research. 85(3).
Origgi, G. (2004). “Is Trust an Epistemological Notion?” Episteme. 1(1).
Prior, A.N. (1960). “The Runabout Inference-Ticket.” Analysis. 21(2).
Quine, W.V.O. (1976). “The Ways of Paradox.” In The Ways of Paradox and Other Essays. Cambridge: Harvard University Press.
—— (1981). “What Price Bivalence?” In Theories and Things. Cambridge: Harvard University Press.
Raffman, D. (2014). Unruly Words. Oxford: Oxford University Press.
Russell, G. (2017). “An Introduction to Logical Nihilism.” In H. Leitgeb, I. Niiniluoto, P. Seppälä & E. Sober (eds.), Logic, Methodology and Philosophy of Science – Proceedings of the 15th International Congress. College Publications.
Saul, J. (2017). “Are generics especially pernicious?” Inquiry. doi:10.1080/0020174X.2017.1285995.
Shoemaker, S. (1994). “Moore’s Paradox and Self-Knowledge.” Philosophical Studies. 77.
Wedgwood, R. (2011). “Justified inference.” Synthese. 189.
Williamson, T. (2000). Knowledge and its Limits. Oxford: Oxford University Press.
Wittgenstein, L. (1958). Philosophical Investigations. Oxford: Basil Blackwell.
Woods, J. (1965). “Paradoxical assertion.” Australasian Journal of Philosophy. 43(1). doi:10.1080/00048406512341011.
Wright, C. (2014). “Comment on Paul Boghossian, ‘The nature of inference’.” Philosophical Studies. 169(1). doi:10.1007/s11098-012-9892-9.
PS: Although I believe that the topic of this paper is of intrinsic interest, I considered it worth laboring over in some detail for a couple of instrumental reasons. It is my view, generally, that the value of ‘integrity’ in the philosophy of law, proposed by Ronald Dworkin as a political value, is in fact an epistemic value that applies to inference during the process of categorization. Hence, it is my view that an attenuated version of his ‘integrity theory’ probably applies much more broadly than he suspected, and to a diverse array of societal contexts. Moreover, I think Dworkin hit on a deep insight in his interpretivist conception of rules, and this paper is a partial expression of what I find attractive in his account. Relatedly, I think parallels can be drawn between the command theory of law and the post-inferential theory of inference. When I contest the post-inferential theory, I draw from a reservoir of criticism ordinarily directed at the command theory. Both, after all, have fiat in common.
Moreover, it is my view that much of what we call intellectual good faith can be understood in terms of the values of integrity and candor, and that those values are all but necessary for thinking about the nature of reasonable theoretical disagreements. However, that conviction is unimpressive and uninteresting unless it can be motivated by a deeper explanation of some kind. To me, cognition seems like the most plausible place to find a viable account.
That having been said, none of those considerations ought to be brought to bear in the evaluation of the argument above.