Innate games and constitutive norms

It’s absurd to say that the game of soccer is innate. Why? Because it’s silly to think that the information encoded in our genes gives expression, on minimal triggering, to phylogenetic traits that track the complex set of rules that make up ‘soccer’. Similarly, it is absurd to talk about most games as innate — chess, badminton, Uno, and so on.

Indeed, you’d expect this point to apply to all games. But maybe it doesn’t. For here’s a proposal: a game isn’t much more than a set of playable tricks. And some tricks are, plausibly, innate under some general description. An example: when my dog plays catch, the ‘catch-and-return’ instinct seems like an innate trick, because it comes too quickly and too easily to too many dogs with a similar genetic makeup. Furthermore, the trick itself is pretty much all there is to say about the rules of the game.

I’m cheating a little. Granted, the particular manifestation of the game that my dog (Sammy) plays cannot be reduced to its natural components. Typically, the game he plays is best captured under a richer description — “Catch the Monkeyman”, owing to the fact that his chew toy was (in better days) vaguely monkey-man-shaped. And of course it would be weird to attribute to him a monkey-man-toy-responsive trait, given that I’ve seen other dogs play a similar game of catch without the need for monkeymen. Still, if you fudge the edges of the example, it looks like catch-and-return is a case of a game that is innate for the species.

That doesn’t mean that all games are innate. Presumably, few are. What is interesting to me is that there is a predictable structure to games, as many of our games correspond to assemblages of these favorite natural tricks. Moreover, the rich description of a game probably far exceeds what you would get if you cobbled together all the natural tricks it takes to play it, in the same way that the “Monkeyman” description exceeds the catch-and-return game.

That said, if you could describe the essential or enduring structure of a game in terms of its natural tricks, you might have a stronger basis for talking about which norms are truly constitutive of the game. So, e.g., despite its name, “Catch the Monkeyman” is not really about the Monkeyman. Similarly — shifting examples to one that is more philosophically interesting — if we want to talk about truth as the constitutive norm of the game of assertion, we should be ready to talk about a truth-directed representational trick in our minds, one that provides structure to the activity.

Why can’t I will a desire?

In this quick post, I’ll try to answer the question: why can’t we always form our desires at will and on command? So, for example, why is it that I can’t will myself into wanting to exercise, or wanting to grade a stack of papers, or wanting to apply for that job at the box factory? After some brief deductive navel-gazing, I’ll suggest that it might be possible to desire on command, though only if the agent has unsophisticated beliefs about their own agency.

Following the Davidsonian tradition, I’ll suppose that reasons are belief-desire pairs, and also that an intention is a reason for action with an appropriate causal role in producing the action. (Maybe this is the wrong account of action in general, because folk psychology sucks, etc. But for the sake of the present argument, I’ll assume it’s good enough for clear cases, and allow that a fuller account can be presented with a richer functional/causal vocabulary.) From this model, it follows that, for me to be able to will a desire, I’ll have to desire the desire, and believe I can effect the first-order desire.

Suppose we had, as a general capacity, the power to effect desires in this way; what would the world look like? Well, akrasia would be impossible. Any time you failed to do a thing, the failure would owe to your failure to want to do the thing, and that forbearance is not a weakness of will so much as a willful rejection of a live option. This is not our world, since akrasia does exist. So we do not have that general capacity.

But why not? What’s the holdup?

Assuming the Davidsonian model, there are three potential points of failure. Either (a) desires are not effective in making desires, (b) beliefs are not effective in making desires, or (c) there’s something about the relation between beliefs and desires that is not effective in making desires.

The failure to generate desire does not issue from the fact that genuine second-order desires cannot effect first-order desires. We fall in and out of love with our enthusiasms all the time, e.g., through emotion work and gratitude. It is both possible and routine for us to voluntarily adjust the intensity of a desire by considering its relation to previously existing desires. And this is a special case of being able to will a desire, just in case willing is a belief-desire pair, which we have assumed it is. Sometimes, a second-order desire is indeed sufficient to sustain a first-order one.

The failure to generate desire is, at least at first glance, not a function of a problem in the causal effectiveness of belief-desire pairings. After all, by hypothesis, all intentions involve such pairings, and so must have the potential for action-success. There is, perhaps, something special about the case of willing a desire that prevents it from being willed. But it is unclear to me how I could better understand those limitations just by looking at the nature of reasons. If it were obvious, the question never would have come up in the first place.

So, since second-order desires sometimes do compel first-order desires, the obstacle must be found in the causal effectiveness of beliefs, either when treated as standalone mental happenings, or in their contribution to reasons.

What distinguishes belief from other representations with a mind-to-world direction of fit is that it is truth-apt. And non-truth-apt representations of the world — e.g., intuitions — really do have causal efficacy in making desires. Hence the difficulty that many of us actual humans seem to have in distinguishing the cognitive contents of intuitions from those of gut feelings.

So, one might think that the problem is that belief is oriented towards truth, as truth-directedness is not fit for ordering our sentiments. Why might that be? Well, I think truth has at least two relevant features: (a) it indicates that the sentence has a referent, and (b) its claims are ostensibly built to last. As for (a), I see no issue in treating desires as referents; we think about desires as objects of propositions all the time. As for (b), maybe the problem is in the presumption of standing. That is, if we were able to will a desire, we would have to believe in our ability to effect a desire, and so judge that our ability to effect a desire is built to last. But if we ever suffer from akrasia, and remember it, then those judgments are unlikely to survive much scrutiny.

Suppose that account is correct. The upshot is that it might be generally possible to will a desire, but one has to have extremely unsophisticated (or deluded) beliefs about one’s own self-mastery to do it.

Dialectic and rational arguments in philosophy

Socratic dialogue is modeled on dialectic, and for that reason it is a central part of Western philosophy. In the previous post, I pointed out that, historically speaking, dialectic contrasts with three other argumentative styles — rhetoric, scholasticism, and mathematics. Unlike rhetoric, dialectic is not about persuasion for its own sake, but about the pursuit of stable conclusions (as we saw in selections from both Gorgias and Phaedo). Unlike scholasticism, the dialectician attempts to resolve disputes through engagement (i.e., the method of disputation), not through deference to written authority in the form of scripture. And unlike mathematics, dialectic investigates the worthiness of its premises (i.e., what I called the ‘collapse-and-consequence’ model), instead of treating premises as axiomatic.

Last time, I suggested that these three historical contrasts help to home in on a particular feature of the concept of dialectic, which is that dialectic is a form of second-order rational persuasion. I suggested that the constitutive point of dialectic is to convince people that some passages of thought or speech are rational, and to resolve disputes in that minimal sense of creating directed change towards a state of intellectual common ground. I called this ‘persuasionism’. A vital part of the persuasionist thesis is the idea that dialectical arguments occur in contexts where they are directed towards change in mental state (what Gilbert Harman calls a “change in view”), leading to resolution of dissonance. I argued that the persuasionist theory is superior to the purity thesis, i.e., the view that the collapse-and-consequence model is sufficient to characterize dialectic, and that no reference to effective perspective change is strictly necessary.

The persuasionist thesis says that dialectic involves a directed change in view accomplished by means of demonstrating the rational defensibility of a passage of thoughts in light of potential challenges. One might wonder whether demonstrating the defensibility of some train of thought actually counts as “persuasion”. But a moment’s reflection shows it clearly does. As a matter of definition, to persuade just is to cause someone to believe or act in some directed fashion that they did not before. When you subject a set of reasons to potential objections, you leave the set of reasons altered — stronger, if all goes well for the defender of those reasons. This means that in the process of demonstrating defensibility, you have produced a change in view about the status of the arguments: they are more reasonable than they seemed at the outset, all other things equal. And my suggestion is that this sort of directed change is not an accident or an irrelevant side-effect, but rather part of the dialectician’s stance of attempting to direct a change in view during the course of presenting an argument. Notably, though, it is an attempt at mutual persuasion between defender and opponent; that is to say, it is a joint enterprise with reciprocal expectations. Hence, when the dialectician fails to persuade their good-faith interlocutor of the rational qualities of their passage of thought, they thereby gain some reason to regard those passages of thought as irrational under some description.

In the rest of this post, I provide reasons to think that persuasionism makes the most sense of dialectic in philosophy. First, I’ll make a brief remark on the consequences of persuasionism for meta-philosophy. I suggest, briefly, that persuasionism is conducive to productive philosophy. (Indeed, I think it is even more conducive than the purist’s alternative, which I think is worse than sophistry; but I will not argue this point in this post.) Second, I will consider some attempted refutations, based on the idea that I am excluding some kinds of argument as examples of dialectic.

1. On meta-philosophy. When I say that dialectic is not just an autodidactic exercise of getting ideas clear in isolation — of studying logical implications and entailments, or (Harman again) “what follows from what” — my emphasis is on the word “just”. Dialectic involves the study of such entailments, but is not reducible to that study. I offer two reasons. First, as we have seen in the previous post, Socrates himself thought he was attempting rational persuasion. Indeed, one of the characteristic tropes of Socratic argument is his willingness to throw the whole game away, if only a good answer can be given to a master question (which he then shows cannot be done).

But second, even in a parallel world where our Hellenic heroes thought they were just making ideas clear independently of their audience’s convictions, it is still a fact that people can do a lot of things with all sorts of side-effects, and some of those side-effects might actually be the thing that makes the activity essentially worth doing. Sometimes, a practice has a function, and that function occurs independently of the ways the practice is conceived; it has to be recovered, instead, by examination of intuitively valenced presuppositions. And that fact makes it possible to engage critically with the tropes in Socratic dialogues, to separate the stuff Socrates thought he was doing well from the stuff that he actually was doing well. Which is just to say that contemporary critical thinkers could probably do without Socrates’s leading questions, for example, or Plato’s noble lies, even if for whatever reason Plato and Socrates in our parallel world had decided these ideas were essential parts of their whole philosophical package. Revisionism is the price we sometimes pay for rational reconstruction.

2. On excluded cases. Most of this post derives from a spat I had with the author over at the Siris blog, who seems to be a purity theorist. In our exchange, he argued that the persuasionist view of dialectic excludes a few cases of rational argumentation. (1) It seems to exclude cases where we apply the collapse-and-consequence model through habit. (2) It seems to exclude practice arguments, e.g., as when the student makes use of natural deduction. (3) It seems to exclude cases involving a stimulating exchange of reasons for exploratory purposes. But these examples are not on equal footing. My view is that (1) is not an argument at all, (2) is rational argument but not dialectic, and (3) is an unobvious kind of dialectic.

Habitual processing. I reject the idea that arguments are, or can be, merely habitual passages of thought. For a person to suggest that habitual passages of thought are not directed at change in view is for that person to fail to attend to the internal point of view, and in particular to neglect the intuitive force of argumentation. Intuitively, there seems to be a difference between mere regularities and rules, and rational arguments are about rules, so regular habits of thought are not themselves arguments.

The point can be made in part by appealing to the philosopher’s ego. If merely habitual orderings of thought counted as philosophical arguments — if it were even possible to fold the quick turnabouts of the collapse-and-consequence model into habits — then philosophy would turn into something even worse than sophistry. Indeed, it would collapse the study of rational argument into the study of the psychology of reliable heuristics, or the study of computational processing. It is a rare philosopher who is eager to make themselves Turing-compatible in this way.

Perhaps the purity theorist would consider it a strength of their view that they think they can rationally argue as a function of personal habits. And, indeed, much of logic feels habitual or schematic, once it is mastered. And if they could get away with that, then to be sure, “persuasion” would drop out of the analysis. But the only *rational* way you can get away with the habitualist’s conviction is by finding some independent means of calibrating your passages of thought by placing them into an orderly rule-like quasi-sentential (propositional or imperative) structure. And it is difficult to see how habits or mere regularities could have that rule-like character — a man who “argues” with himself habitually is not engaged in inference, hence not arguing rationally at all. In that sense, the approach from habits is going to founder on the question, “What makes this rational?”, and one does not even have to be a persuasionist to suspect that it is a mistake. But even if we come up with an adequate causal account of rules (as, indeed, we might), there remains the requirement of accounting for the ‘following’ part of ‘rule-following’, which is an intentional activity that seemingly requires both identification of rules and calibration of them.

Practice arguments. A different argument proceeds by observing that, when we are doing proofs in natural deduction, we aren’t trying to persuade anyone of anything. From given premises, we are set the task of deriving conclusions. Sure enough, this does not look like rational persuasion.

In this case, I think it would be useful to remember that philosophical argument is not all dialectic. The geometric or analytical method, of deriving consequences from axioms, is one method in philosophy, though it is not a Socratic method. So, one might insist (correctly) that the geometric method has got all the bells and whistles of a rational methodology, and that this is being ignored in a conversation about dialectic. And then one might notice that practice arguments have the form of analytical arguments.

This argument has my blessing, though it is not of first importance in a conversation which is meant to be about the merits of rational argument as it has been conceived through the Socratic approach. It also reminds us that the presumptive dichotomy between dialectic and rhetoric is a false one. The mathematician is not just doing rhetoric.

Bullshit sessions. The author of Siris also asserts, plausibly, that the persuasionist view of argument seems to make no sense of ‘stimulating thought’ exchanges, where the aim is apparently to open oneself to exchange, not to create a directed change. I agree these contexts are not obvious attempts at rational persuasion; it is easier to say that they are attempts to explore the space of reasons. In bullshit sessions, for example, rational people can take on points of view “for the sake of argument”.

But appearances are deceiving, because the difference has to do with whether or not the attempts at change are built to last. I submit that in these cases, participants are attempting to persuade others that it is rational to regard some perspective as appropriate in a context, not that it is rational to hold that the positions are true. The attempt is still to show that, in a contest of reasons, one comes out stronger, even if the contest is local, and comes to an end when the sun goes down. So these cases still fit with the persuasionist model of dialectic.

Modeling the concept of genocide

This month I’ve talked a little about conceptual spaces, and a little about genocide, and a little about law and non-classical categories. Now I would like to tie the strings together by showing what use computer models might have in relation to those subjects.

This past week I have been graphing the concept of genocide for the sake of demonstrating the potential appeal of the conceptual spaces paradigm. The hope is to find some way of capturing the information a person processes that underlies their judgments about how to categorize episodes of genocide, in the absence of classical category structures imposed by definitional fiat. From the jurist’s point of view, looking at concepts in this way is legally obtuse, and hence of at best indirect importance to a court — which, of course, it is. On the other hand, if the conceptual spaces paradigm is a worthwhile attempt to describe psychological processing, it is of great importance to a people. And since virtually everybody in the history of the philosophy of law believes that law is only valid law when promulgated, and promulgation presupposes a shared conceptual inventory… well, you get the idea.

In the previous post I took a look at Paul Boghossian’s (2011) critique of the concept of genocide. (I could have chosen any number of scholarly critiques of genocide to focus on — e.g., R. J. Rummel — but settled on Boghossian’s paper for the prosaic reason that it is available for free on academia.edu.) Boghossian offered a few cases which seem to intuitively challenge the classical conception — the case of targeted warfare (Dresden), an imagined case of gendercide, and Stalin’s dekulakization. I take it that his remarks are not proposed in an effort to undermine the UN’s 1948 Convention on the Prevention and Punishment of the Crime of Genocide, but rather to complicate and enrich it by making its intrinsic motivations more defensible.

Fig.1. Venn diagram: classic boundary structure.

The classical concept of genocide looks something like the Venn diagram in Fig.1. Put succinctly, genocide is the use of atrocious means, against protected populations, with the intrinsic end of destroying at least some of that population (i.e., destruction of the group is an end-in-itself). These strict criteria tell us what the international court would have to say about Boghossian’s cases: that dekulakization and gendercide don’t count (economic classes and genders are not protected populations). Meanwhile, Dresden and Nagasaki are borderline cases, depending on the intentions of the Allies in charge of the war. But a reasonable person might wonder whether the underlying legislation is a result of political expediency and moral complicity as opposed to the strict and merciless requirements of justice.

To get a better sense of the psychological lay of the land, I decided to create a model of the conceptual space of genocide. The really wonky methods I used are discussed in the next section. For now, I’ll just discuss a few interesting implications from what I found.
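To give a rough flavor of what such a model involves before getting there, here is a minimal sketch in Python. It is purely illustrative: the feature names, episode labels, and values below are invented placeholders, not the codings my actual model uses.

```python
# A purely illustrative sketch: episodes encoded as points in a feature
# space, with pairwise "distances" between them. All names and values
# here are invented placeholders, not the codings used in the real model.
import math

FEATURES = ["atrocious_means", "protected_group", "intent_as_such", "wartime"]

episodes = {
    "episode_A": [1.0, 1.0, 1.0, 0.0],
    "episode_B": [0.8, 0.9, 0.4, 0.0],
    "episode_C": [0.6, 1.0, 0.2, 1.0],
}

def distance(p, q):
    """Euclidean distance between two episodes in the conceptual space."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Episodes separated by small distances occupy the same region of the space.
for name_1 in sorted(episodes):
    for name_2 in sorted(episodes):
        if name_1 < name_2:
            d = distance(episodes[name_1], episodes[name_2])
            print(f"{name_1} <-> {name_2}: {d:.2f}")
```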

Fig.2. Gephi: the concept rendered as a network. “Distances” are approximated by color groupings.

One potentially interesting result that I keep running into, at least for the latest iteration of the model, is that American slavery occupies a space relatively close to the Holocaust (see Fig.2). This happens even though no direct analytical links force the two together, and it was not an effect I was expecting. Compare that to the classic categorization pictured in the Venn diagram (Fig.1), where slavery is treated as a definite non-case.

This might be worth noting, I think, because if the spatial analysis has any probative worth, then it might be an interesting part of a roundabout explanation of America’s long-standing hesitation to intervene in episodes of genocide worldwide, discussed by Samantha Power. You can tell a story where the American civil war places the country on awkward footing with the idea of genocide, because slavery and genocide share the same conceptual space, though they are not technically part of the same legal category.

Fig.3. VOSViewer: genocide as a spatial concept.

But I should place emphasis on ‘if-then’. The use of the model is questionable, and depends on what you think of the methods behind the model. If you are interested in those, keep reading. Still, even if we think the model has little probative value, I would be satisfied to see more conversation in philosophy about the usefulness of conceptual spaces when thinking about how concepts and categories are received.


Notes on the concept of genocide


Remembrance Day has come and gone. I spent it in an Armory, listening to my parents’ choir sing renditions of “In Flanders Fields”, Handel, and so on. All the hits, basically.

Samantha Power’s “A Problem from Hell” (2002) is a history of the concept of genocide. She argues that the American government’s default attitude to genocide is ambivalence.* Even if you disagree with her assessment of American foreign policy, the book is a lucid and useful volume just for the sake of understanding the imperfect legacy of the idea.

In international law, genocide is any act which involves (a) use of at least some atrocious means, (b) against protected groups as such, with (c) the intent to eliminate at least part of those groups. The atrocities in question include: killing, serious bodily or mental harm, deliberately undermining conditions of life (e.g., ghettoization), forced sterilization, and forcible transfer of children. The protected groups are “national, ethnical, racial, or religious”, and to target these groups ‘as such’ is to treat their destruction as a worthy end in itself and not just a means to a further end. Notably, this definition applies even when the aggressor is the ruler or sovereign over the targeted peoples, and it applies during wartime.
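Since this definition is classical in form (a conjunction of individually necessary conditions), it can be rendered as a strict boolean test. Here is a minimal sketch of that structure; the encodings and names are mine, for illustration only.

```python
# A minimal sketch of the classical category structure of the legal
# definition: genocide as a strict conjunction of (a) atrocious means,
# (b) a protected group targeted "as such", and (c) eliminationist intent.
# Encodings and names are illustrative, not the Convention's own wording.
ATROCIOUS_MEANS = {
    "killing", "serious bodily or mental harm",
    "deliberately undermining conditions of life",
    "forced sterilization", "forcible transfer of children",
}
PROTECTED_GROUPS = {"national", "ethnical", "racial", "religious"}

def is_genocide(means: set, group: str, as_such: bool, intent_in_part: bool) -> bool:
    """Classical boundaries: every criterion is individually necessary."""
    return (
        bool(means & ATROCIOUS_MEANS)   # (a) at least one atrocious means
        and group in PROTECTED_GROUPS   # (b) a protected group...
        and as_such                     # ...targeted 'as such'
        and intent_in_part              # (c) intent to eliminate at least part
    )

# Dekulakization fails (b): an economic class is not a protected group.
print(is_genocide({"killing"}, "economic class", True, True))  # False
```

On this rendering, the boundary judgments discussed below fall out mechanically from the three criteria.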

In this conceptual space, the Holocaust of the Second World War is the prototype of genocide, since that episode involved all of the atrocious means (killing, torture, sterilization, etc.) and was perpetrated against the protected groups as such. During the course of Power’s recounting, we learn of other definite exemplars of genocide in the 20th century — the Armenian genocide by the Turks, the Khmer Rouge’s assault on urban centers in Cambodia, Iraq’s use of chemical weapons against the Kurds, the massacre of Muslims in Bosnia and of Tutsis in Rwanda, and so on.

Though Power does not discuss this, it is noteworthy that the Canadian residential schools program was genocide. During that decades-long institutional crime against humanity, persons of Indigenous descent were sterilized and their children were forcibly relocated, notably during the period known as the “Sixties Scoop”. It has been alleged that episodes related to this program occurred as late as 2017. To be sure, it is not a prototype in the region of conceptual space marked out by “genocide”, but it is a definite case.

**

For some Canadians this may be too much to take in. Nobody wants to be complicit in genocide, so denial of the facts is one strategy. However, there might also be some problems with our grasp of the concept itself, which get in the way of its acceptance. That is, there might be features of the definition that are hard to deploy in cognition, because our usage fails to meet the virtues of a well-behaved categorization.

So, for instance: some time ago, Paul Boghossian suggested that the concept of genocide was irremediably defective. His arguments are reasonable. But is he right to suggest that the concept of genocide is especially hard to parse?

I must confess that not all of his arguments struck me as decisive. (1) The law requires actions that are intended to eliminate at least part of a protected group, and this “in part” clause is vague to the point of ambiguity. Boghossian argues that this is a major defect. But, for one thing, as many philosophers of law will tell you, that is one of the ambiguities that is strategic to lawmaking, as it affords a legal culture the opportunity to deliberate on the moral, political, and common-sense features of a non-obvious question in the mereology of social ontology. (2) For another thing, he argues that genocide is meant to be a distinctive injustice as a matter of analytical fact. But we can reasonably question whether genocide is distinctively worse than cases of mass killings without being incoherent, which (for classical conceptual analysts) should be sufficient reason to dismiss the need to establish that genocide is a distinct moral wrong. I think it is enough to establish that it is a wrong somewhere at the top of the heap of moral wrongs.

That said, many of Boghossian’s points are worth consideration. He identified several cases that are ostensibly excluded, but which ought to be included:

  • Stalin’s dekulakization was directed towards an economic class of ostensibly well-off peasants, the Kulaks, and resulted in millions of deaths by way of forced redistribution of the goods necessary for life (a). This apparently does not count as genocide because “economic class” is not a protected group (b). (For the sake of completeness, we might also ask whether the Kulaks were targeted “as such”, as opposed to instrumentally targeted for the sake of collectivization.)
  • He wonders whether the intention of exterminating part of a gender would count (e.g., we might cite sex selection and infanticide in the developing world).

He also considered some cases that ought to be excluded, but are mistakenly included:

  • Egregious wartime episodes like the firebombing of Dresden or the bombing of Nagasaki targeted nationalities as such, using atrocious means. But (Boghossian suggests) this is an awkward fit, since the episodes occurred during wartime. For him, these are not obvious cases of genocide, since it is at least plausible to say that the populations were targeted as a means to an end, namely, ending the war.

Ordinarily, this would be the place where I would argue for one or another categorization of the concept of genocide, such that these apparent exceptions are finessed into a rendering of a coherent whole, either decisively rejected as cases of genocide or decisively included.

But I will not do that. What I would prefer to do is examine the concept of genocide as a perspicuous region in conceptual space, following the methods in the previous post. Perhaps that will have to wait for a different installment.

**

*Her thesis has to be slightly complicated once you factor in G.W. Bush’s neo-conservative moralism when he argued in favor of the second invasion of Iraq in 2002 — but only slightly. History shows that that policy decision was driven by other factors — as I experience flashbacks to Condoleezza Rice’s “smoking gun/mushroom cloud”, Colin Powell’s credibility-deflating testimony before the UN, the bewilderment of the intelligence community reflected in the Downing Street Memo, and John Bolton’s ongoing impulse-control problems. Still, even if you grant that neo-conservatism sold itself as a moralistic doctrine, it appears as a historical blip. And there is probably no surer evidence of this fact than that Samantha Power herself was ousted from her position as representative to the UN during the crypto-isolationist Trump administration.

Potted summary: “Reasoning About Categories in Conceptual Spaces”

What follows is a short summary of the main elements of a 2001 paper by Peter Gardenfors (Lund) and Mary-Anne Williams (Newcastle), “Reasoning About Categories in Conceptual Spaces”. It contains a way of thinking about concepts and categorization that seems quite lovely to me, as it captures something about the meat and heft of discussions of cognition, ontology, and lexical semantics by deploying a stock of spatial metaphors that is accessible to most of us. I confess that I cannot be sure I have understood the paper in its entirety (and if I have not, feel free to comment below). But I do think the strategy proposed in their paper deserves wider consideration in philosophy. So what follows is my attempt to capture the essential first four sections of the paper in Tractarian form.

  1. An object is a function of the set of all its qualities. (For example, a song is composed of a set of notes.)
    1. Every quality occurs in some domain(s) of evaluation. (e.g., each tone has a pitch, frequency, etc.)
    2. A conceptual space is a set of evaluative domains or metrics. (So, the conceptual space around a particular song is the set of metrics used to gauge its qualities: pitch, frequency, etc.)
    3. Just like ordinary space, a conceptual space contains points and regions. Hence, an object is a point in conceptual space.
    4. We treat some objects as prototypes with respect to the part of conceptual space they are in (e.g., the prototype of a bird is a robin.)
      1. Objects which have been previously encountered (e.g., in inductive fashion), and whose locations have been registered, are exemplars.
  2. A concept is a region in conceptual space.
    1. Some of those regions are relatively amorphous, reflecting the fact that some concepts are not reliable and relevant in the judgments we make. (e.g., a Borgesian concept.)
    2. Categorization identifies regions of conceptual space with a structure. e.g., in our folk taxonomy, we have super-ordinate, basic, and sub-ordinate categories.
      • Super-ordinate categories are abstract (fewer immediate subcategories, high generality, e.g., ‘furniture’); basic categories are common-sense categories (lots of immediate subcategories, medium generality; e.g., ‘chairs’); and sub-ordinate categories are detail-oriented (few immediate subcategories, low generality; e.g., ‘Ikea-bought chaise-longue’).
    3. The boundaries of a category are chosen or “built”, depending on the structure that is identified with the concept in the context of the task. They can be classical (“discrete”) boundaries, or graded, or otherwise, depending on the demands of content, context, and choice.
    4. The structure of a conceptual space is determined by the similarity relations (“distances“) between points (or regions) in that space.
    5. One (but only one) useful way of measuring distance in a conceptual space is figuring out the distance between cases and prototypes, which are especially salient points in conceptual space.
      • Every prototype has a zone of influence. The size of that zone is determined by any number of different kinds of considerations.
  3. There are at least three kinds of structure: connectedness, projectability (“star-shapedness”), and perspicuity (“convexity”).
    1. A conceptual region is connected so long as it is not the disjoint union of two non-empty closed sets. By inference, then, a conceptual region is disconnected when it is the union of two non-empty closed sets whose intersection is empty. For example, the conceptual region that covers “the considered opinions of Margaret Wente” is disconnected, since the intersection of those sets is empty.
    2. Projectability (they call it ‘star-shapedness’) means that, for a particular given case, the straight line connecting that case to any other point in the region never exits the region.
      1. For example, consider the concept of “classic works of literature”, and let “For Whom the Bell Tolls” be a prototype; and reflect on the aesthetic qualities and metrics that would make it so. Now compare that concept and case to “Naked Lunch”, which is a classic work of literature that asks to be read in terms of exogenous criteria that have little bearing on what counts as a classic work of literature. There is no straight line you can draw in conceptual space between “For Whom the Bell Tolls” and “Naked Lunch” without wandering into alien, interzone territory. For the purposes of this illustration, then, the concept is not projectable from “For Whom…”.
    3. Perspicuity (or contiguity; they call it ‘convexity’) means that the region is projectable from every one of its points: any two points can be connected without exiting the region.
      • By analogy, the geography of the United States is not perspicuous, because the lines connecting the continental United States to Puerto Rico, Hawaii, and Alaska all cross spaces that are not America.
      • According to the authors, the so-called “natural kinds” of the philosopher seem to correspond to perspicuous categories. Presumably, sub-ordinate folk categories are more likely to count as perspicuous than basic or super-ordinate ones.
  4. One mechanism for categorization is tessellation. (A minimal code sketch follows this list.)
    1. Tessellation occurs according to the following rule: every point in the conceptual space is associated with its nearest prototype.
    2. Abstract categorizations tessellate over whole regions, not just points in a region. (Presumably, this accounts for the structure of super-ordinate categorizations.)
      1. There are at least two different ways of measuring distances between whole regions: additively weighted distance and power distance. Consider, for example, the question: “What is the distance between Buffalo and Toronto?”, and consider, “What counts as ‘Toronto’?”
        1. For non-Ontarian readers: the city of Toronto is also considered a “megacity”, which contains a number of outlying cities. Downtown Toronto, or Old Toronto, is the prototype of what counts as ‘Toronto’.
        2. Roughly speaking, an additively weighted distance is the distance between a case and the outer bounds of the prototype’s zone of influence.
          • So, the additively weighted distance between Buffalo and Toronto is calculated between Buffalo and the furthest outer limit of the megacity of Toronto, e.g., Mississauga, Burlington, etc.
          • The authors hold that additively weighted distances are useful in modeling the growth of concepts, given an analogy to the ways that these calculations are made in biology with respect to the growth of cells.
          • In a manner of speaking, you might think of this as the “technically correct” (albeit, expansive) distance to Toronto.
        3. Power distance measures the distance between a case and the nearest prototype, weighted by the prototype’s relative zone of influence.
          • So, the power distance between Buffalo and Toronto is a function of the distance between Buffalo, the old city of Toronto, and the outermost limit of the megacity of Toronto. Presumably, in this context, it would mean that one could not say they are ‘in Toronto’ until they reached somewhere around Oakville.
          • This is especially useful when the very idea of what counts as ‘Toronto’ is indeterminate, since it involves weighting multiple factors and points and triangulating the differences between them. Presumably, the power distance is especially useful in constructing basic level categories in our folk taxonomy.
          • In a manner of speaking, you might think of this as the “substantially correct” distance to Toronto.
        4. So, to return to our example: the additively weighted distance from Buffalo to Toronto is relatively shorter than when we look at the power distance, depending on our categorization of the concept of ‘Toronto’.
    3. For those of you who don’t want to go to Toronto, similar reasoning applies when dealing with concepts and categorization.
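Since the tessellation rule and the two distance measures are algorithmic, here is a minimal sketch in Python of how they might be implemented, using the Buffalo/Toronto example. I am assuming the standard computational-geometry renderings of the two measures (distance minus radius for the additively weighted measure; squared distance minus squared radius for the power measure), which I take to be what the authors intend; all coordinates and zone-of-influence radii are invented for illustration.

```python
# A minimal sketch: nearest-prototype tessellation, plus the two measures
# for distances involving a prototype's zone of influence. Coordinates and
# radii are invented for illustration; distances are Euclidean.
import math

# Prototypes as points with a "zone of influence" radius.
prototypes = {
    "Toronto": {"pos": (0.0, 0.0), "radius": 30.0},   # Old Toronto plus megacity zone
    "Buffalo": {"pos": (150.0, 0.0), "radius": 10.0},
}

def euclid(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def nearest_prototype(point):
    """Tessellation rule: every point is associated with its nearest prototype."""
    return min(prototypes, key=lambda n: euclid(point, prototypes[n]["pos"]))

def additively_weighted(point, name):
    """Distance from a case to the outer edge of the prototype's zone of influence."""
    p = prototypes[name]
    return euclid(point, p["pos"]) - p["radius"]

def power_distance(point, name):
    """Squared distance to the prototype, discounted by its squared zone radius."""
    p = prototypes[name]
    return euclid(point, p["pos"]) ** 2 - p["radius"] ** 2

buffalo = prototypes["Buffalo"]["pos"]
print(nearest_prototype((60.0, 0.0)))           # 'Toronto': this point tessellates to Toronto
print(additively_weighted(buffalo, "Toronto"))  # 120.0: to the megacity's outer edge
print(power_distance(buffalo, "Toronto"))       # 21600.0: the power-diagram measure
```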

Non-classical conceptual analysis in law and cognition

Some time ago I discovered a distaste for classical conceptual analysis, with its talk of individually-necessary-and-jointly-sufficient conditions for concepts. I can’t quite remember when it began — probably it was first triggered when reading Lakoff’s popular (and, in certain circles of analytic philosophy, despised) Women, Fire, and Dangerous Things; solidified in reading Croft and Cruse’s readable Cognitive Semantics; edified in my conversations with neuroscientist/philosopher Chris Eliasmith at Waterloo; and matured when reading Elijah Millgram’s brilliantly written Hard Truths. In the most interesting parts of the cognitive science literature, concepts do not play an especially crucial role in our mental life (assuming they exist at all).

Does that mean that our classic conception of philosophy (of doing conceptual analysis) is doomed? Putting aside meta-philosophical disagreements over method (e.g., x-phi and the armchair), the upshot is “not necessarily”. The only thing you really need to understand about the cognitive scientist’s enlarged sense of analysis is that it redirects the emphasis we used to place on concepts, and asks us to place renewed weight on the idea of dynamic categorization. With this slight substitution taken on board, most proposition-obsessed philosophers can generally continue as they have.

Here is a quick example. Classical “concepts” which ostensibly possess strict boundaries — e.g., the concept of number — are treated as special cases which we decide to interpret or construe in a particular sort of way in accordance with the demands of the task. For example, the concept of “1” can be interpreted as a rational number or as a natural one, as its boundaries are determined by the evaluative criteria relevant to the cognitive task. To be sure, determining the relevant criterion for a task is a nigh-trivial exercise in the context of arithmetic, because we usually enter into those contexts knowing perfectly well what kind of task we’re up to, so the point in that context might be too subtle to be appreciable on first glance. But the point can be retained well enough by returning to the question, “What are the boundaries of ‘1’?” The naked concept does not tell us until we categorize it in light of the task, i.e., by establishing that we are considering it as a rational or a natural.
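For what it’s worth, the point has a loose analogue in programming, where the “same” 1 behaves differently once the task fixes its type. A toy sketch in Python (an analogy only, not a claim about the metaphysics of number):

```python
# A loose analogy: the "same" 1, categorized two ways, behaves differently
# once the task (here, division) fixes the relevant evaluative criteria.
from fractions import Fraction

one_as_natural = 1             # '1' categorized among the naturals/integers
one_as_rational = Fraction(1)  # '1' categorized among the rationals

print(one_as_natural == one_as_rational)  # True: the same point, so to speak
print(one_as_natural / 3)                 # 0.3333333333333333 (float division)
print(one_as_rational / 3)                # 1/3 (exact rational arithmetic)
```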

Indeed, the multiple categorizability of concepts is familiar to philosophers, as it captures the fact that we seem to have multiple, plausible interpretations of concepts in the form of definitions, which are resolved through gussied-up Socratic argument. Hence, people argue about the meaning of “knowledge” by motivating their preferred evaluative criteria, like truth, justification, belief, reliability, utility, and so on. The concept of knowledge involves all the criteria (in some amorphous sense to be described in another post), while the categorization of the concept is more definite in its intensional and extensional attributes, i.e., its definition and denotation.

The nice thing about this enlarged picture of concepts and category analysis is that it seems to let us do everything we want when we do philosophy. On the one hand, it is descriptively adequate, as it covers a wider range of natural language concepts than the classical model, and hence appeals to our sympathies for the later Wittgenstein. On the other hand, it still accommodates classical categorizations, and so does not throw out the baby with the bathwater, never really getting in the way of Frege or Russell. And it does all that while still permitting normative conceptual analysis, in the form of ameliorative explications of concepts, where our task is to justify our choices of evaluative criteria, hence doing justice to the long productive journey between Carnap and Kuhn described in Michael Friedman’s Dynamics of Reason.

While that is all nice, I didn’t really start to feel confident about the productivity of this cognitivist perspective on concepts until I started reading philosophy of law. One of the joys of reading work in the common-law tradition is that you find a broad understanding that conceptual analysis is a matter of interpretation under some description. Indeed, the role of interpretation in law is a foundational point in Ronald Dworkin, which he used to great rhetorical effect in Law’s Empire. But you can find it also at the margins of HLA Hart’s The Concept of Law, as Hart treats outlying cases of legal systems (e.g., international law during the 1950s) as open to being interpreted as legal systems, and does not dismiss them as definitely being near-miss cases of law. Here, we find writers who know how to do philosophy clearly, usefully, and (for the most part) unpretentiously. The best of them understand the open texture of concepts, but do not see this as reason to abandon logical and scholarly rigor. Instead, it leads them to ask further questions about what counts as rigor in light of the cognitive and jurisprudential tasks set for them. There is a lot to admire about that.

Divergent borderline cases

I’ve been thinking about a previous post, on borderline law, and thought maybe it would be worth elaborating a little on the remarks there, just in case they were too perfunctory.

Almost every core theoretical disagreement in philosophy of law (and, probably, philosophy) comes down to arguments over something kind of like focal meaning. (“A Gettier case either is, or is not, a case of knowledge qua knowledge; let’s fight about it”, etc.) Or, if the idea of focal meaning is too metaphysics-y — because Aristotle thought focal meanings had to do with natural kinds, and (mumble mumble, digression digression) — we can instead say that theoretical disagreements about major philosophical concepts are about graded categories and exemplars.

Graded conceptual analysis has at least two benefits. First, it captures the sense in which it is possible for two people to actually disagree about the same concept without radically misunderstanding each other. That is, it disarms Dworkin’s semantic sting. Second, relatedly, it encourages a kind of modesty in one’s critical ambitions, as borderline cases are permitted in discourse but regarded as especially vulnerable to subjective interpretation.

But there are some downsides to doing graded conceptual analysis. For one thing, a lot of the evaluative-critical import gets lost. So, e.g., when you say, “Kafkan law is a borderline case of law”, the implied criticism pales in comparison to a claim like “Kafkan law is not actually law”. Disputes over the former claim, pro vs. con, look to be trivial. Moreover, we cannot rescue that critical import by definitely asserting that some token case is definitely a near-miss, or a pseudo-legal system. For a borderline case is one that is, by its nature, either a near-miss or a peripheral case, and we can’t tell which. If we say, “Kafkan law is a near-miss case of law”, we abandon graded categorization, along with all the salutary features of that sort of conceptual analysis.

The way of bringing the critical sting back into talk about graded concepts requires us to talk about their directionality. Kafkan law is not just a borderline case — it is a borderline case that is (in some suitable sense) drifting away from the central cases of law considered as tacit or explicit verdicts of institutional sources. Put in this way, we remain neutral on the question of whether or not para-legal systems, considered as a class, actually have (or can be foreseen to continue to have) the status of legal systems. The worry is localized on the token cases that are at risk of drifting beyond para-legality into pseudo-legality — they may or may not actually be legal systems now, but they are destined to lose the status of law soon enough.

And a reasonable person might worry that many contemporary political-legal systems are headed in that direction, into the twilight of law (to borrow John Gardner’s evocative phrase). But if the argument aims to tell us what law actually is, then the weight of that argument has (apparently) got to go beyond talking about either the endurance or subversion of secondary rules of the legal system. Or, at any rate, it has got to go farther than saying that any social system with defective rules of recognition encoded in the practices of the core of the judiciary thereby falls short of law.

(So, e.g., a disquieting feature of America’s drift from the central cases of legality, it seems to me, is the loss of a sense of what Jules Coleman called identification rules: the loss of both identification rules and secondary rules would be sufficient to make a legal system a divergent case. Though I shall have to leave an argument for that for another post.)