Modeling the concept of genocide

This month I've talked a little about conceptual spaces, a little about genocide, and a little about law and non-classical categories. Now I would like to tie those threads together by showing what use computer models might have in relation to those subjects.

This past week I have been graphing the concept of genocide for the sake of demonstrating the potential appeal of the conceptual spaces paradigm. The hope is to find some way of capturing the information a person processes which underlies their judgments about how to categorize episodes of genocide, in the absence of classical category structures imposed by definitional fiat. From the jurist's point of view, looking at concepts in this way is legally obtuse, and hence of at best indirect importance to a court — which, of course, it is. On the other hand, if the conceptual spaces paradigm is a worthwhile attempt to describe psychological processing, it is of great importance to a people. And since virtually everybody in the history of the philosophy of law believes that law is only valid law when promulgated, and promulgation presupposes a shared conceptual inventory… well, you get the idea.

In the previous post I took a look at Paul Boghossian's (2011) critique of the concept of genocide. (I could have chosen any number of scholarly critiques of genocide to focus on — e.g., R. J. Rummel's — but settled on Boghossian's for the prosaic reason that his paper is available for free on academia.edu.) Boghossian offered a few cases which seemed intuitively to challenge the classical conception — the case of targeted warfare (Dresden), an imagined case of gendercide, and Stalin's dekulakization. I take it that his remarks are not proposed in an effort to undermine the UN's 1948 Convention on the Prevention and Punishment of the Crime of Genocide, but rather, perhaps, to complicate and enrich it by making its intrinsic motivations more defensible.

Fig. 1. Venn diagram. Classic boundary structure.

The classical concept of genocide looks something like the Venn diagram we see to the right. Put succinctly, genocide is the use of atrocious means, against protected populations, with the intrinsic end of destroying at least some of that population (i.e., destruction of the group is an end-in-itself). These strict criteria tell us what the international court would have to say about Boghossian's cases: dekulakization and gendercide don't count (economic classes and genders are not protected populations), while Dresden and Nagasaki are borderline cases, depending on the intentions of the Allies in charge of the war. But a reasonable person might wonder whether the underlying legislation is a result of political expediency and moral complicity rather than the strict and merciless requirements of justice.
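To make the "classic boundary structure" concrete, here is a minimal sketch in Python. The clauses are my paraphrase of the Convention's definition, and the feature assignments for the cases are my own, purely for illustration; the only point is the shape of the category: membership is a strict conjunction of necessary conditions.

```python
# A toy rendering of the classical (Venn-style) category. Membership is a strict
# conjunction of three clauses; the feature assignments below are illustrative only.

PROTECTED_GROUP_KINDS = {"national", "ethnical", "racial", "religious"}  # per the 1948 Convention

def is_genocide_classical(atrocious_means, target_group_kind, destruction_is_end_in_itself):
    """True only when all three clauses are satisfied; there is no partial membership."""
    return (atrocious_means
            and target_group_kind in PROTECTED_GROUP_KINDS
            and destruction_is_end_in_itself)

cases = {
    # name: (atrocious means?, kind of group targeted, destruction an end in itself?)
    "Holocaust":       (True, "ethnical", True),
    "Dekulakization":  (True, "economic class", True),   # fails the protected-group clause
    "Gendercide":      (True, "gender", True),           # likewise
    "Dresden":         (True, "national", None),         # the open question is intent
}

for name, (means, group, intent) in cases.items():
    if intent is None:
        verdict = "borderline (turns on intent)"
    else:
        verdict = is_genocide_classical(means, group, intent)
    print(f"{name}: {verdict}")
```

On this picture, everything inside the intersection is genocide and everything outside is not; no excluded case sits any "closer" to the category than another, which is exactly the information a spatial model is meant to recover.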

To get a better sense of the psychological lay of the land, I decided to create a model of the conceptual space of genocide. The really wonky methods I used are discussed in the next section. For now, I’ll just discuss a few interesting implications from what I found.

Fig. 2. Gephi rendering of the concept as a network. "Distances" are approximated by color groupings.

One potentially interesting result that I keep running into, at least in the latest iteration of the model, is that American slavery occupies a region relatively close to the Holocaust (see right). This happens even though no direct analytical links force the two together, and it was not an effect I was expecting. Compare that to the classic categorization pictured in the Venn diagram (above), where slavery is treated as a definite non-case.

This might be worth noting, I think, because if the spatial analysis had any probative worth, then it might be an interesting part of a roundabout explanation of America's long-standing hesitation to intervene in episodes of genocide worldwide, discussed by Samantha Power. You can tell a story where the legacy of American slavery and the civil war places the country on awkward footing with the idea of genocide, because the episodes share the same conceptual space, though they are not technically part of the same legal category.

Fig. 3. VOSviewer. Genocide as a spatial concept.

But I should place emphasis on ‘if-then’. The use of the model is questionable, and depends on what you think of the methods behind the model. If you are interested in those, keep reading. Still, even if we think the model has little probative value, I would be satisfied to see more conversation in philosophy about the usefulness of conceptual spaces when thinking about how concepts and categories are received.


Notes on the concept of genocide


Remembrance Day has come and gone. I spent it in an armoury, listening to my parents' choir sing a rendition of "In Flanders Fields", some Handel, and so on. All the hits, basically.

Samantha Power's "A Problem from Hell" (2002) is a history of the concept of genocide. She argues that the American government's default attitude to genocide is ambivalence.* Even if you disagree with her assessment of American foreign policy, it is a lucid and useful volume for understanding the imperfect legacy of the idea.

In international law, genocide is any act which involves (a) use of at least some atrocious means, (b) against protected groups as such, with (c) the intent to eliminate at least part of those groups. The atrocities in question include: killing, serious bodily or mental harm, deliberately undermining conditions of life (e.g., ghettoization), forced sterilization, and forcible transfer of children. The protected groups are “national, ethnical, racial, or religious”, and to target these groups ‘as such’ is to treat their destruction as a worthy end in itself and not just a means to a further end. Notably, this definition applies even when the aggressor is the ruler or sovereign over the targeted peoples, and it applies during wartime.

In this conceptual space the Holocaust of the Second World War is the prototype of genocide, since that episode involved all of the atrocious means (killing, torture, sterilization, etc.) and was perpetrated against the protected groups as such. During the course of Power's recounting, we learn of other definite exemplars of genocide in the 20th century — the Armenian genocide by the Turks, the Khmer Rouge's assault on urban centers in Cambodia, Iraq's use of chemical weapons against the Kurds, the massacre of Muslims in Bosnia, the slaughter of the Tutsis in Rwanda, and so on.

Though Power does not discuss this, it is noteworthy that the Canadian residential schools program was genocide. During that decades-long institutional crime against humanity, persons of Indigenous descent were sterilized and their children were forcibly relocated, notably during the period known as the “Sixties Scoop”. It has been alleged that related episodes occurred as late as 2017. To be sure, it is not a prototype in the region of conceptual space of "genocide", but it is a definite case.

**

For some Canadians this may be too much to take in. Nobody wants to be complicit in genocide, so denial of the facts is one strategy. However, there might also be problems with our grasp of the concept itself, which get in the way of the facts being accepted. That is, there might be features of the definition that are hard to deploy in cognition, because our usage fails to meet the virtues of a well-behaved categorization.

So, for instance: some time ago, Paul Boghossian suggested that the concept of genocide was irremediably defective. His arguments are reasonable. But is he right to suggest that the concept of genocide is especially hard to parse?

I must confess that not all of his arguments struck me as decisive. (1) So, for instance, the law requires actions that are intended to eliminate at least part of a protected group, and this “in part” clause is vague to the point of ambiguity. Boghossian argues that this is a major defect. But: for one thing, as many philosophers of law will tell you, that is one of the ambiguities that is strategic to lawmaking, as it affords a legal culture the opportunity to deliberate on the moral, political, and common-sense features of a non-obvious question in the mereology of social ontology. (2) For another thing, he argues that genocide is meant to be a distinctive injustice as a matter of analytical fact. But we can reasonably question whether genocide is distinctively worse than cases of mass killings without being incoherent, which (for classical conceptual analysts) should be sufficient reason to dismiss the need to establish that genocide is a distinct moral wrong. I think it is enough to establish that it is a wrong somewhere at the top of the heap of moral wrongs.

That said, many of Boghossian’s points are worth consideration. He identified several cases that are ostensibly excluded, but which ought to be included:

  • Stalin's dekulakization was directed against an economic class of ostensibly well-off peasants, the kulaks, and resulted in millions of deaths by way of the forced redistribution of goods necessary for life, which satisfies clause (a). Yet it apparently does not count as genocide, because an "economic class" is not a protected group under clause (b). (For the sake of completeness, we might also include questions about whether or not the kulaks were targeted "as such", as opposed to instrumentally, for the sake of collectivization.)
  • He wonders whether or not the intention of exterminating part of a gender would count (e.g., we might cite sex selection and infanticide in the developing world).

He also considered some cases that ought to be excluded, but are mistakenly included:

  • Egregious wartime episodes like the firebombing of Dresden or the bombing of Nagasaki targeted nationalities as such, using atrocious means. But (Boghossian suggests) this is an awkward fit, since the episodes occurred during wartime. For him, these are not obvious cases of genocide, since it is at least plausible to say that the populations were targeted as a means to an end, the end being the end of the war.

Ordinarily, this would be the place where I would argue for one or another categorization of the concept of genocide, such that these apparent exceptions are finessed into a coherent whole, either decisively rejected as cases of genocide or decisively included.

But I will not do that. What I would prefer to do is examine the concept of genocide as a perspicuous region in conceptual space, following the methods in the previous post. Perhaps that will have to wait for a different installment.

**

*Her thesis has to be slightly complicated once you factor in G.W. Bush's neo-conservative moralism when he argued in 2002 in favor of the second invasion of Iraq — but only slightly. History shows that that policy decision was driven by other factors — as I experience flashbacks to Condoleezza Rice's "smoking gun mushroom cloud", Colin Powell's credibility-deflating testimony before the UN, the bewilderment of the intelligence community reflected in the Downing Street Memo, and John Bolton's ongoing impulse-control problems. Still, even if you grant that neo-conservatism sold itself as a moralistic doctrine, it appears as a historical blip. And there is probably no surer evidence of this than the fact that Samantha Power herself was ousted from her position as representative to the UN during the crypto-isolationist Trump administration.

Potted summary: “Reasoning About Categories in Conceptual Spaces”

What follows is a short summary of the main elements of a 2001 paper by Peter Gärdenfors (Lund) and Mary-Anne Williams (Newcastle), "Reasoning About Categories in Conceptual Spaces". It contains a way of thinking about concepts and categorization that seems quite lovely to me, as it captures something about the meat and heft of discussions of cognition, ontology, and lexical semantics by deploying a stock of spatial metaphors that is accessible to most of us. I confess that I cannot be sure I have understood the paper in its entirety (and if I have not, feel free to comment below). But I do think the strategy proposed in the paper deserves wider consideration in philosophy. So what follows is my attempt to capture the essential first four sections of the paper in Tractarian form.

  1. An object is a function of the set of all its qualities. (For example, a song is composed of a set of notes.)
    1. Every quality occurs in some domain(s) of evaluation. (e.g., each tone has a pitch, frequency, etc.)
    2. A conceptual space is a set of evaluative domains or metrics. (So, the conceptual space around a particular song is the set of metrics used to gauge its qualities: pitch, frequency, etc.)
    3. Just like ordinary space, a conceptual space contains points and regions. Hence, an object is a point in conceptual space.
    4. We treat some objects as prototypes with respect to the part of conceptual space they are in (e.g., the prototype of a bird is a robin.)
      1. Objects which have been previously encountered and whose locations have been registered (e.g., in inductive fashion) are exemplars.
  2. A concept is a region in conceptual space.
    1. Some of those regions are relatively amorphous, reflecting the fact that some concepts are not reliable and relevant in the judgments we make. (e.g., a Borgesian concept.)
    2. Categorization identifies regions of conceptual space with a structure. e.g., in our folk taxonomy, we have super-ordinate, basic, and sub-ordinate categories.
      • Super-ordinate categories are abstract (fewer immediate subcategories, high generality, e.g., ‘furniture’); basic categories are common-sense categories (lots of immediate subcategories, medium generality; e.g., ‘chairs’); and sub-ordinate categories are detail-oriented (few immediate subcategories, low generality; e.g., ‘Ikea-bought chaise-longue’).
    3. The boundaries of a category are chosen or “built”, depending on the structure that is identified with the concept in the context of the task. They can be classical (“discrete”) boundaries, or graded, or otherwise, depending on the demands of content, context, and choice.
    4. The structure of a conceptual space is determined by the similarity relations (“distances”) between points (or regions) in that space.
    5. One (but only one) useful way of measuring distance in a conceptual space is figuring out the distance between cases and prototypes, which are especially salient points in conceptual space.
      • Every prototype has a zone of influence. The size of that zone is determined by any number of different kinds of considerations.
  3. There are at least three kinds of structure: connectedness, projectability (“star-shapedness”), and perspicuity (“convexity”).
    1. A conceptual region is connected so long as it is not the disjoint union of two non-empty closed sets. By inference, then, a conceptual region is disconnected when it splits into two non-empty closed clusters whose intersection is empty. For example, the conceptual region that covers "the considered opinions of Margaret Wente" is disconnected, since the intersection of those sets is empty.
    2. Projectability (they call it 'star-shapedness') means that there is some point in the region (typically a prototype) such that the straight line between it and any other point in the region never exits the region.
      1. For example, consider the concept of "classic works of literature", and let "For Whom the Bell Tolls" be a prototype; and reflect on the aesthetic qualities and metrics that would make it so. Now compare that concept and case to "Naked Lunch", which is a classic work of literature that asks to be read in terms of exogenous criteria that have little bearing on what counts as a classic work of literature. There is no straight line you can draw in conceptual space between "For Whom the Bell Tolls" and "Naked Lunch" without wandering into alien, interzone territory. For the purposes of this illustration, the concept is not projectable from "For Whom…".
    3. Perspicuity (or contiguity; they call it 'convexity') means that all points of a conceptual region are projectable from each other: the straight line between any two points in the region stays within the region.
      • By analogy, the geography of the United States is not perspicuous, because no straight line from the continental United States to Puerto Rico, Hawaii, or Alaska can be drawn without crossing territory that is not America.
      • According to the authors, the so-called “natural kinds” of the philosopher seem to correspond to perspicuous categories. Presumably, sub-ordinate folk categories are more likely to count as perspicuous than basic or super-ordinate ones.
  4. One mechanism for categorization is tessellation.
    1. Tessellation occurs according to the following rule: every point in the conceptual space is associated with its nearest prototype. (A code sketch of this rule, and of the two distance measures discussed below, appears after this list.)
    2. Abstract categorizations tessellate over whole regions, not just points in a region. (Presumably, this accounts for the structure of super-ordinate categorizations.)
      1. There are at least two different ways of measuring distances between whole regions: additively weighted distance and power distance. Consider, for example, the question: “What is the distance between Buffalo and Toronto?”, and consider, “What counts as ‘Toronto’?”
        1. For non-Ontarian readers: the city of Toronto is also considered a “megacity”, which contains a number of outlying cities. Downtown Toronto, or Old Toronto, is the prototype of what counts as ‘Toronto’.
        2. Roughly speaking, an additively weighted distance is the distance between a case and the outer bounds of the prototype's zone of influence.
          • So, the additively weighted distance between Buffalo and Toronto is calculated between Buffalo and the outer limit of the megacity of Toronto (e.g., Mississauga, Burlington, etc.), rather than the downtown core.
          • The authors hold that additively weighted distances are useful in modeling the growth of concepts, given an analogy to the ways that these calculations are made in biology with respect to the growth of cells.
          • In a manner of speaking, you might think of this as the “technically correct” (albeit, expansive) distance to Toronto.
        3. Power distance measures the distance between a case and the nearest prototype, weighted by the prototype’s relative zone of influence.
          • So, the power distance between Buffalo and Toronto is a function of the distance between Buffalo, the old city of Toronto, and the outermost limit of the megacity of Toronto. Presumably, in this context, it would mean that one could not say they are 'in Toronto' until they reached somewhere around Oakville.
          • This is especially useful when the very idea of what counts as ‘Toronto’ is indeterminate, since it involves weighting multiple factors and points and triangulating the differences between them. Presumably, the power distance is especially useful in constructing basic level categories in our folk taxonomy.
          • In a manner of speaking, you might think of this as the “substantially correct” distance to Toronto.
        4. So, to return to our example: the additively weighted distance from Buffalo to Toronto is relatively shorter than the power distance, depending on our categorization of the concept of 'Toronto'.
    3. For those of you who don’t want to go to Toronto, similar reasoning applies when dealing with concepts and categorization.
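For readers who want to see the machinery rather than just the geography, here is a minimal sketch of the tessellation rule and the two distance measures in Python. The prototype names, coordinates, and zone radii are invented for illustration, and the two distance functions follow the standard computational-geometry definitions for additively weighted and power (Laguerre) Voronoi diagrams, which is how I read the paper; consult Gärdenfors and Williams for the official versions.

```python
import math

# A toy conceptual space: 2-D points, a handful of prototypes, each with a
# "zone of influence" given by a radius. All coordinates and radii are invented
# purely for illustration.

PROTOTYPES = {
    # name: (x, y, zone_radius)
    "Old Toronto": (0.0, 0.0, 25.0),    # prototype of a sprawling megacity zone
    "Hamilton":    (-60.0, -10.0, 8.0),
    "Kingston":    (240.0, 20.0, 5.0),
}

def additively_weighted(case, centre, radius):
    """Distance from the case to the outer boundary of the prototype's zone:
    d(case, prototype) - radius. (Negative values mean the case is inside the zone.)"""
    return math.dist(case, centre) - radius

def power_distance(case, centre, radius):
    """The 'power' of the case with respect to the prototype's zone:
    d(case, prototype)**2 - radius**2."""
    return math.dist(case, centre) ** 2 - radius ** 2

def tessellate(case, distance_fn):
    """The categorization rule: assign the case to its nearest prototype, where
    'nearest' depends on which distance measure the task makes relevant."""
    return min(PROTOTYPES,
               key=lambda name: distance_fn(case, PROTOTYPES[name][:2], PROTOTYPES[name][2]))

buffalo = (-40.0, -105.0)  # a case sitting outside every zone of influence

for fn in (additively_weighted, power_distance):
    scores = {name: round(fn(buffalo, (x, y), r), 1) for name, (x, y, r) in PROTOTYPES.items()}
    print(fn.__name__, scores, "->", tessellate(buffalo, fn))
```

With these made-up numbers the two measures even disagree about Buffalo's nearest prototype (the wide zone around Old Toronto wins under the additively weighted measure, Hamilton under the power measure), which is one way of seeing the point of item 4.2.1: how you categorize a case depends on how you choose to measure distances between regions.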

Non-classical conceptual analysis in law and cognition

Some time ago I discovered a distaste for classical conceptual analysis, with its talk of individually-necessary-and-jointly-sufficient conditions for concepts. I can’t quite remember when it began — probably it was first triggered when reading Lakoff’s popular (and, in certain circles of analytic philosophy, despised) Women, Fire, and Dangerous Things; solidified in reading Croft and Cruse’s readable Cognitive Semantics; edified in my conversations with neuroscientist/philosopher Chris Eliasmith at Waterloo; and matured when reading Elijah Millgram’s brilliantly written Hard Truths. In the most interesting parts of the cognitive science literature, concepts do not play an especially crucial role in our mental life (assuming they exist at all).

Does that mean that our classic conception of philosophy (of doing conceptual analysis) is doomed? Putting aside meta-philosophical disagreements over method (e.g., x-phi and the armchair), the upshot is “not necessarily”. The only thing you really need to understand about the cognitive scientist’s enlarged sense of analysis is that it redirects the emphasis we used to place on concepts, and asks us to place renewed weight on the idea of dynamic categorization. With this slight substitution taken on board, most proposition-obsessed philosophers can generally continue as they have.

Here is a quick example. Classical "concepts" which ostensibly possess strict boundaries — e.g., the concept of number — are treated as special cases which we decide to interpret or construe in a particular sort of way in accordance with the demands of the task. For example, the concept of "1" can be interpreted as a rational number or as a natural one, as its boundaries are determined by the evaluative criteria relevant to the cognitive task. To be sure, determining the relevant criterion for a task is a nigh-trivial exercise in the context of arithmetic, because we usually enter into those contexts knowing perfectly well what kind of task we're up to, so the point in that context might be too subtle to appreciate at first glance. But the point can be retained well enough by returning to the question, "What are the boundaries of '1'?" The naked concept does not tell us until we categorize it in light of the task, i.e., by establishing that we are considering it as a rational or a natural.
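Here is a trivial sketch of that point in Python; the "tasks" and the features reported are my own choices, purely illustrative.

```python
from fractions import Fraction

# The same "naked" object, construed two ways. Where its boundaries fall depends
# on the categorization that the task imposes, not on the bare object itself.

def construe_as_natural(n):
    """Counting task: there is a successor, and nothing lies between 1 and 2."""
    return {"kind": "natural", "successor": n + 1, "between_1_and_2": None}

def construe_as_rational(n):
    """Measuring task: no successor, and something always lies in between."""
    q = Fraction(n)
    return {"kind": "rational", "successor": None, "between_1_and_2": q + Fraction(1, 2)}

one = 1
print(construe_as_natural(one))   # {'kind': 'natural', 'successor': 2, 'between_1_and_2': None}
print(construe_as_rational(one))  # {'kind': 'rational', 'successor': None, 'between_1_and_2': Fraction(3, 2)}
```

Nothing about the bare "1" settles which of these descriptions applies; the task does, and with it the boundaries.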

Indeed, the multiple categorizability of concepts is familiar to philosophers, as it captures the fact that we seem to have multiple, plausible interpretations of concepts in the form of definitions, which are resolved through gussied-up Socratic argument. Hence, people argue about the meaning of “knowledge” by motivating their preferred evaluative criteria, like truth, justification, belief, reliability, utility, and so on. The concept of knowledge involves all the criteria (in some amorphous sense to be described in another post), while the categorization of the concept is more definite in its intensional and extensional attributes, i.e., its definition and denotation.

The nice thing about this enlarged picture of concepts and category analysis is that it seems to let us do everything we want when we do philosophy. On the one hand, it is descriptively adequate, as it covers a wider range of natural language concepts than the classical model, and hence appeals to our sympathies for the later Wittgenstein. On the other hand, it still accommodates classical categorizations, and so does not throw out the baby with the bathwater or really get in the way of Frege or Russell. And it does all that while still permitting normative conceptual analysis, in the form of ameliorative explications of concepts, where our task is to justify our choices of evaluative criteria, hence doing justice to the long productive journey between Carnap and Kuhn described in Michael Friedman's Dynamics of Reason.

While that is all nice, I didn't really start to feel confident about the productivity of this cognitivist perspective on concepts until I started reading philosophy of law. One of the joys of reading work in the common-law tradition is that you find a broad understanding that conceptual analysis is a matter of interpretation under some description. Indeed, the role of interpretation in law is a foundational point in Ronald Dworkin, which he used to great rhetorical effect in Law's Empire. But you can find it also at the margins of HLA Hart's The Concept of Law, as Hart treats outlying cases of legal systems (e.g., international law during the 1950s) as open to being interpreted as legal systems, and does not dismiss them as definitely being near-miss cases of law. Here, we find writers who know how to do philosophy clearly, usefully, and (for the most part) unpretentiously. The best of them understand the open texture of concepts, but do not see this as a reason to abandon logical and scholarly rigor. Instead, it leads them to ask further questions about what counts as rigor in light of the cognitive and jurisprudential tasks set for them. There is a lot to admire about that.

Divergent borderline cases

I’ve been thinking about a previous post, on borderline law, and thought maybe it would be worth elaborating a little on the remarks there, just in case they were too perfunctory.

Almost every core theoretical disagreement in philosophy of law (and, probably, philosophy) comes down to arguments over something kind of like focal meaning. ("A Gettier case either is, or is not, a case of knowledge qua knowledge; let's fight about it", etc.) Or, if the idea of focal meaning is too metaphysics-y — because Aristotle thought focal meanings had to do with natural kinds, and (mumble mumble, digression digression) — we can instead say that theoretical disagreements about major philosophical concepts are about graded categories and exemplars.

Graded conceptual analysis has at least two benefits. First, it captures the sense in which it is possible for two people to actually disagree about the same concept without radically misunderstanding each other. That is, it disarms Dworkin’s semantic sting. Second, relatedly, it encourages a kind of modesty in one’s critical ambitions, as borderline cases are permitted in discourse but regarded as especially vulnerable to subjective interpretation.

But there are some downsides to doing graded conceptual analysis. For one thing, a lot of the evaluative-critical import gets lost. So, e.g., when you say, "Kafkan law is a borderline case of law", the implied criticism pales in comparison to a claim like "Kafkan law is not actually law". Disputes over the former claim, pro vs. con, look to be trivial. Moreover, we cannot rescue that critical import by asserting that some token case is definitely a near-miss, or a pseudo-legal system. For a borderline case is one that is, by its nature, either a near-miss or a peripheral case, and we can't tell which. If we say, "Kafkan law is a near-miss case of law", we abandon graded categorization, along with all the salutary features of that sort of conceptual analysis.

Bringing the critical sting back into talk about graded concepts requires us to talk about their directionality. Kafkan law is not just a borderline case — it is a borderline case that is (in some suitable sense) drifting away from the central cases of law, considered as tacit or explicit verdicts of institutional sources. Put this way, we remain neutral on the question of whether or not para-legal systems, considered as a class, actually have (or can be foreseen to continue to have) the status of actually being legal systems. The worry is localized on the token cases that are at risk of drifting beyond para-legality into pseudo-legality — they may or may not actually be legal systems now, but they are destined to lose the status of law soon enough.

And a reasonable person might worry that many contemporary political-legal systems are headed in that direction, into the twilight of law (to borrow John Gardner's evocative phrase). But if the argument aims to tell us what law actually is, then the weight of that argument has (apparently) got to go beyond talking about either the endurance or subversion of secondary rules of the legal system. Or, at any rate, it has got to go farther than saying that any social system which has defective rules of recognition encoded in the practices of the core of the judiciary is, for that reason alone, a divergent case.

(So, e.g., a disquieting feature of America's drift from the central cases of legality, it seems to me, is the loss of a sense of what Jules Coleman called identification rules: the loss of both identification rules and secondary rules would be sufficient to make a legal system a divergent case. Though I shall have to leave an argument for that for another post.)

Les Green on borderline law

Here’s Les Green on the importance of unwritten constitutions.

The main difficulty I have with his commentary is this. I can imagine a critic — Green's dialectical opposite, Maur Red — saying, "Look, okay, so the US is a borderline case of law. Who cares? It's still law." If asked to clarify, Red could say: "What's at stake here is not whether US law is a form of law, but whether or not it is an exemplar, an instance of the focal meaning of law. These are different issues."

As I imagine the conversation going, I think Red could then chastise Green for overstating things when he claims that this entails that US law is not "actually" law, because nothing at all follows from concluding that US law is a borderline case of law. For that is apparently no more defensible than saying, e.g., that penguins are not really birds, given that penguins are a borderline case of birds, or that the half-competent doctor is not really a doctor, given that the doctor qua doctor makes no errors.

What Green should say, instead, is that US law is on the verge of being a near-miss case of law, which is a special kind of borderline case. And Red might concede that that would be worrisome. But then, he might conclude, you cannot infer that something is a near-miss case of law just because you deny that it has the qualities of an exemplar case, any more than you can infer “penguins are not birds” from “penguins are not robins or bluejays (etc.)” Only some borderline cases are near-misses. Others are just odd, ironic, or unexpected.