Potted summary: “Reasoning About Categories in Conceptual Spaces”

What follows is a short summary of the main elements of a 2001 paper by Peter Gärdenfors (Lund) and Mary-Anne Williams (Newcastle), “Reasoning About Categories in Conceptual Spaces”. It contains a way of thinking about concepts and categorization that seems quite lovely to me, as it captures something about the meat and heft of discussions of cognition, ontology, and lexical semantics by deploying a stock of spatial metaphors that is accessible to most of us. I confess that I cannot be sure I have understood the paper in its entirety (and if I have not, feel free to comment below). But I do think the strategy proposed in their paper deserves wider consideration in philosophy. So what follows is my attempt to capture the essential first four sections of the paper in Tractarian form.

  1. An object is a function of the set of all its qualities. (For example, a song is composed of a set of notes.)
    1. Every quality occurs in some domain(s) of evaluation. (e.g., each tone has a pitch, frequency, etc.)
    2. A conceptual space is a set of evaluative domains or metrics. (So, the conceptual space around a particular song is the set of metrics used to gauge its qualities: pitch, frequency, etc.)
    3. Just like ordinary space, a conceptual space contains points and regions. Hence, an object is a point in conceptual space.
    4. We treat some objects as prototypes with respect to the part of conceptual space they are in (e.g., the prototype of a bird is a robin.)
      1. Objects which have been previously encountered, and whose locations have been registered (i.e., learned inductively), are exemplars.
  2. A concept is a region in conceptual space.
    1. Some of those regions are relatively amorphous, reflecting the fact that some concepts are not reliable or relevant to the judgments we make (e.g., a Borgesian concept).
    2. Categorization identifies regions of conceptual space with a structure; e.g., in our folk taxonomy, we have super-ordinate, basic, and sub-ordinate categories.
      • Super-ordinate categories are abstract (fewer immediate subcategories, high generality, e.g., ‘furniture’); basic categories are common-sense categories (lots of immediate subcategories, medium generality; e.g., ‘chairs’); and sub-ordinate categories are detail-oriented (few immediate subcategories, low generality; e.g., ‘Ikea-bought chaise-longue’).
    3. The boundaries of a category are chosen or “built”, depending on the structure that is identified with the concept in the context of the task. They can be classical (“discrete”) boundaries, or graded, or otherwise, depending on the demands of content, context, and choice.
    4. The structure of a conceptual space is determined by the similarity relations (“distances”) between points (or regions) in that space. (A toy sketch of points and distances appears after this outline.)
    5. One (but only one) useful way of measuring distance in a conceptual space is figuring out the distance between cases and prototypes, which are especially salient points in conceptual space.
      • Every prototype has a zone of influence. The size of that zone is determined by any number of different kinds of considerations.
  3. There are at least three kinds of structure: connectedness, projectability (“star-shapedness”), and perspicuity (“convexity”).
    1. A conceptual region is connected so long as it is not the disjoint union of two non-empty closed sets. Put the other way around, a conceptual region is disconnected when it splits into two non-empty closed clusters that share no points: islands with nothing in between. For example, the conceptual region that covers “the considered opinions of Margaret Wente” is disconnected, since its clusters share no common ground.
    2. Projectability (they call it ‘star-shapedness’) means that there is some designated point in the region (a prototype, say) such that the straight line from that point to any other point in the region never leaves the region.
      1. For example, consider the concept of “classic works of literature”, and let “For Whom the Bell Tolls” be a prototype; and reflect on the aesthetic qualities and metrics that would make it so. Now compare that concept and case to “Naked Lunch”, a classic work of literature which asks to be read in terms of exogenous criteria that have little bearing on what counts as a classic work of literature. There is no straight line you can draw in conceptual space between “For Whom the Bell Tolls” and “Naked Lunch” without wandering into alien, interzone territory. For the purposes of this illustration, then, the concept is not projectable from “For Whom the Bell Tolls”.
    3. Perspicuity (or contiguity; they call it ‘convexity’) means that a region is projectable from every one of its points: the straight line between any two points in the region stays within the region. (See the region-structure sketch after this outline.)
      • By analogy, the geography of the United States is not perspicuous, because the straight line from any location in the continental United States to Puerto Rico, Hawaii, or Alaska crosses spaces that are not America.
      • According to the authors, the so-called “natural kinds” of the philosopher seem to correspond to perspicuous categories. Presumably, sub-ordinate folk categories are more likely to count as perspicuous than basic or super-ordinate ones.
  4. One mechanism for categorization is tessellation.
    1. Tessellation occurs according to the following rule: every point in the conceptual space is associated with its nearest prototype. (See the tessellation sketch after this outline.)
    2. Abstract categorizations tessellate over whole regions, not just points in a region. (Presumably, this accounts for the structure of super-ordinate categorizations.)
      1. There are at least two different ways of measuring distances between whole regions: additively weighted distance and power distance. Consider, for example, the question: “What is the distance between Buffalo and Toronto?”, and consider, “What counts as ‘Toronto’?”
        1. For non-Ontarian readers: the city of Toronto is also considered a “megacity”, which contains a number of outlying cities. Downtown Toronto, or Old Toronto, is the prototype of what counts as ‘Toronto’.
        2. Roughly speaking, an additively weighted distance is the distance between a case and the outer bounds of the prototype’s zone of influence.
          • So, the additively weighted distance between Buffalo and Toronto is calculated between Buffalo and the furthest outer limit of the megacity of Toronto, e.g., Mississauga, Burlington, etc.
          • The authors hold that additively weighted distances are useful in modeling the growth of concepts, given an analogy to the ways that these calculations are made in biology with respect to the growth of cells.
          • In a manner of speaking, you might think of this as the “technically correct” (albeit, expansive) distance to Toronto.
        3. Power distance measures the distance between a case and the nearest prototype, weighted by the prototype’s relative zone of influence.
          • So, the power distance between Buffalo and Toronto is a function of the distance between Buffalo, the old city of Toronto, and the outermost limit of the megacity of Toronto. Presumably, in this context, it would mean that one could not say they are ‘in Toronto’ until they reached somewhere around Oakville.
          • This is especially useful when the very idea of what counts as ‘Toronto’ is indeterminate, since it involves weighting multiple factors and points and triangulating the differences between them. Presumably, the power distance is especially useful in constructing basic level categories in our folk taxonomy.
          • In a manner of speaking, you might think of this as the “substantially correct” distance to Toronto.
        4. So, to return to our example: the additively weighted distance from Buffalo to Toronto is relatively shorter than when we look at the power distance, depending on our categorization of the concept of ‘Toronto’.
    3. For those of you who don’t want to go to Toronto, similar reasoning applies when dealing with concepts and categorization.
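
To make the first two sections of the outline concrete, here is a minimal sketch (mine, not the authors’) of a conceptual space as a set of quality dimensions, with objects as points and similarity as distance. The dimensions and numbers are invented for illustration.

```python
# A conceptual space modeled as named quality dimensions; an object is a
# point (one value per dimension), and similarity is inverse distance.
import math

DIMENSIONS = ("pitch", "loudness", "tempo")  # invented domains of evaluation

def as_point(qualities):
    """Locate an object in the space by reading off its qualities."""
    return tuple(qualities[d] for d in DIMENSIONS)

def distance(a, b):
    """Dissimilarity between two objects: Euclidean distance between their
    points. (A serious model would first normalize each dimension's scale.)"""
    return math.dist(as_point(a), as_point(b))

song = {"pitch": 440.0, "loudness": 0.6, "tempo": 120.0}
cover = {"pitch": 440.0, "loudness": 0.8, "tempo": 124.0}
dirge = {"pitch": 220.0, "loudness": 0.3, "tempo": 60.0}

# The cover version sits nearer the original song than the dirge does.
print(distance(song, cover) < distance(song, dirge))  # True
```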
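
Next, a toy sketch (again mine, not the paper’s formalism) of the region structures in section 3. A region is modeled as a membership predicate over points, and each straight-line test is approximated by sampling points along the segment.

```python
# Structural tests on conceptual regions, approximated by sampling.
from itertools import product

def on_segment(a, b, t):
    """The point a fraction t of the way along the straight line from a to b."""
    return tuple(ai + t * (bi - ai) for ai, bi in zip(a, b))

def segment_inside(region, a, b, samples=50):
    """Does the straight line from a to b stay inside the region? (Sampled.)"""
    return all(region(on_segment(a, b, i / samples)) for i in range(samples + 1))

def is_star_shaped(region, prototype, points, samples=50):
    """Projectability: every member point can be reached from the prototype
    without the connecting line leaving the region."""
    return all(segment_inside(region, prototype, p, samples)
               for p in points if region(p))

def is_convex(region, points, samples=50):
    """Perspicuity: the region is projectable from every one of its points."""
    members = [p for p in points if region(p)]
    return all(segment_inside(region, a, b, samples)
               for a, b in product(members, members))

def disc(center, radius):
    """Membership predicate for a closed disc."""
    return lambda p: sum((pi - ci) ** 2 for pi, ci in zip(p, center)) <= radius ** 2

# Two disjoint discs: a disconnected region, hence not convex.
two_islands = lambda p: disc((0.0, 0.0), 1.0)(p) or disc((5.0, 0.0), 1.0)(p)
print(is_convex(two_islands, [(0.0, 0.0), (5.0, 0.0)]))  # False
```

Connectedness itself is awkward to test by sampling, so the example shows it indirectly: the line between the two islands exits the region, which is exactly why the region fails convexity.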
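
Finally, a sketch of the tessellation mechanism in section 4, in the spirit of the Buffalo-Toronto example. The coordinates and zone-of-influence weights are invented, and the two weighted metrics follow the standard computational-geometry definitions, which I take to be the ones the authors rely on: additively weighted distance is d(x, p) - w(p) (distance to the outer boundary of the prototype’s zone), and power distance is d(x, p)^2 - w(p)^2.

```python
# Tessellation: assign each case to its "nearest" prototype, where nearness
# may be discounted by the prototype's zone of influence.
import math

def euclidean(x, p, w=0.0):
    # Plain distance; the weight is ignored.
    return math.dist(x, p)

def additively_weighted(x, p, w):
    # Distance from the case to the outer edge of the prototype's zone.
    return math.dist(x, p) - w

def power_distance(x, p, w):
    # Squared distance discounted by the squared zone of influence.
    return math.dist(x, p) ** 2 - w ** 2

def categorize(case, prototypes, metric):
    """The tessellation rule: the case belongs to its nearest prototype."""
    return min(prototypes, key=lambda name: metric(case, *prototypes[name]))

prototypes = {
    "Toronto": ((0.0, 0.0), 30.0),   # (location, zone-of-influence weight)
    "Buffalo": ((100.0, 0.0), 5.0),
}

case = (60.0, 0.0)  # a point on the map between the two cities
for label, metric in [("plain", euclidean),
                      ("additively weighted", additively_weighted),
                      ("power", power_distance)]:
    print(label, "->", categorize(case, prototypes, metric))
# plain -> Buffalo; additively weighted -> Toronto; power -> Buffalo
```

Note how the additive weighting pulls Toronto’s boundary outward: a case that is plainly closer to Buffalo can still land in Toronto’s cell, because Toronto’s zone of influence swallows much of the intervening distance. Which metric is apt depends, as the authors say, on the categorization task.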

Non-classical conceptual analysis in law and cognition

Some time ago I discovered a distaste for classical conceptual analysis, with its talk of individually-necessary-and-jointly-sufficient conditions for concepts. I can’t quite remember when it began — probably it was first triggered when reading Lakoff’s popular (and, in certain circles of analytic philosophy, despised) Women, Fire, and Dangerous Things; solidified in reading Croft and Cruse’s readable Cognitive Semantics; edified in my conversations with neuroscientist/philosopher Chris Eliasmith at Waterloo; and matured when reading Elijah Millgram’s brilliantly written Hard Truths. In the most interesting parts of the cognitive science literature, concepts do not play an especially crucial role in our mental life (assuming they exist at all).

Does that mean that our classic conception of philosophy (of doing conceptual analysis) is doomed? Putting aside meta-philosophical disagreements over method (e.g., x-phi and the armchair), the upshot is “not necessarily”. The only thing you really need to understand about the cognitive scientist’s enlarged sense of analysis is that it redirects the emphasis we used to place on concepts, and asks us to place renewed weight on the idea of dynamic categorization. With this slight substitution taken on board, most proposition-obsessed philosophers can generally continue as they have.

Here is a quick example. Classical “concepts” which ostensibly possess strict boundaries — e.g., the concept of number — are treated as special cases which we decide to interpret or construe in a particular sort of way in accordance with the demands of the task. For example, the concept of “1” can be interpreted as a rational number or as a natural one, as its boundaries are determined by the evaluative criteria relevant to the cognitive task. To be sure, determining the relevant criterion for a task is a nigh-trivial exercise in the context of arithmetic, because we usually enter into those contexts knowing perfectly well what kind of task we’re up to, so the point in that context might be too subtle to be appreciable on first glance. But the point can be retained well enough by returning to the question, “What are the boundaries of ‘1’?” The naked concept does not tell us until we categorize it in light of the task, i.e., by establishing that we are considering it as a rational or a natural.

Indeed, the multiple categorizability of concepts is familiar to philosophers, as it captures the fact that we seem to have multiple, plausible interpretations of concepts in the form of definitions, which are resolved through gussied-up Socratic argument. Hence, people argue about the meaning of “knowledge” by motivating their preferred evaluative criteria, like truth, justification, belief, reliability, utility, and so on. The concept of knowledge involves all the criteria (in some amorphous sense to be described in another post), while the categorization of the concept is more definite in its intensional and extensional attributes, i.e., its definition and denotation.

The nice thing about this enlarged picture of concepts and category analysis is that it seems to let us do everything we want when we do philosophy. On the one hand, it is descriptively adequate, as it covers a wider range of natural language concepts than the classical model, and hence appeals to our sympathies for the later Wittgenstein. On the other hand, it still accommodates classical categorizations, and so does not throw out the baby with the bathwater or really get in the way of Frege or Russell. And it does all that while still permitting normative conceptual analysis, in the form of ameliorative explications of concepts, where our task is to justify our choices of evaluative criteria, hence doing justice to the long productive journey between Carnap and Kuhn described in Michael Friedman’s Dynamics of Reason.

While that is all nice, I didn’t really start to feel confident about the productivity of this cognitivist perspective on concepts until I started reading philosophy of law. One of the joys of reading work in the common-law tradition is finding a broad understanding that conceptual analysis is a matter of interpretation under some description. Indeed, the role of interpretation in law is a foundational point in Ronald Dworkin, which he used to great rhetorical effect in Law’s Empire. But you can find it also at the margins of HLA Hart’s The Concept of Law, as Hart treats outlying cases of legal systems (e.g., international law during the 1950s) as open to being interpreted as legal systems, and does not dismiss them as definitely being near-miss cases of law. Here, we find writers who know how to do philosophy clearly, usefully, and (for the most part) unpretentiously. The best of them understand the open texture of concepts, but do not see this as reason to abandon logical and scholarly rigor. Instead, it leads them to ask further questions about what counts as rigor in light of the cognitive and jurisprudential tasks set for them. There is a lot to admire about that.

Divergent borderline cases

I’ve been thinking about a previous post, on borderline law, and thought maybe it would be worth elaborating a little on the remarks there, just in case they were too perfunctory.

Almost every core theoretical disagreement in philosophy of law (and, probably, philosophy) comes down to arguments over something kind of like focal meaning. (“A Gettier case either is, or is not, a case of knowledge qua knowledge; let’s fight about it”, etc.) Or, if the idea of focal meaning is too metaphysics-y — because Aristotle thought it had to do with natural kinds, and (mumble mumble, digression digression) — we can instead say that theoretical disagreements about major philosophical concepts are about graded categories and exemplars.

Graded conceptual analysis has at least two benefits. First, it captures the sense in which it is possible for two people to actually disagree about the same concept without radically misunderstanding each other. That is, it disarms Dworkin’s semantic sting. Second, relatedly, it encourages a kind of modesty in one’s critical ambitions, as borderline cases are permitted in discourse but regarded as especially vulnerable to subjective interpretation.

But there are some downsides to doing graded conceptual analysis. For one thing, a lot of the evaluative-critical import gets lost. So, e.g., when you say, “Kafkan law is a borderline case of law”, the implied criticism pales in comparison to a claim like “Kafkan law is not actually law”. Disputes over the former claim, pro vs. con, look to be trivial. Moreover, we cannot rescue that critical import by asserting that some token case is definitely a near-miss, or a pseudo-legal system. For a borderline case is one that is, by its nature, either a near-miss or a peripheral case, and we can’t tell which. If we say, “Kafkan law is a near-miss case of law”, we abandon graded categorization, along with all the salutary features of that sort of conceptual analysis.

The way of bringing the critical sting back into talk about graded concepts requires us to talk about their directionality. Kafkan law is not just a borderline case — it is a borderline case that is (in some suitable sense) drifting away from the central cases of law considered as tacit or explicit verdicts of institutional sources. Put in this way, we remain neutral on the question of whether or not para-legal systems, considered as a class, actually have (or can be foreseen to continue to have) the status of actual legal systems. The worry is localized on the token cases that are at risk of drifting beyond para-legality into pseudo-legality — they may or may not actually be legal systems now, but they are destined to lose the status of law soon enough.

And a reasonable person might worry that many contemporary political-legal systems are headed in that direction, into the twilight of law (to borrow John Gardner’s evocative phrase). But if the argument aims to tell us what law actually is, then the weight of that argument has (apparently) got to go beyond talking about either the endurance or subversion of secondary rules of the legal system. Or, at any rate, it has got to go farther than pointing out that a social system has defective rules of recognition encoded in the practices of the core of the judiciary.

(So, e.g., a disquieting feature of America’s drift from the central cases of legality, it seems to me, is the loss of a sense of what Jules Coleman called identification rules: the loss of both identification rules and secondary rules would be sufficient to make a legal system a divergent case. Though I shall have to leave an argument for that for another post.)

Les Green on borderline law

Here’s Les Green on the importance of unwritten constitutions.

The main difficulty I have with his commentary is this. I can imagine a critic — Green’s dialectical opposite, Maur Red — saying, “Look, okay, so the US is a borderline case of law. Who cares? It’s still law.” If asked to clarify, Red could say: “What’s at stake here is not whether US law is a form of law, but whether or not it is an exemplar, an instance of the focal meaning of law. These are different issues.”

As I imagine the conversation going, I think Red could then chastise Green for overspeaking when he claims that this entails that US law is not “actually” law, because nothing at all follows from concluding that US law is a borderline case of law. For that is apparently no more defensible than saying, e.g., that penguins are not really birds, given that penguins are a borderline case of birds, or that the half-competent doctor is not really a doctor, given that the doctor qua doctor makes no errors.

What Green should say, instead, is that US law is on the verge of being a near-miss case of law, which is a special kind of borderline case. And Red might concede that that would be worrisome. But then, he might conclude, you cannot infer that something is a near-miss case of law just because you deny that it has the qualities of an exemplar case, any more than you can infer “penguins are not birds” from “penguins are not robins or bluejays (etc.)” Only some borderline cases are near-misses. Others are just odd, ironic, or unexpected.

Premier Ford and the rule of law

Recently, a constitutional challenge arose in the province of Ontario, as the newly elected Conservative Premier sought to pass a Bill to interfere with Toronto municipal elections mid-cycle to settle a few scores in his old stomping grounds. Problems arose when the judiciary told him he was violating the Charter. Tensions ratcheted up when he invoked the little-used notwithstanding clause — section 33 — in order to overcome the decision of the Court, resulting in widespread dissent from legal professionals and from the official opposition. Just recently, Ford’s party won an appeal, as a stay was placed on the Court verdict that blocked the Bill.

For now, let’s put aside the merits of the stay or the claimed violation of the Charter. Instead, zoom in on Ford’s reason for opting out of the Canadian constitution. Focus on the rationale: “unelected judges”, filed under apparent threats to democracy. Pin this little offering to a corkboard. Put a light on it. Study under glass.

The invocation of section 33 was argued on ostensibly democratic grounds. Compare specimen to encyclopedia of modern conservative thought. The pattern of argumentation could have been reminiscent of Jeremy Waldron’s majoritarianism, if done thoughtfully. Admittedly, it’s a weird species of argument to us, we the complacent and diffident Canadians. But the world is weird. That’s why we keep reference manuals. Gotta keep an index of all the weirds.

Now turn back to the corkboard. The actual arguments presented were a mixture of Pravda and Powerpoint. Mutant variation. Pull out the red pen.

**

  1. At bottom, a nation of laws is a nation that makes sense, whose stability can be taken for granted. We can only get the first glimmer of a sense of obligation to such institutions when we see their rules as a going concern. The stability of law is primarily achieved through judicial review, an institution where governmental rules are deliberately and carefully interpreted and maintained. The judges are curators and stewards.
  2. When we talk about our favorite form of government — democratic, monarchical, or whatever — we are tacitly making an assumption that the rulers are not being systematically misled. The sovereign’s affirmation of counsel implies they have *informed consent*. So, if a monarch is constantly fooled by a Rasputin, then it is not strictly speaking a monarchy. Similarly, if a population is fed on a diet of lies, then strictly speaking we cannot say there is a democracy. Every form of government depends on faithful expertise.

So, a democratic nation of laws presupposes judicial review in two ways. First, because judicial review produces the stability that makes it possible to talk about true and false claims of legality. Second, because it provides people with informed consent to past and future rules. You can criticize or condemn the operations of the courts for all sorts of reasons. But complaining that they are not elected is not a good reason. Quite the opposite: by challenging them on that ground, you undermine democracy.

Solum’s mixed originalism

Since Lawrence Solum testified before the Senate earlier this year, now is a good time to read up on his work on constitutional originalism.

Solum (2008, “Semantic Originalism”, SSRN) argues that semantic originalism depends on the ‘clause meaning thesis’. This view states that the semantic content of the constitution is given by its conventional semantics and its pragmatics (context, division of linguistic labor, implication, and stipulations). The conventional semantics is established by its original public meaning (what he calls the ‘fixation thesis’).

The puzzle, for me, is in justifying the label of “semantic originalism”. Why semantic?

Solum makes it clear at the outset that he distinguishes between the semantic, applicative, and teleological senses of meaning, and stipulates that he’s only doing the semantic thing (pp. 2-3). And that is fine and well. But then he cashes out the ostensibly semantic project partly in terms of applicative content: e.g., implicatures and stipulations (pp. 5, 54-58). And then he rejects competitor views (like Ronald Dworkin’s interpretivism) for smuggling teleology, consequences, and applications into an ostensibly semantic theory (p. 83).

Obviously this cannot work. If Solum were articulating a coherent view, he should not be calling his own originalist view a ‘semantic theory’. Perhaps he should be calling it a mixed theory of literal meaning, perhaps of an austere kind. After all, the semantics/pragmatics boundary is only of significance to a particular kind of analytic philosopher, one obsessed with compositionality. It isn’t interesting to everyone for all purposes, and maybe isn’t even useful to everyone who cares about literal meaning. But then that would require confronting a central dogma in the philosophy of language.

Probably, the apparent incoherence of the paper is mitigated by the fact that Solum’s “Semantic Originalism” is a draft on SSRN. It’s just a draft, and goodness knows I’ve had my share of bad drafts. But it’s still a shame. I prefer long-form articles, where theorists can spell out the authoritative vision in detail, since that breadth of vision is often sacrificed in published works owing to editorial considerations. And the paper appears to be otherwise considerate, nicely written, and well-informed. It is just hard for me to contain my disappointment in finding out that the entire programme is a house built on sand.

Quick note on Donald Black’s ‘Behavior of Law’

Reading The Behavior of Law by Donald Black, who asserts in Chapter 1 that “Law varies inversely with other [non-governmental] social control”, meaning the two are negatively correlated, and in Chapter 2 that “Law varies directly with resource stratification,” meaning they are positively correlated.

Put those two things together, and I am not sure I have learned anything at all about how law relates to stratification.

Economic ideology as backseat driving

I try to think of debates over governmental policy as being sort of like arguments over how to drive.

When driving, there are lots of complaints you can make as a backseat driver, depending on the conditions of the road, the obstacles ahead, the needs of the people in the car, and so on. If someone in the car is bleeding to death, then it may be reasonable to complain that the car is going too slow; if, on the other hand, the driver is not very skillful or attentive, then it might be reasonable to advise against speeding. On this analogy, reasonable criticism has to be contextual. For instance, only a total weirdo would categorically say, “Hit the brakes!” in every context, unless they’re not in a hurry to go anywhere.

On this analogy, deficit spending is like hitting the gas, and balancing the budget is hitting the brakes. Saying “I’m a fiscal conservative” in politics is like saying you’re a Brakeist in cars. It isn’t a minimally intelligible policy position until you give a little rundown of things going on around you — the places you think we want to go, the needs of the people in the car, and the obstacles ahead, and so on.

Secrecy and imminent threat

According to US law, “top secret” means “information, the unauthorized disclosure of which reasonably could be expected to cause exceptionally grave damage to the national security” [1] (sic). The Espionage Act, going back to Justice Holmes, and as subsequently interpreted by the courts, rebuffs First Amendment arguments through the “imminent threat” standard (previously the “clear and present danger” test) — resonant with the famous analogy that “freedom of speech” does not protect the man who yells “fire!” in a crowded theater.

Yet, reportedly, in recent cases of the Espionage Act, the prosecution has successfully argued that both the actual and expected value of leaked information to the American public are not relevant considerations. [2] By analogy, it does not matter whether the man shouting “fire” in the theater thought he saw a fire (e.g., a hallucination), and it does not matter if a reasonable person in that position would also have seen a fire (e.g., a mass delusion, or hologram). It does not even matter if there was a fire. Evidently, it only matters that yelling “fire” is not the thing to do in theaters because it upsets management.

[1] https://fas.org/irp/ops/ci/franklin0805.pdf
[2] http://nymag.com/…/intel…/2017/12/who-is-reality-winner.html