Notes on J.J. Thomson’s “Normativity”

Reading Thomson’s ‘Normativity’. I have questions.

Consider the following statement:

[1.] I prefer watching ‘Game of Thrones’ over watching ‘Penny Dreadful’.

Many preferences originate in intuitions, and in that sense, begin their life as [1] or something like it. That’s fine, sort of, but there are better ways of talking about my preference. e.g.:

[2.] I prefer watching ‘Game of Thrones’ over watching ‘Penny Dreadful’, insofar as these are both good dark fantasy shows.

[3.] I prefer… insofar as these are the only good things on television right now.

[4.] I prefer… insofar as they are good dark fantasy shows considering what is on right now.

[5.] I prefer… insofar as their opening themes are good to dance to.

(Each of these involves a different kind of standard for evaluation, ranked roughly from least surprising to most surprising.) [2-5] are reflective preferences, not just intuited ones. Reflective preferences differ from intuited preferences in that they are tagged with some evaluative standard by which one thing is to be ranked in relation to another. That standard is laid out after the phrase, ‘insofar as’. Which is more useful for a practical theory of decision-making, intuitive or reflective preferences? Well, taken at face value, the reflective preferences are more informative and in a sense more rational than [1].

But — not so fast: [2-5] are also potentially inconsistent with each other. So, e.g., maybe I dislike dark fantasy shows, but hate everything else on TV: in that case, [3] would be a false statement, but [2] and [4] would be true ones. And even if [2-4] were true, [5] could be false, just in case Penny Dreadful’s opening theme is actually better to dance to than Game of Thrones’s.

One way to resolve the question in favor of the reflective mode of articulating preferences is to say that each time you introduce a new standard of evaluation (i.e., whatever follows the “…insofar as…”), you’re actually making a brand new list, that is itself eventually broken down into intuited preferences. e.g.:

GOOD TO DANCE TO LIST

1. Penny Dreadful

2. Game of Thrones

3. Shakira

GOOD DARK FANTASY SHOWS

1. Game of Thrones

2. Penny Dreadful

3. True Blood

And then you can put these very lists on a global list of preferences that looks like this:

META-LIST

1. LIST OF GOOD DARK FANTASY SHOWS

2. LIST OF GOOD TO DANCE TO SHOWS

3. LIST OF USEFUL LIFE SKILLS

At that point, you will seemingly have given up on the intuited sense of [1], since Game of Thrones doesn’t just exist in some amorphous BETTER THAN relation to Penny Dreadful. Instead you’ll have replaced the intuited preference with something more rational and informative. Indeed, at this point, once we have all these standards of evaluation under our belt, it is tempting to say that [1] is not a rational claim about the state of my preferences at all. But that’s too quick, because [1] could just be an elliptical version of one of [2-5]. And so, one might say, it is only correct to say that [1] is not a rational claim about the state of my preferences so long as there is no rational formula for picking out a more precise context of evaluation.

So, one might say that there is always a strict default context, in which we must interpret sentences like [1] in their least surprising form, e.g., [2]. Will this work? I have my doubts. That is, intuitively, I doubt that charity alone will help us to identify some context-invariant mechanical *formula* that tells us what the most rational default interpretation should look like. I suspect that, at best, when confronted with intuitive preference-statements like [1], we only have *defeasible strategies for interpreting* them in terms of one or more of [2-5].
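To make the lists-of-lists picture concrete, here is a toy sketch in Python. The shows, rankings, and resolution rule are all my own illustrations, not anything in Thomson; the point is only to show how standard-tagged lists, a meta-list, and a defeasible default strategy might interact:

```python
# Each evaluative standard ("...insofar as...") gets its own ranked list;
# a meta-list then ranks the standards themselves.
lists = {
    "good dark fantasy shows": ["Game of Thrones", "Penny Dreadful", "True Blood"],
    "good to dance to": ["Penny Dreadful", "Game of Thrones", "Shakira"],
}
meta_list = ["good dark fantasy shows", "good to dance to"]

def prefers(a, b, standard):
    """True if a outranks b under the given evaluative standard."""
    ranking = lists[standard]
    return ranking.index(a) < ranking.index(b)

def default_prefers(a, b):
    """A defeasible default: consult the highest-ranked standard on the
    meta-list that happens to rank both items. Nothing guarantees this
    is the uniquely rational way to disambiguate a bare claim like [1]."""
    for standard in meta_list:
        ranking = lists[standard]
        if a in ranking and b in ranking:
            return prefers(a, b, standard)
    return None  # no shared standard: [1] stays ambiguous

print(prefers("Game of Thrones", "Penny Dreadful", "good dark fantasy shows"))  # True
print(prefers("Game of Thrones", "Penny Dreadful", "good to dance to"))         # False
print(default_prefers("Game of Thrones", "Penny Dreadful"))                     # True
```

On this toy model, [2] and [5] can disagree without contradiction, since each consults a different list; a bare [1] only gets a truth value once some strategy picks a list.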

Moving on to Chapter IV: “Suppose a person asks us, “Was St. Francis better than chocolate?” What can he mean? In what respect ‘better than’ does he mean to be asking whether St. Francis was better in that respect than chocolate? If he says, “No, no, what I am asking is not for a certain respect R whether St. Francis was better in respect R than chocolate. What I am asking is just whether St. Francis was (simply) better than chocolate,” then he hasn’t asked us any question, so it is no wonder we can’t answer it.” She suggests that there is a bifurcation between attributing simple superiority to something and attributing superiority in some respect.

I do not know why these should be incompatible modes of evaluation. Saying that something is simply better than another thing does not seem to imply that there is no respect R in which an evaluation can be made. There are obviously salient respects in which one might compare the value of chocolate and St. Francis (say, when comparing the value of vulgar hedonism vs. asceticism). But when one says that one is simply better than the other, one isn’t wiping these respects off the map, denying them up and down and sideways. Instead, the claim seems to imply that it is not rationally necessary to specify the respect R in the course of answering the question — that the respects are being left implicit and uncommitted. I think what she means to say is that it is meaningless to ask, “Is chocolate worse than St. Francis, given that there is no respect in which they might be compared?” But that’s kind of obvious.

Sophiboles: or, cases of cooperative misleading

I am still thinking about misleading and truth after an interesting and thought-provoking talk by Jennifer Saul last week. Many of my intuitions have gained form and structure from her presentation. In it, she argued that misleading and lying are not (all other things equal) morally different. Importantly, Saul suggested that misleading can be different from lying in one special subset of cases — effectively, in those contexts where the listener can be reasonably expected to have special duties to scrutinize the testimony before them, owing to the adversariality of the context and the capacity of the listener to engage in critical inquiry.

I have long had reservations about academics and the subject of truth-telling. So, here’s an essay from 2006: (http://www.butterfliesandwheels.org/…/who-needs-sophistry-…/) In it, I argued that the public assertion of certain kinds of exaggeration is sometimes both faultless and laudable. Over the past decade I have had plenty of occasion to have that thesis challenged, but am generally unpersuaded by those challenges.

In that essay I argue that philosophers and scientists frequently engage in a kind of wise exaggeration, which I have mentally given the label of “sophiboles”. That is, we faultlessly assert things in a black-and-white bivalent fashion, when the closest justified belief is much more complex. For example: according to his critics, Galileo was guilty of asserting a sophibole when he decided to cast aside fictionalist and probabilist readings of the evidence; and for what it’s worth, I’m inclined to say that he is guilty of doing right. (Anyway, this is my simplistic conception of the history, and reminds me I really ought to read Alice Dreger’s 2015 book, “Galileo’s Middle Finger”. But for now it’ll suffice as a toy case.)

Are sophiboles cases of misleading? Much depends on how you define “misleading”. To me, “misleading” involves distracting someone from apprehending a true proposition that is worth caring about in a conversational context, so as to cue belief in a falsehood or divert attention from a truth, without explicitly asserting a falsehood. (It is hard not to include reference to what conversation partners care about if we are to assess them in terms of the cooperative maxims.)

Unlike most cases of misleading, sophiboles constructively focus our attention upon *true* beliefs worth caring about, and are not directed towards the malicious creation of false beliefs. For Galileo, the aim was to direct us to the truth of heliocentrism as a model of the solar system, not to inculcate a false belief about the solar system. Suffice it to say, Galileo did not lie in any of this; he did not assert a falsehood. Moreover, his intention was to lead us to a truth about the world, not to lead us to a falsehood.

But that will not save his sophibole from being a case of misleading, since people in a cooperative conversation can be concerned with different things, and they can disagree about the truths worth caring about in such contexts, so long as those cross-purposes are jointly acknowledged. So, the Church — wanting Galileo to tone down his rhetoric — encouraged him to adopt a probabilist or fictionalist vernacular. Those little qualifiers (i.e., “In all probability, p…”) mattered to them. For them, Galileo was attempting to mislead about the epistemic, or second-order, status of his claims. Galileo’s actual heliocentric claims were true, but (according to his critics) the realist statement of his claims misled people about the form of justification, and in that sense distracted people from an important truth about the limits of our knowledge. Galileo was misleading about something worth caring about.

To be sure, Galileo’s highly politicized insistence on realist rhetoric soon evoked an adversarial context. And, FWIW, I would even argue that he was right to be adversarial: while neither party departed from intellectual good faith, the Church’s epistemic concerns are simply not as worth caring about as the realist ones. (There’s that famous middle finger of his.)

But that’s a historical contingency. My point is that we should be able to see the two parties continuing to accuse each other of misleading even if they had been able to maintain a cooperative dialogue. And so misleading, at least in the form of sophiboles, is generally not so bad as lying.

Against warranted deference [tpm]

[Originally posted at Talking Philosophy Magazine blog]

There are two popular ways of responding to criticism you dislike. One is to smile serenely and say, “You’re entitled to your opinion.” This utterance often produces the sense that all parties are faultless in their disagreement, and that no-one is rationally obligated to defer to anyone else. Another is to deny that your critic has any entitlement to their opinion, since they are in the wrong social position to make a justifiable assertion about some matter of fact (either because they occupy a position of relative privilege or a position of relative deprivation). Strong versions of this approach teach us that it is rational to defer to people just by looking at their social position.

A third, more plausible view is that if we want to make for productive debate, then we should talk about what it generally takes to get along. e.g., perhaps we should obey norms of respect and kindness towards each other, even when we disagree (else run the risk of descending into babel). But even this can’t be right, since mere disagreement with someone when it comes to their vital projects (that is, the things they identify with) will always count as disrespect. If someone has adopted a belief in young earth creationism as a vital life project, and I offer a decisive challenge to that view, and they do not regard this as disrespectful, then they have not understood what has been said. (I cannot say “I disrespect your belief, but respect you,” when I full well understand that the belief is something that the person has adopted as a volitional necessity.) Hence, while it is good to be kind and respectful, and I may even have a peculiar kind of duty to be kind and respectful to the extent that it is within my powers and purposes, people who have adopted vital life projects of that kind have no right to demand respect from me insofar as I offer a challenge to their beliefs, and hence to them as practical agents. Hence the norm of respectfulness can’t guide us, since it is unreasonable to defer in such cases. At least on a surface level, it looks like we have to have a theory of warranted deference in order to explain how that is.

For what it’s worth, I have experience with combative politics, both in the form of the politics of a radically democratic academic union and as a participant/observer of the online skeptic community. These experiences have given me ample — and sometimes, intimate — reasons to believe that these norms have the effect of trivializing debate. I think that productive debate on serious issues is an important thing, and when done right it is both the friend and ally of morality and equity (albeit almost always the enemy of expedient decision making, as reflected amusingly in the title of Francesca Polletta’s monograph, “Freedom Is an Endless Meeting”).

***

A few months ago, one of TPM’s bloggers developed a theory which he referred to as a theory of warranted deference. The aim of the theory was to state the general conditions when we are justified in believing that we are rationally obligated to defer to others. The central point of the original article was to argue that our rational norms ought to be governed by the principle of dignity. By the principle of dignity, the author meant the following Kant-inspired maxim: “Always treat your interlocutor as being worthy of consideration, and expect to be treated in the same way.” One might add that treating someone as worthy of consideration also entails treating them as worthy of compassion.

Without belaboring the details, the upshot of the theory is that you are rational in believing that you have a [general] obligation to defer to the opinions of a group as a whole only when you’re trying to understand the terms of their vocabulary. And one important term that the group gets to define for themselves is the membership of the group itself. According to the theory, you have to defer to the group as a whole when you’re trying to figure out who counts as an insider.

Here’s an example. Suppose Bob is a non-physicist. Bob understands the word ‘physicist’ to mean someone who has a positive relationship to the study of physics. Now Bob is introduced to Joe, who is a brilliant amateur who does physics, and who self-identifies as a physicist. The question is: what is Joe, and how can Bob tell? Well, the approach from dignity tells us that Bob is not well-placed to say that Joe is a physicist. Instead, the theory tells us that Bob should defer to the community of physicists to decide what Joe is and what to call him.

***

I wrote that essay. In subsequent months, a colleague suggested to me that the theory is subject to a mature and crippling challenge. It now seems to me that the reach of the theory has exceeded its grasp.

If you assume, as I did, that any theory of warranted deference must also provide guidance on when you ought to defer on moral grounds, then the theory forces you to consider the dignity of immoral persons. e.g., if a restaurant refuses to serve potential customers who are of a certain ethnicity, then the theory says that the potential customer is rationally obligated to defer to the will of the restaurant.

But actually, it seems more plausible to say that nobody is rationally obligated to defer to the restaurant, for the following reason. If there is some sense in which you are compelled to defer in that situation, it is only because you’re compelled to do so on non-moral grounds. In that situation, it is obvious that there are no moral obligations to defer to the restaurant owners on the relevant issue; if anything, there are moral obligations to defy them on that issue, and one cannot defer to someone on an issue while one is in a state of defiance on that very issue. Finally, if you think that moral duties provide overriding reasons for action in this case, then any deference to the restaurant is unwarranted.

Unfortunately, the principle of dignity tells you the opposite. Hence, the principle of dignity can be irrational. And hence, it is not a good candidate as a general theory of rational deference.

So perhaps, as some commenters (e.g., Ron Murphy) have suggested, the whole project is misguided.

It now occurs to me that instead of trying to lay out the conditions where people are warranted to defer, I ought to have been thinking about the conditions under which it is unwarranted to do so. It seems that the cases I find most interesting all deal with unwarranted deference: we are not warranted in deferring to Joe about who counts as a physicist, and the Young Earth Creationist is not warranted in demanding that I defer to them about Creationism.

On warranted deference [tpm]

[Originally posted at Talking Philosophy Magazine blog]

By their nature, skeptics have a hard time deferring. And they should. One of the classic (currently undervalued) selling points for any course in critical thinking is that it grants people an ability to ratchet down the level of trust that they place in others when it is necessary. However, conservative opinion to the contrary, critical thinkers like trust just fine. We only ask that our trust should be grounded in good reasons in cooperative conversation.

Here are two maxims related to deference that are consistent with critical thinking:

(a) The meanings of words are fixed by authorities who are well informed about a subject. e.g., we defer to the international community of astronomers to tell us what a particular nebula is called, and we defer to them if they should like to redefine their terms of art. On matters of definition, we owe authorities our deference.

(b) An individual’s membership in a group grants them prima facie authority to speak truthfully about the affairs of that group. e.g., if I am speaking to physicists about their experiences as physicists, then all other things equal I will provisionally assume that they are better placed to know about their subject than I am. The physicist may, for all I know, be a complete buffoon. (S)he is a physicist all the same.

These norms strike me as overwhelmingly reasonable. Both follow directly from the assumption that your interlocutor, whoever they are, deserves to be treated with dignity. People should be respected as much as is possible without doing violence to the facts.

Here is what I take to be a banal conclusion:

(c) Members of group (x) ought to defer to group (y) on matters relating to how group (y) is defined. For example, if a philosopher of science tells the scientist what counts as science, then it is time to stop trusting the philosopher.

It should be clear enough that (c) is a direct consequence of (a) and (b).

Here is a claim which is a logical instantiation of (c):

(c’) Members of privileged groups ought to defer to marginalized groups on matters relating to how the marginalized group is defined. For example, if a man gives a woman a lecture on what counts as being womanly, then the man is acting in an absurd way, and the conversation ought to end there.

As it turns out, (c’) is either a controversial claim, or is a claim that is so close to being controversial that it will reliably provoke ire from some sorts of people.

But it should not be controversial when it is understood properly. The trouble, I think, is that (c) and (c’) are close to a different kind of claim, which is genuinely specious:

(d) Members of group (x) ought to defer to group (y) on any matters relating to group (y).

Plainly, (d) is a crap standard. I ought to trust a female doctor to tell me more about my health as a man than I trust myself, or my male barber. The difference between (d) and (c) is that (c) is about definitions (‘what counts as so-and-so’), while (d) is about any old claim whatsoever. Dignity has a central place when it comes to a discussion about what counts as what — but in a discussion of bare facts, there is no substitute for knowledge.

**

Hopefully you’ve agreed with me so far. If so, then maybe I can convince you of a few more things. There are ways that people (including skeptics) are liable to screw up the conversation about warranted deference.

First, unless you are in command of a small army, it is pointless to command silence from people who distrust you. e.g., if Bob thinks I am a complete fool, then while I may say that “Bob should shut up and listen”, I should not expect Bob to listen. I might as well give orders to my cat for all the good it will do.

Second, if somebody is not listening to you, that does not necessarily mean you are being silenced. It only means you are not in a position to have a cooperative conversation with them at that time. To be silenced is to be prevented from speaking, or to be prevented from being heard on the basis of perverse non-reasons (e.g., prejudice and stereotyping).

Third, while intentionally shutting your ears to somebody else is not in itself silencing, it is not characteristically rational either. The strongest dogmatists are the quietest ones. So a critical thinker should still listen to their interlocutors whenever practically possible (except, of course, in cases where they face irrational abuse from the speaker).

Fourth, it is a bad move to reject the idea that other people have any claim to authority, when you are only licensed to point out that their authority is narrowly circumscribed. e.g., if Joe has a degree in organic chemistry, and he makes claims about zoology, then it is fine to point out the limits of his credentials, and not fine to say “Joe has no expertise”. And if Petra is a member of a marginalized group, it is no good to say that Petra has no knowledge of what counts as being part of that group. As a critical thinker, it is better to defer.

Richard Rorty on truth, deference, and assertion

Rorty, Richard. “Putnam and the Relativist Menace,” Journal of Philosophy, vol. XC, no. 9 (1993), pp. 443-461.

One of the more frenetic topics in contemporary epistemology is warranted assertability — i.e., what it is rational to put forward as an assertion. Much of the issue depends on what the whole point of an assertoric speech act is supposed to be, and whether or not the point of assertion can be articulated in terms of constitutive norms. Some folks, like Timothy Williamson, argue that you are only warranted in asserting things you know, since the whole point of assertion is to transfer knowledge. If you assert something you don’t know, then the listener is entitled to resent the assertion, and (presumably) it is also rational for you to be ashamed of having made the assertion. Others argue that this is a very high bar, and that it makes more sense to say that you might be warranted in asserting a merely reasonable belief. If you assert something as true, without actually knowing it is true, then it might not be rational for you to be ashamed of yourself, nor does it follow that others are entitled to resent you for what you’ve said.

What does Richard Rorty think? Rorty argues that you are only warranted in asserting something so long as what you say is acceptable in a linguistic community. “So all ‘a fact of the matter about whether p is a warranted assertion’ can mean is ‘a fact of the matter about our ability to feel solidarity with a community that views p as warranted.’” (pp. 452-453) Rorty argues that the conditions under which an assertion is warranted are relative to how we feel about the views that would be held by an idealized version of our own community. That is the sense in which he’s a relativist. What is assertable in one speech community might be totally verboten in another. As far as Rorty is concerned, assertability is a concept that belongs to sociology and not epistemology.

For Rorty, the meaning of “our community” or “our society” is determined by common ground. For example, he uses the term “wet liberalism” to describe the community that Rorty and Putnam share, as if the fact that they both belonged to the liberal tradition were what placed them in the same community. (p. 452) (I don’t think that it’s necessary for us to make reference to political ideology when we talk about “our linguistic community”, but it’s at least one candidate.) Whatever criterion you use to pick out the relevant linguistic community, there is a sense that you have got to be in solidarity with that community. (pp. 453-454) The upshot: for the purposes of making a rational assertion, you’ve first got to assume you’re part of a common trust.

Now for the weird, relativist twist: Rorty thinks truth is all about deference to the idealized community of future knowers. If you say, “Rutabagas are red”, then that claim is true just in case a future idealized version of yourself would say it too. Insofar as Rorty is concerned with the notion of truth, he thinks we are interested in whether or not an idealized future society of knowers would affirm or deny what we’ve said. (p. 450) Truth is just a vague term of approbation, synonymous with trust; and, evidently, trust is the ultimate truth-maker.

‘Newroz’ Venn diagram: pancake edition

Some of our colleagues in BC — Khalegh Mamakani and Frank Ruskey at the University of Victoria — have discovered the first simple symmetric 11-set Venn diagram (Newroz). It is pretty.

It is quite a busy diagram, though. I thought it would be fun to see how it looked if it were represented in 3D.

As New Scientist shows in the above diagram, here is what an individual set — one of the rose’s “petals”, if you will — looks like:

Here’s what I came up with by fiddling with GIMP:

Looks like pancakes. Delicious, semi-transparent blue pancakes.
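For anyone who wants to play along at home without GIMP, here is a rough matplotlib sketch of the pancake effect. It draws eleven rotated, semi-transparent blue ellipses rather than Newroz’s actual (far more intricate) curves, so it only approximates the look:

```python
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.patches import Ellipse

# Eleven translucent "pancakes" arranged with 11-fold rotational symmetry.
# These are plain ellipses, not the true Newroz curves.
fig, ax = plt.subplots(figsize=(6, 6))
n = 11
for k in range(n):
    theta = 2 * np.pi * k / n
    e = Ellipse(xy=(0.25 * np.cos(theta), 0.25 * np.sin(theta)),
                width=1.6, height=0.7, angle=np.degrees(theta),
                facecolor="tab:blue", edgecolor="navy", alpha=0.15, lw=0.5)
    ax.add_patch(e)
ax.set_xlim(-1.3, 1.3)
ax.set_ylim(-1.3, 1.3)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```

Stacking low-alpha fills is what produces the pancake shading: regions where more "petals" overlap come out darker.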

“Subjectivities” as the coordination of affect in collective intentionality

Whenever somebody uses the word “subjectivities”, I get the willies. Let me try to say why.

John Protevi quotes Mark Fisher on the Olympics:

Cynicism is just about the only rational response to the doublethink of the McDonalds and Coca Cola sponsorship…. As Paolo Virno argues, cynicism is now an attitude that is simply a requirement for late capitalist subjectivity, a way of navigating a world governed by rules that are groundless and arbitrary. But as Virno also argues, “It is no accident … that the most brazen cynicism is accompanied by unrestrained sentimentalism.” Once the Games started, cynicism could be replaced by a managed sentimentality…. Affective exploitation is crucial to late capitalism.

I’d like to consider the statement:

(1) Cynicism is now an attitude that is simply a requirement for late capitalist subjectivity.

Here are a few reasons why I have trouble figuring out the descriptive or critical point of statements that take this form. Since this runs to 1000 words, I’ve bolded the main conclusions, in case you’re in a “too long; didn’t read” kind of mood.

Although I suppose there is something we might call a “late capitalist subjectivity” which applies to somebody somewhere, it can only come out as an obvious truism that this subjectivity is “cynical” so long as we are only referring to media matters and/or urbane cultures. (Proof: any reasonably attentive person will agree that hipsters are ironic dopes and the news media is a bought industry.) Be that as it may, it is equally true that cynicism only works because it is successful in exploiting the optimism of the crowd.

Now let’s consider this statement:

(2) Optimism is now an attitude that is simply a requirement for late capitalist subjectivity.

Incidentally, this statement looks as though it is true, given that British police engage in — for lack of a better phrase — smile profiling.

What do we say about (2)? The first thing I’d like to say is that optimism is not a form of cynicism, or vice-versa — they’re entirely different affective orientations. (I do not know how to make sense of political affect except as playing a part in certain orientations towards politics. And it defeats the descriptive point of talking about ‘orientations’ if we think orientations are somehow overlapping in the mind of a single person.) Since optimism is a form of non-cynicism, you seemingly have a contradiction:

(3) Cynicism and non-cynicism are both attitudes that are requirements for late capitalist subjectivity.

But actually, the contradictoriness is merely apparent, not real. By analogy, one may say “There is an indeterminately long list of natural numbers written on my card, whose values are integers from 1 to 5; and one of those numbers is 1”, while also saying that “One of those numbers is 2”.

(Digression: so — barring dialetheia — what would it take to refute (1)? Well, working backwards from our analogy, what one cannot say is: ‘“The only number on my card is 1” and “The only number on my card is 2”’, since 2 is an instance of not-1. For the very same reason, one cannot say ‘“The only number on my card is 1” and “The only number on my card is either 2, or 3, or 4, or 5”’, since that is effectively the fullest expression of the negation of the claim that the number is only 1. Now, suppose that there is a finite list of politically affective orientations, which is as follows: {Optimism, Pessimism, Cynicism, Realism}. Then, one cannot say that ‘“Cynicism is an attitude that is uniquely required for late capitalist subjectivity” and “One of these: (Optimism or Pessimism or Realism or None) is an attitude that is uniquely required for late capitalist subjectivity”.’)
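For the formally inclined, the digression’s bookkeeping can be put in a toy Python sketch. The encoding of “required” versus “uniquely required” claims is my own, purely for illustration, and is not anything in Virno or Fisher:

```python
# A claim is a pair (required_attitudes, exclusive):
#   exclusive=False: "these attitudes are required" (perhaps among others)
#   exclusive=True:  "these are the ONLY required attitudes"

def consistent(claim_a, claim_b):
    """Two claims clash only if one is exclusive and the other demands
    an attitude outside the exclusive set."""
    req_a, only_a = claim_a
    req_b, only_b = claim_b
    if only_a and not req_b <= req_a:
        return False
    if only_b and not req_a <= req_b:
        return False
    return True

c1 = ({"Cynicism"}, False)       # (1): cynicism is required
c2 = ({"Optimism"}, False)       # (2): optimism is required
c1_only = ({"Cynicism"}, True)   # "cynicism is uniquely required"

print(consistent(c1, c2))        # True: (2) is a friendly amendment to (1)
print(consistent(c1_only, c2))   # False: exclusivity rules optimism out
```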

So the second thing I’d like to say is that it is not a refutation of (1) for us to assent to (2); if anything, (2) is a friendly amendment to (1).

Hence, a more nuanced statement would be:

(4) “Cynicism is an attitude that is necessary for the managers of late capitalism, just as optimism is required for those who are exploited by it.”

However, the added qualification almost completely changes the subject of what is literally said in (1). We’re no longer talking about mere subjectivities (to use that awkward phrase); we’re talking about a kind of coordination of affect. This involves speaking at a level of description that is potentially more sophisticated than this elliptical talk of ‘subjectivities’.

Interestingly, it is on these grounds that I agree with John that the Hunger Games analogy falls apart entirely in this case. The districts of Panem are not optimists, but pessimists; the Olympic spectators are treated as optimists. Presumably, the people in the Capitol were optimists, but we never really met any of them in the film — most were cynics. So it looks like a different dynamic — a dynamic that is as different as that between, say, Brave New World and 1984. That’s why it’s pretty misleading to talk about subjectivities. It’s just not a truthful idiom; it obscures more than it reveals.

~

I suppose it could be countered that the point of the talk of “subjectivities” is that it tells us something about what we ought to do, or what we ought to feel. So, perhaps general statements that take the form of (1) are phrased in a general way so that they might imply something general about the culture, even while only really strictly speaking about the affect of the media elites. But, first of all, this would be absurd: see the digression above. If you think absurd beliefs are generally not helpful to the cause of promoting freedom, then this will not be the kind of thing you can say.

And, second, it isn’t immediately obvious to me what the critical or emancipatory point is in making sweeping claims of that sort. I want to know what I’m supposed to do with this information, that “cynicism is now an attitude that is simply a requirement for late capitalist subjectivity”. In order for us to be motivated to make these sweeping statements, there should be some tangible rhetorical payoff. And I just don’t know what that payoff is supposed to be. e.g., is the implication that capitalism would be substantially better if our overlords were more realistic? Or should the lesson be: if the powerful in society were more realistic, they wouldn’t be overlords at all? Tantalizing possibilities, both, and I don’t know if either is true. But blanket talk about ‘subjectivities’ doesn’t exactly get my sociological imagination fired up.

Using Gephi to model philosophical networks in medieval Christendom

I was playing around with the Gephi beta graph designer, and thought it would be interesting to map out the network of Christian philosophers during the period 1000-1200.

Emphasis on betweenness centrality. Betweenness centrality is a measure of a node’s centrality in a network, equal to the number of shortest paths from all vertices to all others that pass through that node. Lanfranc is hardly a well-known figure in philosophy, but he looms conspicuously large here.


Emphasis on degree of linkage. Nodes are emphasized depending on the number of immediate neighbors they have.
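If you want to replicate these two measures outside Gephi, here is a minimal sketch using networkx. The edges below are invented placeholders for illustration (one link is frankly hypothetical), not Collins’s actual data:

```python
import networkx as nx

# A tiny invented fragment of a medieval network -- placeholder edges only.
G = nx.Graph()
G.add_edges_from([
    ("Lanfranc", "Anselm"),
    ("Lanfranc", "Berengar of Tours"),
    ("Anselm", "Abelard"),            # hypothetical link, for illustration
    ("Abelard", "Heloise"),
    ("Abelard", "Bernard of Clairvaux"),
])

# Betweenness centrality: how many shortest paths pass through each node
# (networkx normalizes by the number of node pairs).
print(nx.betweenness_centrality(G))

# Degree: the number of immediate neighbors, as in the second rendering.
print(dict(G.degree()))
```

Even on a toy fragment like this, the two measures come apart: a node can sit on many shortest paths (high betweenness) without having many neighbors (high degree), which is roughly why a broker figure like Lanfranc can loom large in one rendering and not the other.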

Adapted from Randall Collins’s “The Sociology of Philosophies” (p. 464).

That horrible crowd [B&W]


The hinterland of Fantasy stretches wide, giving the reader an ample playground to roam and discover the richness of its territory. But no matter how fantastic a story gets, the fantasy reader will always encounter themes that are oddly familiar. When a character in the classic horror novel “The Shining” goes on a violent rampage after drinking too much alcohol, the reader gets a pretty lesson about the villainy of alcoholism. When you learn that the classic film “Invasion of the Body Snatchers” was written during the Cold War, it takes little imagination to picture collectivistic Soviets prancing down the main street of Des Moines.

Literary fantasy can’t help but affect the ways that we paint the human picture. And when we look for lessons on the human predicament in literature, we find that the two worlds — of fantasy and reality — are actually not very far apart. Graham Swift had it right when (in the novel “Waterland”) his protagonist defined “man” as a story-telling animal. The philosopher Donald Davidson got it right when he insisted that any successful communication presupposes that we already share a wide swath of facts. Reality soaks into narratives because intelligibility presupposes familiarity. The utterly strange, the completely alien, is not a part of a story. It is the unmaking of stories.

I am most interested in the crossroads where fantasy and reality meet. We can call this crossroads Philosophy: for philosophy has always concerned itself with the reality behind things, and (like the best literary critics) goes about explaining that reality using logic and reasoned argument. In this post I’m going to focus on the genre vaguely known as ‘dark fantasy’, where supernatural happenings occur, sometimes sexual, sometimes grotesque, and always, always strange. It is fitting, then, that we should spend time talking about the weirdest kind of people: strangers in a crowd.

Many cultures, especially Anglo-American ones, appreciate the value of personal space. People in an elevator will try to stand as far apart from each other as they can. When given a chance, men using a restroom will pick the urinal on the far side and never the one in the middle. If you make eye contact with other adults in a crowd while waiting for the bus, you should expect to receive an embarrassed look, a quick glance in the other direction, and a bit of shoe-gazing.

In these cultures, most people learn to avoid eye contact as they grow older. As children, we’re told over and over that we should resist contact with strangers: by our parents, by our teachers, and by creepy public service announcements in between our Saturday morning cartoons. The distrust of others is trained into us for our own protection.

Ignoring the risk of ethnocentrism, Elias Canetti thinks that we have an irresistible impulse to be afraid of strangers. From Crowds and Power: “There is nothing that man fears more than the touch of the unknown… Man always tends to avoid physical contact with anything strange. In the dark, the fear of an unexpected touch can amount to panic… All the distances which men create round themselves are dictated by this fear. They shut themselves in houses which no one may enter, and only there feel some measure of security…” So, if Canetti is right, then people fear being touched by strangers because they fear the force of the unknown. Strangers are unknowns, and that is why we avoid them.

Immanuel Kant agrees that we ought to fear the unknown. “[Knowledge] is the island of truth, surrounded by a wide and stormy ocean, the region of illusion, where many a fog-bank, many an iceberg, seems to the mariner on his voyage of discovery to be a new country… But before venturing upon this sea… it will not be without advantage to cast our eyes upon the chart of the land that we are about to leave, and to ask ourselves, firstly, whether we cannot rest perfectly contented with it”. The imagery he uses in the passage is suggestive – the unknown is a vast space that is full of icebergs that we might crash into. (Granted, it’s hard to imagine a stoic figure like Kant being really afraid of anything — in a contest, one imagines that the icebergs would be afraid of him. Still.)

At first blush, it seems that our lesson from Canetti and Kant is to be unnerved by the unknown. But that would be the wrong conclusion to draw. After all, people engage in crowding all the time, at any rock concert or soccer game: they participate, they cheer, they bump into each other and give noogies and rub shoulders and all the rest of it. In the context of public events, strangers become familiars. The terror of the crowd turns into a feeling of the sublime.

So there is an intimacy to crowding that, from a distance, is treated with hostility and terror; but from close-up, as comfort. Perhaps the transformation between terror and the sublime is most explicitly expressed in the grim short story “The Man of the Crowd”, by Edgar Allan Poe. In it, the reader is confronted with the character of a vacant old man who is driven to be in the presence of crowds at all times, who suffers when alone, and who constantly roams the London streets, trying to find the companionship of the throng. Leave it to Poe to have written a dark fantasy about every possible thing we might ever be afraid of.

But why does this happen? Canetti explains: “It is only in a crowd that man can become free of this fear of being touched. That is the only situation in which the fear changes into its opposite… The reversal of fear of being touched belongs to the nature of crowds. The feeling of relief is most striking where the density of the crowd is greatest” (15-16). Hence, it is not just that people start to trust each other through familiarity: they are there to achieve a common purpose, a common elation, which Canetti calls “the discharge”. The bigger the crowd, the more “striking” the discharge.

Hence, some crowds tend to desire expansion, like a virus infecting a population. This kind of crowd, the open crowd, “wants to seize everyone within reach… it does not recognize houses, doors or locks and those who shut themselves in are suspect…” (17) If we put the accent on these passages, then it sounds as though Canetti would want us to think of crowds as being like The Blob, rolling across the urban landscape, absorbing hapless citizens along the way.

***

We can have a lot of fun disassembling Canetti’s attitude. As mentioned at the outset, Canetti’s work seems like it’s powered by a combination of ethnocentrism and anti-populism. Canetti’s argument also appears to be a product of its times. Contemporary writers like James Surowiecki (author of The Wisdom of Crowds) have come to the defence of crowds by arguing that aggregate opinion is statistically much more reliable than the lone voice. Surowiecki uses the term “crowd” in an idiosyncratic way, to describe the decisions of a population of individuals deciding independently.
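Surowiecki’s statistical point is easy to see in a toy simulation; the numbers below are arbitrary, chosen only to exhibit the aggregation effect under his independence condition:

```python
import random

random.seed(0)
truth = 100.0       # the quantity being estimated, e.g., beans in a jar
n = 1000

# Independent, noisy, unbiased guesses -- Surowiecki's "crowd" in his
# idiosyncratic sense of a population deciding independently.
guesses = [random.gauss(truth, 25.0) for _ in range(n)]

crowd_estimate = sum(guesses) / n
mean_individual_error = sum(abs(g - truth) for g in guesses) / n

print(f"crowd error:           {abs(crowd_estimate - truth):.2f}")
print(f"mean individual error: {mean_individual_error:.2f}")
# The averaged guess lands far closer to the truth than a typical lone guess.
```

The independence condition is doing all the work here, which is precisely why Surowiecki’s “crowds” are nothing like Canetti’s pressing, mutually entangled throngs.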

First things first: if we think that Canetti was trying to explain to us how the nastiness of crowds is an important part of the human picture, then we’d pretty much have to say that he was wrong. The image of a horrible crowd as being zombie-like, of being like The Blob, is unique to certain neurotic Anglo-American populations. We have no reason to think that it is part of the human condition. But once you put aside some of his more illustrative quotes, and look at how his account actually works, you find that Canetti’s account of social crowding is not horrified by all crowds. Strictly speaking, his account is just as consistent with the banal and ordinary social interactions of groups — flash mobs (“quick crowds”), religious sermons (“slow crowds”), union strikes (“prohibition crowds”), bureaucratic institutions (the “closed crowd”), and parades (the “open crowd”). So, putting aside some of his darkly fantastic rhetoric, it really might tell us something interesting about the human picture.

Second, on the subject of ethnocentrism: there’s no denying that Canetti, the author of Auto-Da-Fé, had anti-populism on his mind. You can think of him as ethnocentric in the sense that he was a product of his times. But this wasn’t a fault that you can attribute to his overall outlook, since people with a different point of view were also ethnocentric. Consider the musings of the horror writer H.P. Lovecraft: “The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.” This is a nice passage, because it perverts Kantian language to deliver an opposite sentiment: the unknown is comforting, and to be preferred over horrible truths. I think a case could be made that Lovecraft had an opposite opinion on crowds, as well. In the Lovecraftian stories, the worst of all evils (from the deep! Cthulhu! Etc.!) show up in the remote bumpkin towns and islands upon the distant horizon, far away from the throngs of citizens in urban areas. Lovecraft was scared of everything except what he thought of as civilization: sociable white men of letters. Back full circle: Lovecraft’s attitude was informed by Anglo-American ethnocentrism.


Going by Surowiecki’s definition of “crowds”, then, Lovecraft must love certain kinds of crowds: the kinds that populate Miskatonic University. If we can see ethnocentrism occupying both sides of the case, then that indicates that we ought to blame the times these authors were writing in. Both authors were haunted by the darkest fantasies that their eras could provide, but I think they force us to go in different directions.

Adapted from an essay originally published on Butterflies and Wheels, with gratitude to Ophelia Benson.