Journal rankings and prestige bias

If you want to know about the prestige of a journal or school, there is no substitute for subjective rankings. If prestige is something you value, then the (perhaps limited) importance of these evaluations should be clear enough. For one thing, if all other metrics of philosophical productivity are unavailable, then prestige will matter quite a lot. For another, the pursuit of alternative measures can be emotionally exhausting.

As it happens, I do not consider prestige to be a particularly effective sales pitch when selling the value of philosophy. It seems relatively clear to me that evaluating philosophy in terms of prestige effectively concedes that it is a boutique discipline; as self-images go, it reeks of undignified desperation. And prestige rankings are not a great reason to keep doing philosophy, so long as you think philosophy is a productive activity.

Instead of prestige, people might look at citation rates, or ‘impact’. Presumably, those who attend to impact factors accept the idea, embedded in the notion of peer review, that the attention of experts in a discipline is some kind of indication that its content is productive.

The impact of a journal can be measured in at least three ways: average citation, average weighted by network centrality, or h-index. Average citation is, importantly, indifferent to the volume of output; so a journal that publishes little but is cited often per article might have the same average as one that publishes a great deal with far more variable uptake. Average weighted by network centrality means (very roughly) that if two journals have the same citation average, but one gets cited by a wider variety of different journals, then that journal will be ranked higher — it is more central to the network. The h-index is unintuitive enough that it resists being expressed in a parenthetical, but roughly: it is the largest number h such that h of a journal’s articles have each been cited at least h times — a journal’s ‘highest floor’. Which metric do you choose? It depends, really, on what it is that you value about impact: what it is about impact that makes it interesting, philosophically.
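For readers who prefer a mechanical gloss, the ‘highest floor’ idea can be sketched in a few lines (the citation counts here are hypothetical, just for illustration):

```python
def h_index(citations):
    """Largest h such that at least h articles have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # the top `rank` articles all clear the bar of `rank` citations
        else:
            break
    return h

# Five articles cited 10, 8, 5, 4, and 3 times: the top four articles
# each have at least 4 citations, but not all top five have at least 5.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

Note what this rewards: a journal cannot raise its h-index with one blockbuster article, nor with a long tail of barely-cited ones — only with a deep bench of well-cited work.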

That said, the gulf between impact and productivity is wide. Much depends on your choice of scales, which depends on your values. Some might think that the quality of a journal depends on its willingness to take risks on very good content, while others might prefer a relatively conservative approach that publishes only content in which the journal has complete confidence. And some might want to produce work that is relevant to non-philosophers; others might want to keep philosophy pure.* These choices make an enormous difference to how we come up with rankings, and not all systems of rank are a good fit for measures of prestige. And if you don’t believe me, try looking at the h-indices for philosophy journals, and see how they relate to subjective rankings.

*[These values strike me as being about as philosophically significant as musical tastes. The high- vs. low-risk contrast is like preferring “alternative rock” over “classic rock”; the endogenous vs. exogenous uptake contrast is like “genre music” vs. “pop music”. And of course even the choice to pay attention to impact factors betrays an aesthetic disposition for “radio-friendly” music as opposed to the punk or indie view, but I’ve always been a pop sort of guy.]
