The advice model of moral truth and meaning

I recently wrote the first (and, given the lack of interest, the only) instalment of a children’s story. The tale is meant to illustrate some basic ideas in meta-ethical theory in a fun and accessible way. However, on the face of it, I only make allusions to meta-ethics, and don’t really get explicit about what model of meta-ethical theory I am advocating.

But if you’re not satisfied by mere allusions, you can always hover over the images in the story. In this way, you get a few more details on what theories are being illustrated. Here (with some editing for clarity) is what it says, along with some references to who I’m drawing on. I’m advocating what you might think of as an ‘advice model’ of moral semantics and truth.

1. Cognitivism, not emotivism. The meaning of a moral sentence, like “Stealing is wrong”, is not as obvious as it looks at first glance. Let’s assume that some moral sentences are true, and hence that ‘error theory’ is wrong. What is it that makes them so?

2. Existentialism, not realism. There are no spooky moral properties in the world. Hence, moral sentences do not directly refer.
3. Deference, not reference. It is plausible to believe that moral sentences are true or false depending on whether they are spoken with the right authority — that is, depending on whether or not they felicitously defer to a moral authority. [The irreducible sense of authority attached to moral claims is something I learned from H. Sidgwick’s The Methods of Ethics, though come to think of it, it probably owes more to Aristotle’s Nicomachean Ethics.]
4. Epistemically objective, not subjective. However, it is not always obvious who that authority might be. In cases of uncertainty, we might be tempted to say that a moral sentence is true just in case it is uttered by an authority who is giving good advice. The authority of the speaker is determined, in part, by whether or not we have a justified sense that the authority making the claim knows that the advice will lead to good consequences.
5. The problem of egoism. But even if moral sentences were about giving good advice to achieve the best outcomes, it isn’t obvious what outcomes count as good ones. For example, a thief can always claim that the maxim, “Stealing is wrong”, does not lead to good consequences for him.  [The argument of the moral knave, of course, belongs to D. Hume.]
6. Grounded in orientation, not psychology. It is obvious that the victims of stealing are the ones who are suffering the consequences. The question is, how can you convince the thief that the suffering of his victims should be a reason not to steal? The answer, I think, is just that moral advice is addressed to a certain kind of audience: namely, people who have a pro-social orientation towards others. Anyone who lacks a pro-social orientation will not have the ability to understand what is said in a moral claim. And at this point, the thief faces a dilemma. If he thinks the moral claim is true, then you might say one of two things to him. a) On the one hand, you might say that the thief implicitly recognizes that the moral claim entails that there is a reason for action. b) On the other hand, if he grants that the moral claim is correct, but denies that this entails he has a reason for action, then you might use that as grounds to say that he is unable to understand the point of the moral claim after all. [The use of ‘orientation’ as a technical term is borrowed from R. Geuss, though I don’t know that ‘orientation’ has ever been used as a sui generis category that blocks a reduction to beliefs and desires.]
7. Generated by psychology: desire. Admittedly, the connection between morality and the motivations of pro-social people is still pretty obscure. Even if everybody agrees that genuine moral claims provide a reason for action for pro-social people, that says almost nothing about what it takes for moral claims to be effective in bringing about an actual intention to act. If moral claims nominally provide a reason for action, but rarely if ever compel action among pro-social people, then we might have reason to question whether they provide a reason after all. We need, in other words, to acknowledge the role of sentiments.
8. Generated by psychology: reasons. The role of the sentiments should not be overstated. For while all must agree that reasons aren’t sufficient to bring about intentions to act in pro-social people, sentiments aren’t sufficient to explain the distinctively sane and practical quality of moral claims. If the only thing behind moral claims were expressions of ‘boo’ and ‘hooray’, then you couldn’t make arguments which appeal to evidence, or have rational conversations about what ought to be done. But that is clearly false: not all moral blame is piacular. [The non-cognitive position of ‘emotivism’ probably owes the most to the formulation given by A.J. Ayer.]
9. So neither reasons nor passions are individually sufficient to account for the distinctiveness of moral claims, and their efficacy in producing intentions to act. However, they may be jointly sufficient. (Some may argue that reason or passion necessarily precedes the other; but it is more likely that they are mutually supporting. If desires produce an intention to act, we call it eudaimonia; and if the desire comes after the action, then we call it eleutheronomia.) [The division between those forms of moral cognition owes to I. Kant, though I have modified the categories for my own purposes.]
10. Prerequisite of authority. What makes moral claims true or false, we remember, is the degree to which we think they are trustworthy, conferred by the right kind of authority. And when people base their advice on little more than intuition or feeling, untempered by deliberation, we all have a basic sense that this advice is not to be trusted if there are alternatives. The reason it cannot be trusted is that the authority has no integrity; and if they lack integrity, then, all other things equal, you don’t have a reason for believing what they say. [The importance of integrity was a trenchant theme in the work of B. Williams.]
11. Requirements of integrity. Integrity implies two things. First, it implies that the advice given or accepted has been, in some sense, voluntarily adopted — that it is not enacted by rote. Second, integrity implies sincerity; and sincerity implies non-arbitrariness in one’s convictions. By improving the coherence of your beliefs, you become more distinctive as a person, and any moral claims that you assert begin to take on a veneer of plausibility.  [The idea of wholeheartedness owes to H. Frankfurt.]
12. One of the potential downsides of looking at moral claims as advice is that it raises a difficult question: “Who can you trust, and when?” We can gesture towards a few characteristics, like “reasonable” or “social”, but it seems to be a fact of the matter that people tend to trust their familiars and close associates more than they trust strangers. And if that is true, then it is very hard to see how morality could be the sort of thing that applies between strangers. This is a genuine dilemma, since it cannot be taken as an item of faith that the human race shall continue along the track toward a heightened sense of maturity and enlarged sympathies. [J.S. Mill, in Utilitarianism, articulated a kind of optimism about the moral capacity that I’m skeptical of in this passage.]

Carpe Diem & the Longer Now [tpm]

[Originally posted at Talking Philosophy Magazine blog]

So here’s the thing: I like utilitarianism. No matter what I do, no matter what I read, I always find that I am stuck in a utility-shaped box. (Here’s one reason: it is hard for me to applaud moral convictions if they treat rights as inviolable even when the future of the right itself is at stake.) But trapped in this box as I am, sometimes I put my ear to the wall and hear what people outside the box are doing. And the voices outside tell me that utilitarianism is alienating and overly demanding.

I’m going to argue that act-utilitarianism is only guilty of these things if fatalism is incorrect. If fatalism is right, then integrity is nothing more than the capacity to make sense of a world when we are possessed with limited information about the consequences of actions. If I am right, then integrity does not have any other role in moral deliberation.


Supposedly, one of the selling points of act-utilitarianism is that it requires us to treat people impartially, by forcing us to examine a situation from some third-person standpoint and apply the principle of utility in a disinterested way. But if it were possible to do a definitive ‘moral calculus’, then we would be left with no legitimate moral choices to make. Independent judgment would be supplanted with each click of the moral abacus. It is almost as if one would need to be a Machiavellian psychopath in order to remain so impartial.

One consequence of being robbed of legitimate moral alternatives is that you might be forced to do a lot of stuff you don’t want to do. For instance, it looks as though detachment from our integrity could force us into the squalor of excessive altruism, where we must give away anything and everything we own and hold dear. Our mission would be to maximize utility by doing good works in a way that keeps our own happiness just above some subsistence minimum, while improving the lot of people who are far away. Selfless asceticism would be the order of the day.

In short, it seems like act-utilitarianism is a sanctimonious schoolteacher that not only overrides our capacity for independent moral judgment, but also obliges us to sacrifice almost all of our more immediate interests for interests that are more remote: those of the people of the future, and of the people geographically distant.

Friedrich Nietzsche, Samuel Scheffler, Bernard Williams: these are some passionate critics who have argued against utility in the ways stated above. And hey, they’re not wrong. The desire to protect oneself, one’s friends, and one’s family from harm cannot simply be laughed away. Nietzsche can always be called upon to provide a mendacious quote: “You utilitarians, you, too, love everything useful only as a vehicle for your inclinations; you, too, really find the noise of its wheels insufferable?”

Well, it’s pretty clear that at least one kind of act-utilitarianism has noisy wheels. One might argue that nearer goods must be considered to have equal value to farther goods; today is just as important as tomorrow. When stated as a piece of practical wisdom, this makes sense; grown-ups need to have what Bentham called a ‘firmness of mind’, meaning that they should be able to delay gratification in order to get the most happiness out of life. But a naive utilitarian might take this innocent piece of wisdom and twist it into a pure principle of moral conduct, and hence produce a major practical disaster.

Consider the sheer number of things you need to do in order to make far-away people happy. You need to clamp down on all possible unintended consequences of your actions, and spend the bulk of your time on altruistic projects. Now, consider the limited number of things you can do to make the small number of people who are closest to you happy. You can do your damnedest to seize the day, but presumably, you can only do so much to make your friends and loved ones happy without making yourself miserable in the process. So, all things considered, it would seem as though the naive utilitarian has to insist that we all turn into slaves to worlds that are other than our own — the table is tilted in favor of the greatest number. We would have to demand that we give up on today for the sake of the longer now.

But that’s not to say that the utilitarian has been reduced to such absurdities. Kurt Baier and Henry Sidgwick are two philosophers who have explicitly defended a form of short-term egoism, on the grounds that individuals are better judges of their own desires. Maybe utilitarianism isn’t such an abusive schoolteacher after all.

Why does act-utilitarianism seem so onerous? Well, if you’ve ever taken an introductory ethics class, you will have heard some variation on the same story. First you’re presented with a variety of exotic and implausible scenarios, involving threats to the wellbeing of conspecifics who are caught in a deadly Rube Goldberg machine (involving trolleys, organ harvesting, fat potholers, ill-fated hobos, etc.). When the issue is act-utilitarianism, the choice will always come down to two options: either you kill one person, or a greater number of others will die. In the thought-experiment, you are possessed with the power to avert disaster, and are by hypothesis acquainted with perfect knowledge of the outcomes of your choices. You’ll then be asked about your intuitions about what counts as the right thing to do. Despite all the delicious variety of these philosophical horror stories, there is always one constant: they tell you that you are absolutely sure that certain consequences will follow if you perform this-or-that action. So, e.g., you know for sure that the trolley will kill the one and save the five, you know for sure that the forced transplant of the hobo’s organs will save the souls in the waiting room (and that the police will never charge you with murder), and so on.

This all sounds pretty ghoulish. And here’s the upshot: it is not intuitively obvious that the right answer in each case is to kill the one to save the five. It seems as though there is a genuine moral choice to be made.

Yet when confronted with such thought-experiments, squadrons of undergraduates have moaned: ‘Life is not like this. Choices are not so clear. We do not know the consequences.’ Sophomores are in a privileged position to see what has gone wrong with academic moralizing, since they are able to view the state of play with fresh eyes. For it is a morally important fact about the human condition that we don’t know much about the future. By imagining ourselves in a perfect state of information, we alienate ourselves from our own moral condition.

Once you see the essential disconnect between yourself and your hypothetical actor in the thought-experiment, the blinders ought to fall from your eyes. It is true that I may dislike pulling the switch to change the trolley’s track, but my moral feelings should not necessarily bear on the question of what my more perfect alternate would need to do. Our so-called ‘moral intuitions’ only make a difference to the actual morality of the matter on the assumption that our judgments can reliably track the intuitions of our theoretical alternates — alternates who know the things they know, right on down to the bone. But then, this assumption is a thing that needs to be argued for, not assumed.

While we know a lot about what human beings need, our most specific knowledge about what people want is limited to our friends and intimates. That knowledge makes the moral path all the more clear. When dealing with strangers, the range of moral options is much wider than the range of options at home; after all, people are diverse in temperament and knowledge, scowl and shoe size. Moral principles arise out of uncertainty about the best means of achieving the act-utilitarian goal. Strike out uncertainty about the situation, and the range of your moral choices whittles down to a nub.

So if we had perfect information, then there is no doubt that integrity should go by the boards. But then, that’s not the fault of act-utilitarianism. After all, if we knew everything about the past and the future, then any sense of conscious volition would be impossible. This is just what fatalism tells us: free will and the angst of moral choice are byproducts of limited information, and without a sense of volition the very idea of integrity could not even arise.

Perhaps all this fatalism sounds depressing. But here’s the thing — our limited information has bizarrely romantic implications for us, understood as the creatures we are. For if we truly are modest in our ability to know and process information, and the rest of the above holds, then it is absurd to say, as Nietzsche does, that “whatever is done from love always occurs beyond good and evil”. It is hard to conceive of a statement that could be more false. For whatever is done from love, from trust and familiarity, is the clearest expression of both good and evil.


Look. Trolley-style thought-experiments do not show us that act-utilitarianism is demanding. Rather, they show us that increased knowledge entails increased responsibility. Since we are the limited sorts of creatures that we are, we need integrity, personal judgment, and moral rules to help us navigate the wide world of moral choice. If the consequentialist is supposed to be worried about anything, the argument against them ought to be that we need the above-stated things for reasons other than as a salve to heal the gaps in what we know.

Morality — whether you want it or not

Originally published at Talking Philosophy Magazine blog.

Abstract. There are some good reasons for us to use the concept of “moral realism”, in the following sense: moral realism asks us to think of morality as independent of the will. It entails moral optimism — that, all other things equal, the interests of the right will triumph. Moreover, it suggests that some interests are objective because we didn’t choose them. If moral claims are “real”, it’s because they have a force whether we want them or not. And if moral regularities are “real”, it is because they derive from instincts (sympathy and resentment) that are independent of the will. Perhaps instinctive sympathy and resentment are more important than the other parts of our psychology. If so, then moral realism is defensible because moral norms hold (for morally competent observers) whether we want them or not.
