Regard for Reason in the Moral Mind

(published by: Oxford University Press in 2018)

Below you’ll find summaries of my book and its chapters. If you’d like to see any draft chapters, get in touch.

Abstract: The burgeoning science of ethics has produced a trend toward pessimism. Ordinary moral thought and action, we’re told, are profoundly influenced by arbitrary factors and ultimately driven by unreasoned feelings. This book counters the current orthodoxy on its own terms by carefully engaging with the empirical literature. The resulting view, optimistic rationalism, shows the pervasive role played by reason, and ultimately defuses sweeping debunking arguments in ethics. The science does suggest that moral knowledge and virtue don’t come easily. However, despite the heavy influence of automatic and unconscious processes that have been shaped by evolutionary pressures, we needn’t reject ordinary moral psychology as fundamentally flawed or in need of serious repair. Reason can be corrupted in ethics just as in other domains, but a special pessimism about morality in particular is unwarranted. Moral judgment and motivation are fundamentally rational enterprises not beholden to the passions.


“…an innovative and important contribution to moral psychology, which ought to be read by everyone in the field.”

John Doris, Cornell University, Behavioral and Brain Sciences

“May offers his smart and thorough analysis in a way that is highly accessible, though not at all at the expense of rigor…. [He] proves himself to be not only a sharp philosopher but also a leading expert on empirical work on moral knowledge and moral motivation.”

Asia Ferrin, American University, Ethics

“…a systematic, impressively thorough, and convincing defence of the viability of moral rationalism. It excels in a detailed discussion of the experimental record, coupled with exceptionally clear discussions of the commitments of moral rationalism… the best defence of moral rationalism against empirical pessimism available.”

Michael Klenk, Delft University of Technology, Metapsychology

“May undertakes a careful, measured, and systematic re-examination of the evidence that some scientifically motivated sentimentalists and others take to show that ordinary moral thought is driven by, and depends upon, affect.”

Jeanette Kennett, Macquarie University, Australasian Journal of Philosophy

“…a tremendous and much-needed intervention in the field of moral psychology.”

Robin Zheng, Yale-NUS College, Behavioral and Brain Sciences

Table of Contents

1. Empirical Pessimism
Part I: Moral Judgment & Knowledge
2. The Limits of Emotion
3. Reasoning beyond Consequences
4. Defending Moral Judgment
5. The Difficulty of Moral Knowledge
Part II: Moral Motivation & Virtue
6. Beyond Self-Interest
7. The Motivational Power of Moral Beliefs
8. Freeing Reason from Desire
9. Defending Virtuous Motivation
10. Cautious Optimism

Chapter Abstracts

Scientifically informed theories of ordinary moral thought and action are on the rise but trend toward pessimism. Many theorists argue that ordinary moral judgment involves little reasoning, or too little to yield justified belief, while others argue that we rarely act for the right reasons. This chapter describes such sources of empirical pessimism (sentimentalism, debunking, egoism, Humeanism, and situationism). It then outlines the remaining chapters that defend the alternative, optimistic rationalism, which allows for more virtue by according reason a central role in moral psychology. While the science doesn’t suggest that moral knowledge and virtuous motivation come easily, there is no reason to reject ordinary moral psychology as fundamentally flawed. This chapter also discusses some preliminaries, such as the reason/emotion dichotomy, non-cognitivism, and how to draw on empirical research.

Empirical research apparently suggests that emotions play an integral role in moral judgment. The evidence for sentimentalism is diverse, but it is rather weak and has generally been overblown. There is no evidence that our moral concepts themselves are partly composed of or necessarily dependent on emotions. While the moral/conventional distinction may partly characterize the essence of moral judgment, moral norms needn’t be backed by affect in order to transcend convention. Priming people with incidental emotions like disgust doesn’t make them moralize actions. Finally, moral judgment can only be somewhat impaired by damage to areas of the brain that are generally associated with emotional processing. Psychopaths, for example, exhibit both emotional and rational deficits, and the latter alone can explain any minor defects in moral cognition.

Experimental research demonstrates that moral judgment involves both conscious and unconscious reasoning or inference that is not mere post-hoc rationalization. The evidence suggests in particular that we treat as morally significant more than the consequences of a person’s actions, including characteristically deontological distinctions between: intentional vs. accidental outcomes, actions vs. omissions, and harming as a means vs. a byproduct (familiar from the Doctrine of Double Effect). And the relevant empirical evidence relies on more than responses to unrealistic moral dilemmas characteristic of the trolley problem. The result is an extremely minimal dual process model of moral judgment on which we at least compute both an action’s outcomes and the actor’s role in bringing them about. This view resembles the famous linguistic analogy (or moral grammar hypothesis) in only its least controversial aspects, particularly the emphasis on unconscious reasoning in moral cognition.

Wide-ranging debunking arguments aim to support moral skepticism based on empirical evidence (particularly of evolutionary pressures, framing effects, automatic emotional heuristics, and incidental emotions). But such arguments are subject to a Debunker’s Dilemma: they can identify an influence on moral belief that is either substantial or defective, but not both. When one identifies a genuinely defective influence on a large class of moral beliefs (e.g., framing effects), this influence is insubstantial, failing to render the beliefs unjustified. When one identifies a main basis for belief (e.g., automatic heuristics), the influence is not roundly defective. There is ultimately a trade-off for sweeping debunking arguments in ethics: identifying a substantial influence on moral belief implicates a process that is not genuinely defective. We thus lack empirical reason to believe that moral judgment is fundamentally flawed. Our dual process minds can yield justified moral beliefs despite automatically valuing more than an action’s consequences.

While empirical debunking arguments fail to support wide-ranging moral skepticism, there are more modest threats to moral knowledge. First, debunking arguments are more successful if highly selective, targeting specific sets of moral beliefs that experimental research reveals to be distinguished for morally irrelevant reasons (thus flouting consistency reasoning). Second, the science of political disagreement suggests that many ordinary people can’t claim to know what they believe about controversial moral issues. Drawing on moral foundations theory, the best examples come from disagreements between liberals and conservatives within a culture. Controversial moral beliefs, at least, are disputed by what one should regard as epistemic peers, if only because others are just as likely to be wrong (even if not just as likely to be right), due to cognitive biases that affect proponents of all ideologies, such as motivated reasoning. Still, both of these empirical threats to moral knowledge are limited.

This chapter introduces the long-standing idea that inappropriate motives, such as self-interest, can militate against virtuous motivation (acting for the right reasons). Some theorists have tried to show that we are universally egoistic by appeal to empirical research, particularly evolutionary theory, moral development, and the neuroscience of learning. However, these efforts fail; instead, decades of experiments on helping behavior provide powerful evidence that we are capable of genuine altruism. We can be motivated ultimately by a concern for others for their own sake, especially when empathizing with them. The evidence does not show that empathy blurs the distinction between self and other in a way that makes helping behavior truly egoistic or non-altruistic. Whether grounded in Christian love (agape) or the Buddhist notion of no-self (anātman), such self-other merging proposals run into empirical and conceptual difficulties.

Even if we can rise above self-interest, we may just be slaves of our passions. But the motivational power of reason, via moral beliefs, has been understated, even in the difficult case of temptation. Experiments show that often when we succumb, it is due in part to a change in moral (or normative) judgment. We can see this by carefully examining a range of experiments on motivated reasoning, moral licensing, moral hypocrisy, and moral identity. Rationalization, perhaps paradoxically, reveals a deep regard for reason: a concern to act in ways we can justify to ourselves and to others. The result is that we are very often morally motivated or exhibit moral integrity. Even when we behave badly, actions that seem motivated by self-interest are often ultimately driven by a concern to do what’s right.

The previous chapter showed that our beliefs about which actions we ought to perform frequently have an effect on what we do. But Humean theories, holding that all motivation has its source in desire, insist on connecting such beliefs with an antecedent motive. However, reason needn’t be a slave to the passions. We can allow moral (or normative) beliefs a more independent role to generate intrinsic desires by developing an anti-Humeanism (distinct from internalism) that is empirically sound. Since an anti-Humean theory provides perfectly ordinary and intelligible explanations of actions, Humeans have a burden to justify a more restrictive account. However, they cannot discharge this burden on empirical grounds, whether by appealing to research on neurological disorders (acquired sociopathy, Parkinson’s, and Tourette’s), the psychological properties of desire, or the scientific virtue of parsimony.

This chapter considers remaining empirical challenges to the idea that we’re commonly motivated to do what’s right for the right reasons. Two key factors threaten to defeat claims to virtuous motivation: self-interest (egoism) and arbitrary situational factors (situationism). Both threats aim to identify defective influences on moral behavior that reveal us to be commonly motivated by the wrong reasons. However, there are limits to such wide-ranging skeptical arguments. Ultimately, like debunking arguments, defeater challenges succumb to a Defeater’s Dilemma: one can identify influences on many of our morally relevant behaviors that are either substantial or arbitrary, but not both. The science suggests a familiar trade-off in which substantial influences on many morally relevant actions are rarely defective. Arriving at this conclusion requires carefully scrutinizing a range of studies, including those on framing effects, dishonesty, implicit bias, mood effects, and moral hypocrisy (vs. integrity).

This chapter briefly draws out some main lessons from the previous chapters and discusses some of their implications for moral enhancement. We are capable of moral knowledge and virtue, in part because we do have a regard for reason that ultimately complicates the reason/emotion dichotomy. We do often fall short, but when we do the problem is not with moral psychology in particular but with the ways in which reason can be corrupted generally. One broad implication of cautious optimism is that the best method for increasing virtue won’t target our empathy or passions to the exclusion of our (often unconscious) reasoning. However, sound arguments aren’t enough, for human beings are fallible creatures with cognitive biases and limited attention spans. An intelligent populace is necessary, but so is moral technology, such as environments that nudge people to engage in good reasoning, not rationalization, particularly during moral learning and development.