What are the Normative Implications of Joshua Greene’s Reduction of Rationalist Deontology?
Two major classes of moral theories are consequentialism and deontology; though they posit distinct mechanisms for how humans make (and should make) moral judgments, we appear to make both characteristic consequentialist judgments and characteristic deontological judgments. For consequentialists, the only criterion for determining whether a given action is “good” or “bad” is the action’s consequences. The intention of the actor and the means of performing the action are relevant only insofar as they bear on the overall consequences. In contrast, deontologists regard the character of an action as more important than its consequences; specifically, an action is right or wrong as defined by some set of moral rules or duties. Joshua Greene, however, argues that humans do not make characteristic deontological judgments by following a moral ruleset; in reality, our “deontological” judgments are simply an evolved response to particular emotional stimuli. This finding, he suggests, calls into question deontology as a plausible class of normative moral theories. While I agree with Greene’s characterization of descriptive deontological theories, I contend that his conclusions do not preclude the possible existence of a non-descriptive, normative deontological theory of morality.
One characteristic consequentialist judgment that the majority of people make pertains to the classic trolley problem. In this scenario, a trolley is heading down a single track to which, up ahead, five people are tied. Before the segment of track holding the five, the track forks onto a side track to which only one person is tied. Research subjects are asked whether they would pull a lever to switch the trolley from the track with five people to the track with one; the majority report that they would pull the lever, saving a net of four lives. Paradoxically, however, people display characteristic deontological judgments in response to a similar scenario: the footbridge problem. A trolley is, again, heading down a single track to which five people are tied. Instead of a secondary track, there is a footbridge above the original track on which a large woman is standing. By pushing the woman off the bridge and onto the track, killing her in the process, it is possible to save the five on the track. The consequences of pulling the lever in the trolley case and of pushing the woman in the footbridge case are identical: one person dies in order to save five lives. Yet people’s judgments differ: they are willing to pull the lever, but the majority will not push the woman. As Greene notes, many philosophers have attempted to provide normative, deontological explanations for these results such that people’s judgments in the trolley and footbridge cases are justified. For example, it could be that our moral ruleset prohibits killing a person as a means to help someone else. However, Greene asserts that this and other attempted normative explanations have consistently been met with counterexamples, and that no explanation has been wholly satisfactory.
Greene proposes that humans’ tendency to display both characteristic consequentialist and deontological judgments tracks an analogous psychological dual-process model of generating moral judgments. Specifically, he hypothesizes that characteristic deontological judgments arise from rapid, affective intuitions, while consequentialist judgments involve more deliberative, emotionless (or “cognitive”) reasoning. In explaining people’s responses to the footbridge case, for example, Greene proposes that people make a “deontological” judgment because pushing the person off the bridge provokes a strong emotional response while pulling the lever does not. Consequently, any normative explanation invoking rules or duties that people offer for the “deontological” judgment will simply be a post-hoc justification of the affective impulse. The judgment, in this sense, is not deontological per se because it is generated by instinct rather than by adherence to some abstract moral truth.
For us to believe this hypothesis, Greene must provide objective evidence that (1) people recruit their emotional faculties more when making “deontological” judgments than when making consequentialist judgments, and (2) humans come equipped with, or develop, the specific affective responses that track characteristic “deontological” judgments.
To the first point, Greene uses neuroimaging and reaction-time data to suggest a dissociation between the emotional and “cognitive” mechanisms for generating judgments, such that “deontological” judgments recruit the emotional system while consequentialist judgments recruit the “cognitive” system. For example, when people make close-proximity, emotionally salient moral decisions, such as deciding whether or not to push someone in the footbridge case, neuroimaging data show increased activity in several brain structures thought to be involved in affective processing, including the posterior cingulate cortex (PCC), medial prefrontal cortex (MPFC), and amygdala. In contrast, when people contemplate more impersonal, less emotionally salient moral dilemmas, such as the classic trolley case, imaging data show increased activation in characteristically “cognitive” neural structures such as the dorsolateral prefrontal cortex (DLPFC) and inferior parietal lobe. To further dissociate these two processes, Greene hypothesizes that if they were truly separate, subjects who judge emotionally salient moral violations to be acceptable would take longer to respond than subjects who judge them to be wrong. For instance, in the footbridge case, a subject who decides he would push the woman off the bridge would be “cognitively” overriding his negative emotional response to pushing her, and this overriding process should take time. Therefore, if the dual-process model were correct, subjects who would push the woman would take longer to respond than subjects who judge the action to be wrong, since the latter simply follow their intuitive emotional responses. The reaction-time data, as it turns out, corroborate this hypothesis: positive responses to emotionally salient moral dilemmas take significantly longer to produce than negative ones. Taken together, the neuroimaging and reaction-time data lend credence to a dual-process model in which emotionally salient dilemmas employ a different mechanism of generating judgments than emotionally neutral scenarios.
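To make the logic of this reaction-time prediction concrete, here is a minimal simulation sketch of the dual-process account. The proportions and timing parameters are my own illustrative assumptions, not values from Greene’s studies; the point is only that stacking a deliberative override stage on top of a fast affective stage mechanically produces slower “okay” responses.

```python
import random

# Toy simulation of the dual-process reaction-time prediction. The
# proportions and timing parameters below are illustrative assumptions,
# not values from Greene's data.

def respond_to_personal_dilemma():
    """Return (judgment, reaction_time) for one simulated subject."""
    affective_stage = random.gauss(1.0, 0.2)   # fast emotional reaction (seconds)
    if random.random() < 0.7:                  # most subjects follow the intuition
        return "wrong", affective_stage
    override_stage = random.gauss(2.0, 0.5)    # slower deliberative override
    return "okay", affective_stage + override_stage

trials = [respond_to_personal_dilemma() for _ in range(10_000)]
for label in ("wrong", "okay"):
    times = [t for judgment, t in trials if judgment == label]
    print(f"mean RT for '{label}' judgments: {sum(times) / len(times):.2f}s")
```

By construction, the simulated “okay” judgments come out markedly slower than the “wrong” ones, which is exactly the qualitative signature the dual-process model predicts in the reaction-time data.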
In arguing that characteristic deontological judgments are, in essence, merely post-hoc justifications of intuitive, affective responses, Greene must also show that our set of emotional responses tracks what we normally think of as deontological rules. One classic case of characteristically deontological behavior is the pattern people exhibit when giving money to charitable causes. From a utilitarian point of view, people have a moral obligation to help those less fortunate than themselves whenever doing so does not incur a significant cost; yet wealthy people routinely spend money on luxuries that they could, in theory, have donated to a charitable cause. Thus, the majority of people do not act like pure consequentialists. In practice, it appears that people decide whom to help based on their relative proximity to the person or cause in question. For example, the vast majority of people, if walking by a pond, would not hesitate to save a child drowning in it. However, many of these same people choose every day not to donate to charitable causes that save the lives of starving children. The deontologist will be hard-pressed to ascribe this behavior to some set of moral rules that encourages helping some people but not others. On the other hand, this scenario fits the emotional-salience model neatly: seeing a child drowning in front of us causes an affective response that drives us to help the child, whereas the targets of our charitable donations are often nameless and faceless, so their suffering provokes less of an emotional response and we are less likely to donate. In both the trolley/footbridge and pond/charity pairings, the scenarios that drive emotional responses, and thus provoke “deontological” judgments, are those in which the subject is in close proximity to the person being harmed. In other words, moral dilemmas that are “up close and personal” (16) tend to recruit the affective system of judgment.
The finding that degree of proximity affects which system of moral judgment we use calls for explanation, and Greene proposes an evolutionary account. In humans’ original adaptive environment, our intuitive responses developed to operate exclusively on “up close and personal” situations. Our primate and hunter-gatherer ancestors never had the opportunity to donate money to a charity helping a child on the other side of the world, so it follows that these kinds of abstract scenarios do not trigger the emotional response of an analogous “up close and personal” dilemma. In this sense, our intuitive responses can be thought of as an adaptive behavioral heuristic.
Proximity is not the only variable that can be manipulated to track how and when we make intuitive judgments. Another arena to which Greene’s line of thinking applies is the theory of punishment. A consequentialist would only favor punishments that maximize good outcomes in the future; for the consequentialist, punitive action is justified only if it will lead to, for example, a deterrent effect or the containment of a harmful criminal. For deontologists, on the other hand, punishment can be justified in and of itself; it need not produce any particular outcomes. The deontological conception of punishment involves giving criminals “what they deserve” in a retributive sense, bringing to mind the saying that “the punishment should fit the crime.” Greene cites a handful of studies suggesting that while people claim to take into account both consequentialist and deontological reasons for punishing, in practice they seem largely to ignore the consequentialist reasons. In one scenario, subjects decided whether to fine a company that had committed a crime, given that fining the company would lead to harmful consequences overall; the majority of subjects decided that it was still right to impose the fine. In a range of other studies, subjects likewise appeared not to take into account the consequences of punitive action, such as deterrent effects, and instead exhibited retributivist behavior. Since people evidently do not punish in a typical consequentialist fashion, this raises a question: is there some variable that, like the previously observed “up close and personal” effect, tracks how people make punitive judgments? Greene cites two studies that converge on “moral outrage” as an accurate predictor of punitive judgment; if a subject is outraged by a particular transgression, that is, has a strong negative emotional reaction to it, the subject will punish the perpetrator more severely. This is useful in that it suggests our punitive judgments are largely the result of Haidtian affective intuitions. But simply identifying “moral outrage” does not tell the full story: what determines what people find outrageous? Is there an evolutionary explanation, as there was for the trolley/footbridge phenomenon?
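Before turning to those questions, the contrast between the two modes of punishment can be made concrete with a short, hedged sketch. The decision rules and the numbers in the corporate-fine example below are hypothetical illustrations of mine, not a model drawn from the cited studies.

```python
# Two hypothetical punishment rules illustrating the contrast above.
# The functions and numbers are toy assumptions, not the studies' models.

def consequentialist_fine(deterrent_benefit: float, harm_of_fining: float) -> bool:
    """Punish only if the expected future benefits outweigh the costs."""
    return deterrent_benefit > harm_of_fining

def retributivist_fine(moral_outrage: float, threshold: float = 5.0) -> bool:
    """Punish whenever the affective reaction is strong enough; the
    consequences of punishing never enter the decision."""
    return moral_outrage > threshold

# Corporate-fine scenario: fining produces net harm overall, so the
# consequentialist declines, while the outraged subject fines anyway.
print(consequentialist_fine(deterrent_benefit=2.0, harm_of_fining=5.0))  # False
print(retributivist_fine(moral_outrage=8.0))                             # True
```

The key structural point is that the consequences of punishing appear nowhere in the retributivist rule, which is precisely the pattern the studies report in subjects’ actual behavior.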
Greene conjectures that our moral outrage derives from an adaptive heuristic to punish non-cooperators. He describes a study in which two players play a simple game to determine how to split a certain sum of money, say $10. The first player proposes an offer to the second player for how to split the $10, and the second player can either accept or reject it; if the offer is rejected, neither player receives anything. Empirically, second players typically agree to fair or near-fair splits like $5/$5 or $6/$4. However, second players on the whole reject “unfair” offers like $8/$2, a puzzling result because they willingly forfeit free money simply to punish greedy first players. During the game, brain regions involved in emotion were found to be more active in second players after receiving unfair offers than after fair ones. So again, it appears that people make non-consequentialist, “deontological” judgments based on emotional responses. In addition, Greene argues, this affective response too has an evolutionary basis. In our original adaptive environments, it was important to quickly punish non-cooperators to ensure the success of the group; specifically, “the emotions that drive us to punish wrong-doers evolved as an efficient mechanism for stabilizing cooperation” (45).
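The game Greene describes is the classic ultimatum game, and its structure is simple enough to sketch directly. In the sketch below, the fairness threshold is an assumed stand-in for the second player’s emotional reaction to unfairness; a purely consequentialist responder would accept any nonzero offer.

```python
# A minimal sketch of the ultimatum game. The fairness threshold is an
# assumed stand-in for the responder's emotional reaction to unfair offers.

def responder_accepts(offer: float, fairness_threshold: float = 3.0) -> bool:
    """A purely consequentialist responder would accept any offer > 0;
    this responder rejects offers that fall below a fairness threshold."""
    return offer >= fairness_threshold

def play_round(total: float, offer: float) -> tuple[float, float]:
    """Return (proposer_payoff, responder_payoff) for one round."""
    if responder_accepts(offer):
        return total - offer, offer
    return 0.0, 0.0  # rejection: both players forfeit everything

print(play_round(10.0, 5.0))  # fair split accepted -> (5.0, 5.0)
print(play_round(10.0, 2.0))  # unfair offer rejected -> (0.0, 0.0)
```

The design point worth noting is that rejection is strictly costly for the responder, which is exactly why the observed rejections look non-consequentialist: the responder pays to punish.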
Greene’s work of reduction is powerful and convincing. As he points out, when considering whether our characteristic deontological judgments arise from adherence to an unknown, complex set of moral rules or from a set of adaptively formed intuitions, the parsimonious choice is clearly the latter. Given this finding, it is important to consider its descriptive and normative implications.
Consequentialism, as a descriptive theory, does not seem plausible: considering the evidence from the footbridge problem and from people’s charitable and punitive behavior, it is clear that people do not act like pure consequentialists. Similarly, Greene makes a persuasive case that there is no particular moral ruleset we follow, but rather that in many situations we act in accordance with our intuitions. Thus, rationalist deontology cannot function as a fitting descriptive theory either. In all likelihood, an apt descriptive moral theory will take into account the interplay between the emotional and “cognitive” systems we use to make judgments.
The aim of Greene’s paper is not specifically to endorse consequentialism as a normative moral theory, but much of his evidence seems, at the least, to cast the consequentialist judgment as the more favorable of the two options. Whereas deontological judgments are often “irrational” impulses, consequentialism relies on deliberative “cognition,” one of the defining features of humankind that separates us from other animals. Consequentialism involves notions of fairness and justice that deontological judgments seem to lack. A common misconception of consequentialism, Greene mentions, is that it does not respect the rights of individuals as deontology does. This, however, is mistaken; on the contrary, consequentialists treat the value of each person equally. Despite our discomfort, for example, in pushing the woman in the footbridge case, it may be that doing so is necessary for the “greater good” if we are to treat each person equally and with respect. Adopting consequentialism does, though, involve some difficulties. For example, how are we to reconcile our charitable decisions? A consequentialist would be obligated to donate most of their disposable income to people less fortunate than themselves, a behavior that, while possible, would be uncomfortable for most people. Discomfort, though, is not by itself a decisive objection, and should perhaps be an expected consequence of a good normative theory. Another difficulty is that adopting the consequentialist worldview entails losing a sense of the “personal”: a consequentialist mother could not care “more” for her child than for a stranger, for example. As Greene notes, this line of thinking creates a slippery slope. Theoretically, a consequentialist should even spend his time in a particular way; if listening to music or getting drinks with friends serves no functional purpose, shouldn’t he instead be obligated to spend all his free time volunteering? In this sense, there are arguments that “strong” consequentialism could be impractical as a normative moral theory. Greene makes a strong case that in many instances we would be better served by checking our emotional intuitions with rational deliberation, and I agree with him on that point. I am not ready, at least on the basis of Greene’s arguments, to draw any conclusions about the normative power of consequentialism.
Greene’s principal point, however, is that his empirical findings on the nature of our “deontological” judgments serve to discredit deontology as a feasible normative moral theory. It is clear that rationalist deontology is not an apt class of descriptive theories because, in reality, we act in accordance with our adaptive intuitions and not a moral ruleset. But what bearing does this have on its normative plausibility? Because our intuitions are merely “blunt biological instruments” with a “simple and efficient design,” they do not always lead to positive or rational outcomes (46). This is clear in the footbridge case and in our punitive judgments. However, to claim that our intuitions lack normative power and that therefore every deontological theory lacks normative power is to falsely equate the two. Greene argues that deontology is, at bottom, simply a rationalization of our intuitions; while hard to define, it is in all respects about “giv[ing] voice to powerful moral emotions” (50). I agree with Greene that, in both a Kantian and a descriptive sense, deontology is a rationalization of our moral emotions and is not apt as a normative theory. Indeed, any theory that merely tracks our moral intuitions will not have normative power. However, I do not grant that a deontological moral theory necessarily needs to track these adaptive intuitions. Even though theories like Kant’s attempt to justify our intuitions, other theories need not do so. In other words, there could exist some deontological theory invoking specific rights and duties that, while it does not function as a valid descriptive theory, could serve as a normative theory. Greene has done a sufficient job of discrediting classical deontological theories that attempt to justify our observed behavior; but I contend that he has far from proven the impossibility of some deontological theory having normative power.