Matthew Liao is to be commended for editing Moral Brains, a fine collection showcasing truly excellent chapters by, among others, James Woodward, Molly Crockett, and Jana Schaich Borg. In addition to Liao's detailed, fair-minded, and comprehensive introduction, the book has fourteen chapters. Of these, one is a reprint (Joshua Greene, ch. 4), one a re-articulation of previously published arguments (Walter Sinnott-Armstrong, ch. 14), and one a literature review (Oliveira-Souza, Zahn, and Moll, ch. 9). The rest are original contributions to the rapidly developing field of neuroethics.
This volume confirmed my standing suspicion that progress in neuroethics depends on three things: how we conceptualize and operationalize moral phenomena, how accurately and precisely we measure such phenomena, and which questions about these phenomena we ask in the first place. Many of the contributors point out that the neuroscience of morality has predominantly employed functional magnetic resonance imaging (fMRI) of voxel-level activation in participants making one-off deontic judgments about hypothetical cases constructed by the experimenters. This approach is liable to result in experimenter (and interpreter) myopia. Judgment is an important component of morality, but so too are perception, attention, creativity, decision-making, action, longitudinal dispositions (e.g., virtues, vices, values, and commitment to principles), reflection on and revision of judgments, and social argumentation. Someone like my father, who makes moral judgments when prodded to do so but never reconsiders them, argues sincerely about their adequacy, or acts on the basis of them, is a seriously deficient moral agent. Yet much of the current literature seems to presuppose that people like my father are normal members of the moral community. (He's not. He voted for Trump in Pennsylvania.) The contributions by Jesse Prinz (ch. 1), Jeanette Kennett and Philip Gerrans (ch. 3), Julia Driver (ch. 5), Stephen Darwall (ch. 6), Crockett (ch. 10), and Schaich Borg (ch. 11) are especially trenchant on this point. (In this context, I can't help but narcissistically recommend my recent monograph, Alfano (2016), as a framework for better structuring future research in terms of what I contend are the five key dimensions of moral psychology: agency, patiency, sociality, reflexivity, and temporality.)
Beyond fMRI-myopia, the extant neuroethical literature tends to neglect the reverse-inference problem. This problem arises because the mapping from brain regions to psychological processes is not one-to-one but many-to-many, which means that inferring from "region X showed activation" to "process P occurred" is invalid. As of this writing, the amygdala and insula were implicated in over ten percent of all neuroimaging studies indexed by www.neurosynth.org.[1] Inferring, as Greene often does, from the activation of one of these areas to a conclusion about emotion in general, or about a discrete emotion such as disgust, is hopeless.
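To make the worry concrete, here is a minimal Bayesian gloss (my own sketch, with made-up numbers, not an argument drawn from the volume). The confidence we are entitled to place in "process P occurred" upon observing "region X showed activation" is given by Bayes' rule:

\Pr(P \mid X) = \frac{\Pr(X \mid P)\,\Pr(P)}{\Pr(X \mid P)\,\Pr(P) + \Pr(X \mid \neg P)\,\Pr(\neg P)}

When a region activates during many processes besides P, as the amygdala and insula evidently do, \Pr(X \mid \neg P) is high, and the posterior barely budges from the prior. Suppose, purely for illustration, that \Pr(P) = 0.5, \Pr(X \mid P) = 0.8, and \Pr(X \mid \neg P) = 0.6; then observing the activation raises the probability of P only from 0.5 to about 0.57.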
On top of this, analyses that individuate regions as large as the amygdala are unlikely to be sufficiently fine-grained for neuroethicists' purposes. We need, therefore, to diversify the methods of neuroethics to include approaches with better spatial resolution (e.g., the single-cell resolution made possible by CUBIC; Susaki et al. 2014) and better temporal precision (e.g., electroencephalography), as well as methods that account for interactions among systems operating at different timescales and beyond the central nervous system (e.g., hormones and the vagus nerve).
However, many of the questions we would like to ask seem answerable only by shudderingly unethical research on humans or other primates, such as torturous and medically unnecessary surgery. To get around this problem, Schaich Borg (ch. 11) argues for the use of rodent models (including measures of oxytocin) in the study of violent dispositions towards conspecifics. In the same vein, Oliveira-Souza et al. (ch. 9) recommend treating lesions in the human population as natural experiments, and Crockett (ch. 10) advocates studies of, and experimental interventions on, neuromodulators such as serotonin (to which I would add, as a friendly amendment, the hormones testosterone and cortisol; cf. Denson et al. 2013).
Compounding these difficulties is the fact that brain science is expensive and time-consuming. With so many questions to ask and so little human and material capital to devote to them, we are constantly forced to prioritize some questions over others. In light of the crisis of replication and reproducibility that continues to rock psychology and neuroscience, I urge that we cast a skeptical eye on clickbait-generating experimental designs built on hypotheses with near-floor prior probabilities, such as Wheatley and Haidt's (2005) study of the alleged effects of hypnotically induced incidental disgust (which receives an absurd amount of attention in this volume and in contemporary moral psychology more broadly). Instead, we should pursue designs built to answer structured, specific questions given the constraints we face.
We need to stop asking ham-fisted questions like, "Which leads to better moral judgments: reason or emotion?" and, "Does neuroscience support act utilitarianism or a strawman of Kantian deontology?" As Prinz argues, "reasoning and emotion work together in the moral domain," so we should reject a model like Haidt's social intuitionism that "dichotomizes the debate between rationalist and sentimentalist" (p. 65). Reasoning can use emotions as inputs, deliver them as outputs, and integrate them into more complex mental states and dispositions. Contrary to what Greene (ch. 4) tells us, emotion is not an on-or-off "alarm bell." Indeed, Woodward patiently walks through the emerging evidence that the ventromedial prefrontal cortex (VMPFC), which Greene bluntly labels an "emotion" area, is the region in which diverse value inputs from various parts of the brain (including emotional inputs, but also many others) are transformed into a common currency and integrated into a cardinal (not merely categorical or even ordinal) value signal that guides judgment and decision-making.
On reflection, it should have been obvious that distinguishing categorically between reason (understood monolithically) and emotion (also understood monolithically) was a nonstarter. For one thing, "emotion" includes everything from rage and grief to boredom and nostalgia; it is far too broad a category to license generalizations at the psychological or neurological level (Lindquist et al. 2012). In addition, the brain bases of emotions such as fear and disgust often exhibit exquisitely fine-tuned responses to the evaluative properties they track (Mobbs et al. 2010). Even more to the point, in some cases we have no problem accepting emotions as reasons or, conversely, giving reasons for the emotions we embody. In the one direction, "She feels sad; something must have reminded her of her brother's death" is a reasonable inference. In the other direction, there are resentments that I've nursed for over a decade, and I'd be happy to give you all of my reasons for doing so if you buy me a few beers.
To illustrate what I have in mind by asking structured, specific questions, consider this one: "If we want to model moral judgment in consequentialist terms, at what level of analysis should valuation attach to consequences?" This question starts from well-understood distinctions within consequentialist theory and seeks a non-question-begging answer. Unlike Greene's question, which pits an arbitrarily selected version of consequentialism against an arbitrarily selected version of deontology, this one assumes a good deal of common ground, making it possible to get specific. Greene (ch. 4) asserts that act consequentialism employs the appropriate level of analysis, but Darwall (ch. 6) plausibly contends that the evidence better fits rule consequentialism. I venture to suggest that an even better fit is motive consequentialism (Adams 1976), because negative judgments about pushing the large man off the footbridge are almost certainly driven by intuitions like, "Anyone who could bring herself to shove someone in front of a runaway trolley at a moment's notice is a terrifying asshole."
So which questions should neuroethicists be asking? One question that they shouldn't be asking is, "What does current neuroscience tell us about morality?" In this verdict, I am in agreement with a plurality, perhaps even a majority, of the contributors to Moral Brains. Several of the chapters barely engage with neuroscience (Kennett and Gerrans, Driver, Darwall, Liao ch. 13). These chapters are well-written, significant contributions to philosophy, but it's unclear why they were included in a book with this title; one wonders why the volume wasn't called 'Morality and Psychology, with a Dash of Neuroscience'. The difficulty sharpens when we note that many of the chapters that do engage seriously with neuroscience end up concluding that the brain doesn't tell us anything we couldn't have learned in some other way from psychological or behavioral methods (Prinz, Woodward, Greene, Kahane). Perhaps we should be asking, "What do morality and moral psychology tell us about neuroscience?"
This reversal of explanatory direction presupposes that we have a reasonably coherent conception of what morality is or does. Sinnott-Armstrong argues in the closing chapter of the volume, however, that we lack such a conception because morality is fragmented at the level of content, brain basis, and function. I conclude this review by offering a rejoinder related to function in particular. My suggestion is that the function of morality is to organize communities (understood more or less broadly) in pursuing, promoting, preserving, and protecting what matters to them via cooperation. This conception of morality is, of necessity, vague and parameterized along multiple dimensions, but it is specific enough to gain significant empirical support from cross-cultural studies of folk axiology in both psychology (Alfano 2016, ch. 5) and anthropology (Curry et al. submitted). If this is on the right track, then the considerations that members of communities can and should offer each other (what High-Church meta-ethicists call 'moral reasons') are considerations that favor or disfavor the pursuit, promotion, preservation, or protection of shared values, as well as meta-reasons to modify the parameters or the ranking of values. What counts as a consideration, who counts as a member of the community, which values matter, and how they are weighed: these are questions to be answered, as Amartya Sen (1985) persuasively argued, by establishing informational constraints that point to all and only the variables that should be considered by an adequate moral theory. Indeed, some of the most sophisticated arguments in Moral Brains turn on such informational constraints (e.g., Greene pp. 170-2; Kahane pp. 294-5).
This book should interest philosophers working in the areas of neuroethics, moral psychology, normative ethics, research ethics, philosophy of psychology, philosophy of mind, and decision-making. It should also grab the attention of psychologists and neuroscientists working in ethics-adjacent and ethics-relevant areas. It might work as a textbook for an advanced undergraduate seminar on neuroethics, and it would certainly be appropriate for a graduate seminar on the topic. (And it has a very detailed index, a rarity these days!)
References:
Adams, R. M. (1976). Motive utilitarianism. The Journal of Philosophy, 73(14): 467-81.
Alfano, M. (2016). Moral Psychology: An Introduction. Cambridge: Polity.
Curry, O. S., Mullins, D. A., & Whitehouse, H. (submitted). Is it good to cooperate? Testing the theory of morality-as-cooperation in 60 societies. Current Anthropology.
Denson, T., Mehta, P., & Tan, D. (2013). Endogenous testosterone and cortisol jointly influence reactive aggression in women. Psychoneuroendocrinology, 38(3): 416-24.
Lindquist, K., Wager, T., Kober, H., Bliss-Moreau, E., & Feldman Barrett, L. (2012). The brain basis of emotion: A meta-analytic review. Behavioral and Brain Sciences, 35(3): 121-202.
Mobbs, D., Yu, R., Rowe, J., Eich, H., Feldman-Hall, O., & Dalgleish, T. (2010). Neural activity associated with monitoring the oscillating threat value of a tarantula. Proceedings of the National Academy of Sciences, 107(47): 20582-6.
Sen, A. (1985). Well-being, agency and freedom: The Dewey Lectures 1984. The Journal of Philosophy, 82(4): 169-221.
Susaki, E., Tainaka, K., Perrin, D., Kishino, G., Tawara, T., Watanabe, T., Yokoyama, C., Onoe, H., Eguchi, M., Yamaguchi, S., Abe, T., Kiyonari, H., Shimizu, Y., Miyawaki, A., Yokota, H., & Ueda, H. (2014). Whole-brain imaging with single-cell resolution using chemical cocktails and computational analysis. Cell, 157(3): 726-39.
Wheatley, T. & Haidt, J. (2005). Hypnotic disgust makes moral judgments more severe. Psychological Science, 16(10): 780-4.
[1] Accessed 3 December 2016.
Mark Alfano