"The Neuroscience of Moral Decision-Making"
Speaking in November 2006 at the Harvard Science Center on “The Neuroscience of Moral Decision-Making” (press release here), neurophilosopher Joshua Greene suggested that there’s no soul or immaterial mental agent that makes up your mind when you solve moral dilemmas (an mp3 of the talk is linked here). Instead, it’s a matter of how different sub-systems in your brain “duke it out”: one system keyed to immediate personal and emotional factors, the other to more abstract, quantitative, and impersonal factors. Greene and others have located these systems, very roughly, by conducting brain scans on people actually engaged in making moral decisions. Example: to save five people from being killed by a trolley, you could push a (large) person off a footbridge onto the tracks, killing him but stopping the trolley. Either one person dies or five do; it’s your choice (see Greene’s website for illustrations of this trolley scenario). But who’s the you doing the choosing in this difficult dilemma? Is it anything over and above the operations of your brain as its sub-systems duke it out?
Greene says the answer in the light of neuroscience is likely no: the soul is out of a job. But this conflicts with the dualistic intuition, common to both liberals and conservatives, that we are more than our brains. As Greene put it, in cases where there’s doubt about a criminal offender’s guilt, bleeding-heart liberals interested in mitigating his punishment might say “don’t blame him, blame his brain!” while skeptical conservatives might say “don’t blame his brain, blame him!” In both cases, the person is assumed to be more than his brain, something like a supervisory soul or mind that’s either guilty or not. (See here for another presentation by Greene that has the relevant slides.)
Greene suggested, following UPenn law professor Stephen Morse, that for the law, culpability is basically a matter of whether the offender is rational. If he’s rational, blame him; if not, blame his brain. But, Greene argued, given their dualistic intuitions, what ordinary folks really care about isn’t rationality; rather, rationality is just a presumed correlate of something deeper, namely your soul, the real you, the person (not merely the brain) who actually makes the decision.
Once we see, under pressure of neuroscientific explanations, that there likely is no immaterial soul involved in making decisions (either about moral dilemmas or about whether to rape, rob, or kill), Greene argued, this might influence our ideas about punishment. It might make us less interested in retribution – the idea that people deserve to suffer for their misdeeds, whether or not their suffering results in any good consequences. After all, the immaterial agent that many think is the real, deserving culprit doesn’t exist.
So according to Greene, without the soul the rationale for punishment ceases to be just deserts; instead it’s a matter of things such as deterrence and public safety. Generally, we should consider the consequences of punishment in terms of the costs and benefits to society, not simply punish for punishment's sake.
If indeed Greene is right that there’s a connection between supposing people deserve punishment and the notion of the soul, this implies that as a naturalistic, physicalist view of ourselves takes hold, we’ll start to question the whole edifice of retribution as it’s currently enshrined in our criminal justice system. But not all naturalists agree that such a result is either warranted or desirable (among them Stephen Morse and law professor Michael Moore); they think there are grounds for retribution having nothing to do with the soul. So an interesting argument is shaping up about whether, once we admit we’re nothing more than brains, we should be retributivists or consequentialists. Retributivists must say why it makes sense to impose suffering on offenders when that suffering needn’t serve any social purpose. Without a guilty soul to blame, it isn’t obvious why we should.
In this talk Greene also raises the intriguing question of the status of competing moral theories, Kant’s deontology vs. Mill’s consequentialist utilitarianism. Do they reflect alternative accounts of some external moral reality or are they just expressions of different brain systems “duking it out”? In solving moral dilemmas, is our appeal to a moral principle, whether deontological or consequentialist, anything more than a brain sub-system pleading its case? If not, does this lead us to ethical nihilism?
For a preview of Greene's answer, see his papers and dissertation (to be a book) at his website. For a somewhat similar, anti-retributivist take on naturalizing morality, see Tamler Sommers's "The Illusion of Freedom Evolves." And social psychologist Jonathan Haidt is doing fascinating work on what's called social intuitionism, yet another approach to understanding the natural foundations of our ethical judgments; see in particular the papers here and here.