Interview: How we tell right from wrong

  • 07 March 2007
  • Exclusive from New Scientist Print Edition
  • Ivan Semeniuk
Marc Hauser (Image: John Soares)

Audio: We devote a full episode of our weekly podcast, SciPod, to exploring Marc Hauser's work on the "moral organ" and what it means for our notions of justice and fair play.

In a hospital emergency room, five critically ill patients desperately need organ transplants. A healthy man walks in. Should the doctors remove his organs to save the sick five? Most people will respond in milliseconds with a resounding "No way". Now imagine an out-of-control train about to run down five workers standing on the track. There's a fork ahead, and throwing a switch could divert the train to another line on which there is only one worker. It's the same question - should we sacrifice the one to spare the other five? - yet most of us would say "yes" just as quickly. How do we make these lightning moral judgements? In his latest book, Marc Hauser argues that this ability is evidence that we are born with an innate moral faculty. He sat down to talk good and evil with Ivan Semeniuk

What is your earliest memory of right and wrong?

My recollection is stronger about my children than for me. I have a very strong memory of my eldest daughter when she was about 7 years old. We were on the beach, and she was playing nicely by herself in the sand when an older boy came along and gratuitously, it looked to me, kicked sand in her face and made her cry. I grabbed him by the arms, but when I realised I was squeezing very hard I just stopped. To me that story captures this tension we have between an instinctive, intuitive response to something wrong and a reflective part that says: OK, wait a minute, is that what we ought to do in this situation?

Are you saying there's a response to right and wrong that occurs before you've had time to think?

Yes. When we are shown pictures of a face and of a rock, we know we're seeing a person in one case and not in the other. Only after that initial sorting do we reflect on what that face means. What I call the moral faculty has that same aspect: we unconsciously deliver a response to right and wrong - and I use "unconsciously" in the same sense the linguist Noam Chomsky does in his work on language. In other words, there's something about the biology of our brains that has orchestrated a set of tools to build a moral system.

That's a big claim. How do you go about demonstrating an innate moral system?

We have to look for aspects of universality by studying damage in the brain that may lead to specific deficits. We also look at children and at animals to see if they show a precursor moral sense similar to the full-blown one we see in adult humans. We look at societal behaviour across cultures, too. Euthanasia is a good example. Active euthanasia, or mercy killing, is legally prohibited in most western countries, whereas passive euthanasia, withdrawing life support, is legally permitted. Why should that distinction exist when the consequence is the same?

Some people justify honour killings - whether in the old South of the US or in modern Pakistan. How could a universal moral sense allow for actions like these, which other people find repugnant?

When biologists take a deep look at variation, they typically uncover a set of underlying mechanisms that have extraordinary power to account for it. In the case of language, there are universal principles and then there are parameters which are set by local conditions. A lot of the variation in the moral sphere shows up in action rather than in judgement. So an honour killing is what some people in some contexts "do" in response to what most people would agree was some sort of violation.

Do emotions play a role in our moral sense?

There's no doubt emotions play a role. The question is whether emotions are causally necessary for moral judgement. Take psychopaths: we know they have some kind of emotional deficit because they don't seem to feel remorse or empathy for their victims. Does this deficit corrupt their moral judgement because emotions are necessary to make the right decision? Or is their moral judgement actually intact, but their behaviour screwed up because their emotions don't stop them from doing the wrong thing? My guess is the latter - that emotions flow from moral judgements and control behaviour.

Does the notion of fairness fit into all this?

There's a rich body of experimental literature to do with fairness and how people negotiate in bargaining situations. One of the most famous examples is the "ultimatum game", in which one person is a donor and the other a recipient. The donor is given $10 and has to offer the other person some part of that money. If the recipient accepts, he or she gets what was offered and the donor gets what's left. If he or she rejects what's offered, nobody gets anything. The standard model in economics is self-interest, which argues that the donor should give very little and the recipient should accept it, however small it is, because something is better than nothing.

Is that what happens?

No. Most westerners feel that if the donor has been given this money out of nowhere it's not fair to give away anything less than about 40 per cent. When this game is played across cultures, including in small-scale hunter-gatherer societies, you find the principle of fairness coming through. What varies is how much the donors give and what is rejected.
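
For readers who want the arithmetic of the game spelled out, here is a minimal sketch in Python of the payoff rule described above. The $10 stake and the roughly 40 per cent rejection point come from the interview; the function name and the fixed rejection threshold are illustrative assumptions, not a model drawn from Hauser's research.

  STAKE = 10.0

  def play_round(offer, min_fair_share=0.4):
      # Return (donor_payoff, recipient_payoff) for a single round.
      # If the recipient accepts, the money is split as offered;
      # if the recipient rejects, both players get nothing.
      accepted = offer >= min_fair_share * STAKE
      if accepted:
          return STAKE - offer, offer
      return 0.0, 0.0

  # Pure self-interest prediction: offer the smallest positive amount,
  # and the recipient should accept, because something beats nothing.
  print(play_round(offer=0.01, min_fair_share=0.0))   # (9.99, 0.01)

  # What people actually do: low offers tend to be rejected as unfair,
  # so both players walk away with nothing.
  print(play_round(offer=1.0))    # (0.0, 0.0)
  print(play_round(offer=4.0))    # (6.0, 4.0)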

Does this tell us something about the evolutionary roots of our moral sense?

Animals cooperate with kin all the time, but there seems to be a flavour of cooperation that may be uniquely human, namely reciprocity with genetically unrelated individuals. My guess is that one of the things reciprocity requires is delayed gratification. If I give you something, I've got to be patient to wait for you to give me something in return and you have to avoid the temptation of keeping everything instead of reciprocating. Reciprocity among animals may be rare, not because it isn't advantageous but because, psychologically, animals can't do it.

Are you saying our moral sense is a consequence of the ability to think about the future?

Yes. Some aspects of what we see in the moral domain could be by-products of evolutionary processes that have nothing to do with morality per se. Our capacity for future-oriented behaviour has a dramatic impact on what we do morally. This comes back to the point about moral judgements versus actions. When most people think about morality, they think about giving in to temptation. The ability to foresee future rewards and avoid giving in is critical to morality.

Words like "temptation" suggest a link between our moral sense and religion.

What interests me is the assumption that morality and religion are synonymous. The evidence we have suggests that having a religious background makes no difference to your moral judgement. Take the runaway train problem or the organ donor case. A religious person will judge those cases exactly the same as an atheist does. The atheist could take this as an argument for the biological roots of our moral system, while a religious person might argue God or some divine power handed us the universality. What our investigations also show, however, is that religious doctrines can't explain the range and the subtleties of our moral judgements. For example, the commandment "Thou shalt not kill" simply doesn't cover all the variations people seem equipped to deal with.

Many people hope civilisation will move us ever on to higher moral states. Is this an illusion?

I guess the question is: what's really changing? For me, it's actions in the world. Current research shows, for example, that racial attitudes and prejudices just aren't going away, no matter how much we claim they are. This suggests innate biases. You're never going to get rid of them if our moral sense is biologically based. But we can have interaction between the intuitive and the reflective sides.

So improving our society would mean both accepting our biology and reflecting on it?

Yes. In the 1960s, the political philosopher John Rawls argued that we should take the notion of moral intuition seriously, but added that we had to enter into what he called "reflective equilibrium", where we think about those judgements. What you call moving to a higher moral state is where the reflective part comes in, and where moral philosophers and legal scholars have their greatest impact.

Has researching our moral sense changed you?

I think so. After you've been in academia a while, you think "I am living this great life", and while people may say what I do is a contribution to society, teaching Harvard kids is so enjoyable it's like a trip to the candy store. My wife and I travel a lot. We've lived in developing countries and seen some horrible things that happen in the poorer places of the world. I hope for a relatively early retirement so I can work on morally relevant problems.

Profile

Marc Hauser is professor of psychology and of biological anthropology and director of the Cognitive Evolution Laboratory at Harvard University. He is the author of The Evolution of Communication, Wild Minds and, most recently, Moral Minds: How nature designed our universal sense of right and wrong (HarperCollins)

From issue 2593 of New Scientist magazine, 07 March 2007, pages 44-45