Saturday, August 25, 2007

Dissecting Morality

I was involved in a discussion concerning morality. In that discussion, I identified two ways of applying morality: moral absolutism and moral relativism. At that point, someone challenged me to define morality and these two ways of applying it.

Here is my answer:

Until recently, research in cognitive science has been based on the assumption that decision making is a self-interested utilitarian process: choice is based on what best serves our goals.

Recently, studies by Marc Hauser, a professor of Biological Anthropology at Harvard University, point to non-utilitarian aspects of the decision making process.

In his studies, subjects were presented with scenarios like the runaway trolley scenarios that I've previously posted in the "life crisis" forum.

A trolley loses its brakes and is rolling out of control down a hill. It is about to hit five people who cannot get out of the way. Between the trolley and the five people is a track switch. If the trolley is switched to the alternate track, it will hit only one person. Is it acceptable to switch the track so that the trolley hits only one person?

Almost everyone answered the question with "yes". Hitting one person is better than hitting five.

Then, the subjects were given a new scenario:

There is no switch between the trolley and the five people. However, there is a person large enough to stop the trolley if pushed in front of it. Is it acceptable to push the large person in front of the trolley to save the five people?

Almost everyone answered the question with "no".

The results were consistent across people of varying religious beliefs, cultures, ethnicities, age groups, and socioeconomic classes.

Occasionally, someone may answer yes to both. However, when the questioning digs deeper, the results are consistent with the norm.

For example, Hauser's father is a medical doctor and a stoic thinker. His initial response was yes to both, since both scenarios result in saving five lives instead of one. So Hauser posed a scenario closer to home (in this case, closer to work):

You have five patients who need organ transplants but cannot find matching donors. A healthy person who is a perfect match for all five patients is available. Would you sacrifice the life of the healthy person to save the lives of the five?

His answer was, "Of course not!"

Then, how can you push the large person in front of the trolley to save the five?

With that, Hauser's father changed his position.

Both scenarios involve sacrificing one life to save five, yet the latter is unacceptable. The choice made is not based on utilitarian decision making.

Not only that, it is not a Pavlovian behavior; that is, it is not a learned behavior that can be positively or negatively reinforced. Neither choice to save the five people yields a more favorable result, so there is nothing to reinforce. This non-utilitarian behavior is not learned but biologically hard-wired.

Hauser describes the non-utilitarian process as a hard-wired moral brake against the self-interest utilitarian decision-making engine.

Another example of a non-utilitarian response is the test of the self-interest economy, which I previously posted on the "life crisis" forum as "The Greed Game".

According to Adam Smith's "An Inquiry into the Nature and Causes of the Wealth of Nations", in a free market economy the self-interest of all traders dictates the distribution of all resources.

In Professor Hauser's studies, subjects were given the role of donor or recipient. Each donor was given a sum of money, out of which he or she had to offer a portion to a recipient. The recipient could accept or reject the offer. If the recipient rejected the offer, both the donor and the recipient would lose the entire sum.

If the market were driven purely by self-interest, all recipients would accept any offer greater than zero, since rejecting it would leave them with nothing; something is better than nothing.

The research, however, shows that if the offer is too low, the recipient rejects it. The posts in the "life crisis" forum yielded the same result. And like the posters in the "life crisis" forum, the research subjects identified the lack of a fair distribution as the reason for rejecting a low offer.
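To make the contrast concrete, here is a minimal sketch in Python of the two decision rules described above. The function names, the payout total, and the 30% rejection threshold are my own illustrative assumptions, not figures from Hauser's study.

```python
# A minimal sketch of the "greed game" (ultimatum-style) decision rules.
# The 30% rejection threshold and the total of 100 are illustrative
# assumptions, not numbers taken from the study.

TOTAL = 100  # the sum given to the donor (arbitrary units)

def self_interested_accepts(offer: float) -> bool:
    """A purely self-interested recipient accepts anything greater than zero."""
    return offer > 0

def fairness_sensitive_accepts(offer: float, threshold: float = 0.3) -> bool:
    """A recipient with a fairness 'moral brake' rejects offers below some
    fraction of the total -- below 50-50, but well above zero."""
    return offer >= threshold * TOTAL

if __name__ == "__main__":
    for offer in (1, 10, 25, 30, 50):
        print(f"offer={offer:3d}  "
              f"self-interest: {'accept' if self_interested_accepts(offer) else 'reject'}  "
              f"fairness-sensitive: {'accept' if fairness_sensitive_accepts(offer) else 'reject'}")
```

Running the sketch shows the gap between the two rules: the self-interested rule accepts even an offer of 1, while the fairness-sensitive rule rejects everything below its threshold.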

For more example scenarios used in his study, take the Moral Sense Test at the Harvard Cognitive Evolution Lab's web site:

http://www.wjh.harvard.edu/~mnkylab/

Subsequent research was done in several different laboratories using MRI to examine brain activity as subjects make these moral decisions. This research found activity in two different parts of the brain: the part that performs logical and computational thinking, and the part that deals with emotional response.

An example of using MRI in this research:

http://www.scielo.br/scielo.php?pid=S0004-282X2001000500001&script=sci_arttext

When the self-interest utilitarian choice wins out, the part of the brain that performs logical and computational thinking is much more active than the part of the brain that deals with emotional response.

When the non-utilitarian moral response wins out, the part of the brain that deals with emotional response is much more active than the part of the brain that performs logical and computational thinking.

This result led researchers to conclude that the hard-wired moral brake in our brain is located in the part of the brain that deals with emotional response.

In fact, MRI studies of psychopaths and sociopaths show a link between morally bad behavior and diminished mass in that part of the brain. See:

http://www.crimetimes.org/06a/w06ap10.htm

The interesting part is that, in the test of the self-interest economy (the greed game), everyone agrees that the fair distribution is 50-50. However, the threshold for rejection is not 50-50. Before the fair distribution level is reached, the self-interest utilitarian process overpowers the moral brake. (Everyone has a price.)

How does this research apply to moral absolutism and moral relativism?

Here is my conclusion:

Morality is hard-wired in the brain.

Moral absolutism is allowing the hard-wired moral brake to stop the self-interest utilitarian decision-making process when it crosses the line.

Moral relativism is when self-interest is so strong that it overpowers the hard-wired moral brake.

Often, people say that moral absolutism is not practical. However, when they say so, they are not defining impracticality as unachievable. They really mean that they are not willing to give up their self-interest.
