Decision Making: Outcome or Process? Reason or Emotion?

Lecture 16.  October 14 and 20.

Afterthoughts.  In examining the moral question, we looked at two kinds of moral theories, Bentham’s consequentialist utilitarian theory and Kant’s deontological moral duty theory.  Note that the former evaluates the moral rightness of a decision by its outcome, whereas the latter evaluates the moral rightness of a decision by the reasoning that led to it, no matter what the outcome.  Despite their differences, Bentham’s and Kant’s theories were (and are) part of the Enlightenment Project’s goal of grounding human life in reason, rather than in tradition or revelation (recall what Voltaire said about the aristocrat and the priest).  Bentham and Kant disagreed about what constituted proper moral decision making — a felicific calculus of pleasure and pain vs. formulating universally commanding categorical imperatives — but they agreed that genuinely moral actions must be grounded in reason.  Kant was especially clear that seemingly moral actions that flowed unreflectively out of a person’s character or animal instincts were not really moral at all, since no thought lay behind them.

On the other hand, the leading Counter-Enlightenment thinker, Herder, argued that moral decisions were rooted in emotion, and the Scottish Commonsense philosophers held that moral intuitions were just that — immediate intuitions of right and wrong produced by a God-given moral sense and felt by us as sentiments of approval or disapproval.  Note that these emotion-based theories of moral action cross with the concerns of Bentham and Kant in a kind of 2 x 2 design: reason vs. emotion on one axis, happiness vs. justice on the other.  Our feelings of right and wrong might have to do with happiness (a la Bentham) or might result from some larger moral concern for justice (a la Kant).

Forethoughts.  This yields quite a stew of ideas about how people make moral decisions for later psychologists to wrestle with.  The problem will become much more complex when the notion of unconscious mental processes gains favor in the nineteenth and twentieth centuries.   Let’s take Bentham’s theory as an example.  

Obviously, if someone sits down and weighs the happiness pros and cons of a decision, he or she is adhering to Bentham’s utilitarian precepts.  Charles Darwin, for example, will do this about marriage, not with regard to marrying a specific person, but with regard to whether or not to get married at all.  Another person might decide to marry without giving the matter any conscious thought.  Is he or she being utilitarian?  On the surface, no, because the decision is reached without conscious consideration, and thus nonrationally.  On the other hand, the person may have carried out the utilitarian calculus unconsciously, being conscious only of the outcome of the felicific calculus, not its process.  If so, it may be difficult to determine whether the calculation was made and the decision was therefore a rational one.  We can see only the outcome, not the process.  Moreover, introducing the unconscious throws new light on the morality-as-thinking vs. morality-as-feeling argument.  It might be that seemingly irrational, emotionally driven actions are really rational after all, because the experienced feeling was the outcome of a non-conscious, but rational, calculating process.
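The kind of weighing Darwin did can be sketched as a toy version of Bentham’s felicific calculus.  The considerations and numeric weights below are purely illustrative assumptions (they are not Darwin’s actual list, and Bentham never reduced his calculus to a single formula); the point is only that the procedure is an explicit, inspectable computation, in contrast to a decision whose calculus, if any, runs unconsciously.

```python
# A toy sketch of Bentham's felicific calculus applied to a marriage
# decision like Darwin's.  Each consideration gets a signed "hedonic"
# weight: positive for pleasures, negative for pains.  All entries and
# weights are illustrative, not Darwin's actual list.

considerations = {
    "constant companion": +3,     # pleasure
    "children": +2,               # pleasure
    "loss of time for work": -2,  # pain
    "less money for books": -1,   # pain
}

def felicific_sum(items):
    """Bentham-style tally: total pleasure minus total pain."""
    return sum(items.values())

balance = felicific_sum(considerations)
decision = "marry" if balance > 0 else "do not marry"
print(balance, decision)  # prints: 2 marry
```

A person deciding without conscious thought might, on the hypothesis discussed above, be running exactly this kind of tally unconsciously — an observer (or the person) would see only the final "marry"/"do not marry" output, never the weighted sum behind it.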

These questions are extremely important today.  Recall Condorcet — the day will come when people will live only according to reason.  This was meant as a statement of liberation from blind tradition and ignorant faith, but it lays down a Kantian imperative: everyone must live only according to reason, just as in former times they had to live according to tradition and God’s law.  Suppose, however, that people routinely make decisions that are not according to reason, either consciously or unconsciously.  It’s not hard to set up experiments that put people in situations requiring a moral decision, such as the Ultimatum Game (http://neuroeconomics.typepad.com/neuroeconomics/2003/09/what_is_the_ult.html), and to see whether the outcome of the decision is in accord with normative theory.  If it’s not, then the decider must be either irrational (there’s no calculation going on, consciously or unconsciously) or incompetent (the calculations are attempted, but the obtained answer is wrong).
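The contrast between normative theory and observed behavior in the Ultimatum Game can be sketched in a few lines.  The rejection threshold below is an illustrative assumption standing in for the widely reported pattern that low offers are often rejected; it is not a parameter from any particular experiment.

```python
# Sketch of the Ultimatum Game comparison: the normative (game-theoretic)
# prediction vs. a stylized "fairness" responder.  The threshold value is
# an illustrative assumption, not experimental data.

STAKE = 10  # amount of money to be divided

def rational_responder(offer):
    # Normative theory: any positive amount beats nothing, so accept.
    return offer > 0

def fairness_responder(offer, threshold=0.3):
    # Stylized empirical pattern: reject offers seen as insultingly low,
    # even at a cost to oneself.
    return offer >= threshold * STAKE

offer = 1  # proposer keeps 9, offers 1
print(rational_responder(offer))   # True: normative theory says accept
print(fairness_responder(offer))   # False: the stylized subject rejects
```

When the two functions disagree, the experimenter faces exactly the interpretive problem described above: is the rejecting subject irrational, incompetent at the calculation, or rationally computing something other than money — fairness, say, or justice a la Kant?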

Then, what if research along these lines consistently demonstrates that the vast majority of people make such “irrational” decisions all the time?  It would then appear that if people are supposed to live according to reason, they are incapable of doing so on their own, and others will have to do their thinking for them.  For example, in his book What’s the Matter with Kansas?, pundit Thomas Frank argues that many voters (Kansas is just an example) have been misled into voting against their own self-interest, i.e., irrationally, by appeals to emotionally charged cultural issues such as abortion, gay marriage, religion, and guns.  The psychologist Keith Stanovich, in The Robot’s Rebellion, concludes from research on thinking and decision making that most people are not rational, and calls for a great project of cognitive reform.  Like Kant, Stanovich believes that values, not just instrumental means-ends calculations, must be chosen by reason.

A practical example of this approach can be found in the claim by some economists that the recent sell-offs on Wall Street were produced by a non-rational cognitive shortcut called the availability heuristic (see http://www.marginalrevolution.com/marginalrevolution/2008/10/where-is-the-cr.html).

An example of a moral conundrum that has been extensively researched is the Trolley Problem (http://en.wikipedia.org/wiki/Trolley_problem).  We will return to it later, because bringing evolution into the decision-making picture will complicate things still further.