Operational Definition, Legacy of Positivism

Lecture 18.  October 23 and 27.

Afterthoughts.  Positivists such as Comte believed that the basis of all truth-claims was direct observation of nature — positive knowledge.  We have seen that this caused a problem for science later in the 19th century when physicists and chemists started talking about things such as atoms and sub-atomic particles, and physicists such as Mach opposed their acceptance into science on positivist grounds.  They thought that all speculative beliefs, whether about God or angels or demons or even atoms, should be shunned as unverifiable and unscientific.  However, Mach and the strict positivists lost that debate.

Forethoughts.  But positivism adapted.  In the early 20th century a new form of positivism, Logical Positivism, arose in Vienna, along with psychoanalysis, the Bauhaus movement, and other ideological fruits of modernity, and it dominated philosophy of science for 75 years.  Logical Positivism reconciled traditional positivism’s grounding of knowledge in observation with science’s use of terms referring to unobservable entities by means of a concept psychologists know as operational definition.  According to LP, a concept that seems to refer to something unobservable is legitimate in science if and only if the concept can be linked to something observable, typically a measurement or procedure of some kind, hence the phrase operational definition.  The term is defined by a scientific operation that can be observed.  Thus, for example, “mass” is a property of objects that cannot be seen, but it can be operationally defined as the result of weighing an object at sea level.  Or “electron” might be defined as a characteristic tracing on a photograph from a particle collider.

Notice that there is a clever move here.  I wrote above about concepts that “seem to refer” to something unobservable.  Most scientists and ordinary people would think that “electron” refers to a particle too small to be seen, but LP denies this, because it, like traditional positivism, wants to exclude unobservable entities from science as dangerously metaphysical or religious.  According to LP, the meaning of a scientific term is exhausted by its operational definition — theoretical terms don’t refer to anything at all beyond the operation used to define them.

Operational definition was introduced to psychology in the 1930s by the psychophysicist S. S. Stevens, and it had an enormous influence on the field, an influence still felt today anytime a psychologist “operationalizes” a concept.  Operationism gave a huge boost to the redefining of psychology as the science of behavior rather than as the science of the mind.  Because consciousness is private, it cannot be observed by the scientific community, and cannot, therefore, produce positive knowledge.  However, behavior can be observed, and theoretical terms that allegedly refer to mind such as “drive,” “habit,” or “cognitive map” could be redefined operationally as “hours of food deprivation,” “number of reinforced responses,” or “locating the food in a maze.”  Perhaps the most famous operational definition in psychology was given by E. G. Boring: “Intelligence is what the tests test.”  It’s important to note that according to LP there is no such thing as (or need be no such thing as) drive, habit, cognitive map, or intelligence; there are just the operational definitions of the terms.  Legitimating theoretical terms in science was, for LP, just a language trick.

However, despite psychologists’ continued devotion to operational definition, there’s really no such thing.  If you are interested in more on this topic, see Green, C. D. (1992).  Of immortal mythological beasts: Operationism in psychology.  Theory & Psychology, 2, 291-320.  Available at http://www.yorku.ca/christo.  Click on the “Research and CV” tab, and then choose the highlighted title of the article.


Decision Making: Outcome or Process? Reason or Emotion?

Lecture 16.  October 14 and 20.

Afterthoughts.  In examining the moral question, we looked at two kinds of moral theories, Bentham’s consequentialist utilitarian theory and Kant’s deontological moral duty theory.  Note that the first evaluates the moral rightness of a decision by its outcome, whereas the latter evaluates the moral rightness of a decision by the reasoning that led to the decision, no matter what the outcome.  Despite their differences, Bentham’s and Kant’s theories were (and are) part of the Enlightenment Project’s goal of grounding human life in reason, rather than in tradition or revelation (recall what Voltaire said about the aristocrat and the priest).  Bentham and Kant disagreed about what constituted proper moral decision making — a felicific calculus of pleasure and pain vs. formulating universally commanding categorical imperatives — but they agreed that genuinely moral actions must be grounded in reason.  Kant was especially clear that seemingly moral actions that flowed unreflectively out of a person’s character or animal instincts were not really moral at all, since no thought lay behind them.

On the other hand, the leading Counter-Enlightenment thinker, Herder, argued that moral decisions were rooted in emotion, and the Scottish Commonsense philosophers held that moral intuitions were just that — immediate intuitions of right and wrong produced by a God-given moral sense and felt by us as sentiments of approval or disapproval.  Note that these emotion-based theories of moral action cross in a kind of 2 x 2 design with the concerns of Bentham and Kant.  It might be that our feelings of right and wrong have to do with happiness (a la Bentham) or might result from some larger moral concern for justice (a la Kant).

Forethoughts.  This yields quite a stew of ideas about how people make moral decisions for later psychologists to wrestle with.  The problem will become much more complex when the notion of unconscious mental processes gains favor in the nineteenth and twentieth centuries.   Let’s take Bentham’s theory as an example.  

Obviously, if someone sits down and weighs the happiness pros and cons of a decision, he or she is adhering to Bentham’s utilitarian precepts.  Charles Darwin, for example, will do this about marriage, not with regard to marrying a specific person, but with regard to whether or not to get married at all.  Another person might decide to marry without giving the matter any conscious thought.  Is he or she being utilitarian?  On the surface, no, because the decision is reached without conscious consideration, and thus nonrationally.  But on the other hand, the person may have carried out the utilitarian calculus unconsciously, being conscious only of the outcome of the felicific calculus, not its process.  If the latter is the case, determining whether the calculation was made, and therefore whether the decision was rational, may be difficult.  We can see only the outcome, not the process.  Moreover, introducing the unconscious throws new light on the morality-as-thinking vs. morality-as-feeling argument.  It might be that seemingly irrational, emotionally driven actions are really rational after all, because the experienced feeling was the outcome of a non-conscious, but rational, calculating process.
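The felicific calculus described above can be caricatured as a probability-weighted sum of anticipated pleasures and pains.  The sketch below is only an illustration of that idea; the factors, probabilities, and hedonic values are invented, not anything Bentham or Darwin actually wrote down.

```python
# A toy felicific calculus: each anticipated consequence of a decision
# gets a signed hedonic value (pleasure > 0, pain < 0) weighted by its
# estimated probability. All numbers here are invented for illustration.

def felicific_score(consequences):
    """Sum of probability-weighted pleasures and pains."""
    return sum(prob * hedonic for prob, hedonic in consequences)

def decide(option_a, option_b):
    """Choose whichever option promises the greater net pleasure."""
    return "A" if felicific_score(option_a) >= felicific_score(option_b) else "B"

# A Darwin-style deliberation: marry vs. stay single (hypothetical values).
marry = [(0.9, 5), (0.6, -3), (0.3, 8)]   # companionship, lost freedom, children
stay_single = [(0.95, 4), (0.5, -2)]      # uninterrupted work, loneliness

print(decide(marry, stay_single))  # -> A (marry scores 5.1 vs. 2.8)
```

The point of the sketch is the one made in the paragraph above: whether this computation runs consciously on paper or unconsciously in the head, only its output is visible to an observer.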

These questions are extremely important today.  Recall Condorcet — the day will come when people will live only according to reason.  This was meant as a statement of liberation from blind tradition and ignorant faith, but it lays down a Kantian imperative: Everyone must live only according to reason, just as in former times they had to live according to tradition and God’s law.  Suppose, however, that people routinely make decisions that are not according to reason, either consciously or unconsciously.  It’s not hard to set up experiments that put people in situations requiring a moral decision, such as the Ultimatum Game (http://neuroeconomics.typepad.com/neuroeconomics/2003/09/what_is_the_ult.html), and to see whether the outcome of the decision is in accord with normative theory.  If it’s not, then the decider must be either irrational (there’s no calculation going on, consciously or unconsciously) or incompetent (the calculations are attempted, but the obtained answer is wrong).
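The Ultimatum Game comparison can be made concrete.  Under the standard income-maximizing analysis, a responder should accept any positive offer, since rejecting pays zero; real responders often reject low offers.  The sketch below contrasts the two, with the caveat that the 30% rejection threshold is a stand-in for illustration, not a measured experimental value.

```python
# Normative vs. observed responder behavior in the Ultimatum Game.
# The normative (income-maximizing) responder accepts anything positive;
# the "observed" responder rejects offers below a threshold share of the
# pot. The 0.3 threshold is an illustrative assumption, not a datum.

def normative_responder(offer, total):
    """A pure income-maximizer accepts any positive offer."""
    return offer > 0

def observed_responder(offer, total, threshold=0.3):
    """Many real responders reject offers below ~threshold of the pot."""
    return offer >= threshold * total

total = 10
low_offer = 1  # proposer keeps 9, offers 1

print(normative_responder(low_offer, total))  # True: normative theory says accept
print(observed_responder(low_offer, total))   # False: typical behavior is to reject
```

The gap between the two functions is exactly the experimental result the paragraph above describes: behavior that departs from normative theory, leaving open whether the departure is irrationality, miscalculation, or a different calculus altogether.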

Then, what if research along these lines consistently demonstrates that the vast majority of people make such “irrational” decisions all the time?  It would then appear that if people are supposed to live according to reason, they are incapable of doing so on their own, and others will have to do their thinking for them.  For example, in his book What’s the Matter with Kansas?, pundit Thomas Frank argues that many voters (Kansas is just an example) have been misled into voting against their own self-interest, i.e., irrationally, by appeals to emotionally charged cultural issues such as abortion, gay marriage, religion, and guns.  The psychologist Keith Stanovich, in The Robot’s Rebellion, concludes from research on thinking and decision making that most people are not rational, and calls for a great project of cognitive reform.  Like Kant, Stanovich believes that values, not just instrumental means-ends calculations, must be chosen by reason.

A practical example of this approach can be found in the claim of some economists that the recent sell-offs on Wall Street were produced by a non-rational cognitive shortcut called the availability heuristic (see http://www.marginalrevolution.com/marginalrevolution/2008/10/where-is-the-cr.html).

An example of a moral conundrum that has been extensively researched is the Trolley Problem (http://en.wikipedia.org/wiki/Trolley_problem).  We will return to it later, because bringing evolution into the decision-making picture will complicate things still further.