Mental Testing and the Structure of Society

Lecture 20.  October 30 and November 3.

Afterthoughts.  We have looked at the small beginnings of mental testing in the work of Galton and Binet.  However, it would not be long before mental testing became involved in large-scale projects to remake society in ways that would have been familiar to Plato: the building up of an intellectually meritocratic ruling elite.

Forethoughts.  The first use of intelligence tests to sort masses of people into ability groupings came in WW I.  Led by the then president of the APA, Robert Yerkes (a distinguished comparative psychologist), psychologists developed the Army Alpha (for literate conscripts) and Army Beta (for illiterate conscripts) IQ tests.  These tests sorted men into A, B, C, D, and F categories.  The “A” men went to officer candidate school, the “B” and “C” men into the regular army, and the “D” and “F” men were washed out.

This use of IQ testing to sort people by ability aided the development of Galton’s plans for what he dubbed eugenics, the controlled breeding of human beings along the lines of what was done with horses and agricultural products.  Galton hoped to use IQ tests to identify the brightest minds of each generation and to encourage the smart to marry the smart, rewarding such marriages with social recognition and government grants of money.  This kind of eugenics is known as positive eugenics.  In the US in the early 20th century, positive eugenics led to exhortations to “sow only fit seed” and to Fitter Family contests at state fairs.  Negative eugenics involves attempting to prevent the “unfit” from having children, and in the same period IQ tests were often used as the basis for institutionalizing and/or sterilizing “unfit” men and women in the US and elsewhere, most notoriously in Nazi Germany but also in several Scandinavian countries.  A useful resource on eugenics in the US is D. Kevles, In the name of eugenics: Genetics and the uses of human heredity, Harvard University Press, 1998.

The Army tests also led to the SAT, the Big Test (N. Lemann, The Big Test: The secret history of the American meritocracy, Farrar, Straus and Giroux, 1999) most Americans take and fear, because it does seem to sort people into certain life tracks, exactly as in Plato’s Republic.  James Bryant Conant (1893-1978), scientist and president of Harvard University, set out to destroy what he called the “Episcopacy,” and the SAT was his weapon.  The “Episcopacy” was a group of families, largely Episcopalian, who for generations had led American business and government.  Conant saw this as undemocratic, and wanted America to be led not by a quasi-aristocratic inherited elite but by an elite of merit, of intelligence.  He inspired and guided the construction of the SAT as a socially usable measure of intelligence by which elite universities could choose their students on merit rather than by family tree.  The troublesome possibility exists, however, that merit and breeding may merge.  If possessors of high IQ are thrown together in their 20s, they will tend to marry one another, and if IQ is heritable, then their children will do well on the SAT and go to elite schools, as will their grandchildren, great-grandchildren, and so on.

On the other hand, research on successful entrepreneurs (e.g., T. Stanley, The millionaire mind, Andrews McMeel, 2001) suggests that the qualities making for business success are more personal than intellectual.  I once had an email exchange with a journalist who wrote about this topic for the Washington Post.  She said that her favorite license plate was one on an expensive sports car that read “LOW SATS.”

The fact remains, however, that psychology and its tools have a great deal of power in the modern world.

Published on November 16, 2008 at 8:31 pm


Lecture 19.  October 28 and 29.

Afterthoughts.   In order to make a place for psychology at the table of science, psychology’s founder, Wilhelm Wundt, proclaimed an “alliance” between philosophical psychology and physiology, fulfilling an unbroken tradition reaching back to ancient Greece.  Claimed as an ally, physiology at the same time posed a danger to psychology, threatening its autonomy as a science by reducing mental concepts to neural facts and its existence as a discipline by revealing its subject matter—the soul, or its replacement, consciousness—to be an illusion.  Redefining psychology as the science of behavior dispensed with the ghost in the machine, but only postponed psychology’s reckoning with physiology.  While the biological substrates of behavior eluded early neuroscience, they had to be there and would one day be discovered. 

The aim and concepts of reductionism were developed by philosophers in the positivist tradition that founded philosophy of science as a discipline.  The early positivists, led by Auguste Comte, suggested that the sciences could be arranged in a historical and philosophical hierarchy reflecting their relative appearance in time and their relative philosophical status, from the last-developing and least basic science (sociology) to the first-developing and most basic science (physics).  The idea was, very roughly (details were worked out later), that the laws of group behavior (sociology) would reduce to the more basic laws governing the behavior of the humans comprising social groups (psychology), whose laws in turn would reduce to the more basic laws of biochemistry governing each person’s nervous system (neuroscience), which would reduce to the laws of chemistry, and thence to the laws of the particles making up each chemical element (physics).

The problem of reduction arises when a domain is addressed by two theories, raising the question of how such theories might relate to one another.  Historical examples include theories concerning the movements of the planets, heat, and the behavior of gases.

One possibility, of course, is replacement: One theory is correct and the other is wrong, and the first, typically newer, theory replaces the other.  A paradigm instance is the replacement of the Ptolemaic, earth-centered, account of the solar system by the Copernican, sun-centered, one.  In this case the conceptual furniture of the universe was left unchanged: Moon and Mars, Jupiter and Sol remained, but their positions and motions were understood and explained in new ways.  In other instances, replacement entails the complete elimination of things posited by older theories.  For example, as the atomic understanding of matter and energy progressed in the late 18th and 19th centuries, older concepts used to explain phenomena such as heat were found to be without reference: Elimination was the fate of phlogiston and caloric, and of fluid theories of heat and electromagnetism generally.

The second possibility is reduction:  A theory might turn out to be valid at one level of description and explanation, but reducible to a more basic and more general theory.  A paradigm instance is the relation between the classical gas laws and the atomic theory of matter.  Early physicists had shown that the behavior of gases could be predicted and explained by laws relating the variables pressure, temperature, and volume.  So, for example, pressure cookers hold a gas (steam) at a constant volume, so that as the trapped air and water vapor are heated the pressure, and with it the temperature, in the cooker rises; on the other hand, heating the air in a hot-air balloon causes its bladder to expand.  The gas laws were mathematically precise and descriptively true.  As the atomic theory of matter developed, however, heat came to be understood as the rapidity of molecular movement in a physical body: the more rapid the motion, the higher the temperature.  Applied to gases, atomic theory explained why the gas laws were true.  In the pressure cooker, the molecules of water vapor trapped inside move faster and faster as heat is applied, striking the walls harder and more often, and so pressure and temperature rise; in the hot-air balloon, the molecules of heated air push against the enclosing bladder, forcing it to expand, and volume increases.
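The relation can be put in standard textbook form (a compact summary from elementary physics, not from the lecture itself): the classical gas law states a regularity among macroscopic variables, and kinetic theory explains that regularity by identifying temperature with mean molecular kinetic energy.

```latex
% The reduced theory: the ideal gas law, relating pressure P, volume V,
% amount of gas n, and absolute temperature T (R is the gas constant).
PV = nRT

% The reducing theory: kinetic theory identifies temperature with the
% mean translational kinetic energy of the molecules (mass m, speed v,
% Boltzmann constant k_B), from which the gas law can be derived.
\frac{1}{2}\, m \langle v^2 \rangle = \frac{3}{2}\, k_B T
```

Heating the sealed cooker raises the mean molecular speed, and faster molecules striking the walls harder and more often just is higher pressure; the macroscopic law is retained, but now explained.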

In a reduction, the reduced theory is retained in science, but is explained at a lower level of discourse (atoms rather than gases) and is incorporated into a broader, more general, account of nature (gases are seen to follow the same principles, and to be made of the same stuff, as all matter without exception).  This example shows that when psychologists discuss and fear “reductionism,” they usually are discussing and fearing replacement instead.  Note, also, that replaced theories, even though known to be false, may be retained for practical use.  Calculating one’s location on the earth under Ptolemaic assumptions is much easier than under Copernican ones, and for centuries after Copernicus’s On the Revolutions of the Heavenly Spheres, sailors sailed the seas of a notionally earth-centered universe.

Forethoughts.  Early psychologists flirted with reductionism, but most moved away from it.  Wundt’s alliance with physiology weakened during his career.  In his early writings, he often proposed physiological accounts of mental processes such as attention, but in the end the alliance became more a matter of experimental method than of theoretical substance.  Freud was besotted with the prospect of reduction in his “Project for a scientific psychology,” but never published it, although its ghostly echoes remain in his later so-called “pure psychology.”  Behaviorists were similarly ambivalent in their relationships with physiology.  John Watson, who launched the behaviorist movement, was a materialist and sometimes talked like a reductionist and eliminativist, but it was more bluster and attitude than a real attempt to do psychology as physiology.  His student Karl Lashley did try to carry out a reductionist program with respect to learning, but it never came to anything, probably because the research tools needed lay decades in the future.

In the later 20th century, cognitive psychologists and allied philosophers of mind declared their independence from physiology and denied that cognitive theories could be reduced to or eliminated by neuroscience.  Their most formidable argument derived from the symbol-system version of cognitive psychology, and is known as the argument from multiple realizability.  In brief, the argument is this.  In the symbol-system view, cognitive processes consist in the manipulation of symbols by logical rules.  Symbol manipulation can be performed equally well by different physical devices, most notably organic brains made of tissue and electronic brains made of silicon and metal; hence the familiar metaphor that the mind is like a computer, or, more precisely, that mind is to brain as program is to computer.  Cognitive theorizing, whether in psychology or artificial intelligence, was about formally defined symbols and rules; how symbols and rules were grounded in a brain or a computer was “mere implementation.”  Taken to the limit, this meant that a person’s mind could, in principle, be written as a computer program and downloaded into a computer, with no resulting change in behavior.

Important to the argument was the distinction between types and tokens.  Each person is a token of the (conceptual) type “human being”; each dime in your pocket is a token of the type “dime.”  The beauty of multiple realizability—known in philosophy as non-reductive physicalism—was that it was materialist—no soul-stuff need be invoked—yet it preserved the theoretical autonomy of psychology.  Every mental event, or token (in the sense of a piece of cognitive computation), corresponded to some physiological or electronic token, but no mental type shared across cognizers, organic or inorganic (e.g., knowing that a dime is a unit of US currency), corresponded to any one physical type across the systems implementing it.  The idea is perhaps clearest in the case of computer programs.  One can play a game such as Command and Conquer™ on a PC, an Xbox, or a Mac, and it will look and feel the same even though the underlying machine code is different in each device.  Reduction is therefore only trivially true and poses no threat to psychology.  Brains and machines carry out computations, but no theoretical gain is won by worrying about how they do so.  Description, prediction, and control, the scientific goals of theorizing, can be fully met at the cognitive level.
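A toy sketch may make the point concrete (my own illustration; the symbol names and rule format are invented for the example).  One cognitive-level specification, “derive whatever follows by modus ponens,” is realized by two structurally different pieces of machinery; at the cognitive level of description they are the same type, while at the implementation level they are distinct tokens.

```python
# Multiple realizability in miniature: one cognitive-level specification
# ("close a set of symbols under modus ponens") realized by two
# structurally different implementations. The input-output behavior --
# the level at which cognitive theory operates -- is identical, even
# though the underlying machinery differs.

def modus_ponens_dict(facts, rules):
    """Realization 1: rules stored as a dict {antecedent: consequent}."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules.items():
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

def modus_ponens_list(facts, rules):
    """Realization 2: rules stored as a list of (antecedent, consequent) pairs."""
    derived = list(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.append(consequent)
                changed = True
    return set(derived)

facts = {"socrates_is_a_man"}
rules_as_dict = {"socrates_is_a_man": "socrates_is_mortal"}
rules_as_list = [("socrates_is_a_man", "socrates_is_mortal")]

# Same behavior, different physical/structural realization:
assert modus_ponens_dict(facts, rules_as_dict) == modus_ponens_list(facts, rules_as_list)
```

At the level of input-output behavior nothing distinguishes the two realizations; that, in miniature, is why the multiple-realizability argument holds that implementation details yield no theoretical gain.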

Nevertheless, reductionist and eliminativist proposals have been revived in the 21st century, as neuroscience has made enormous advances in understanding the physiological mechanics of mental processes. Clinical psychologists are struggling to survive in the age of Prozac.  Even economics, which would seem immune to reductionism because it deals with social entities such as money and interest rates, has within it a new approach called neuroeconomics.

Operational Definition, Legacy of Positivism

Lecture 18.  October 23 and 27.

Afterthoughts.  Positivists such as Comte believed that the basis of all truth-claims was direct observation of nature — positive knowledge.  We have seen that this caused a problem for science later in the 19th century when physicists and chemists started talking about things such as atoms and sub-atomic particles, and physicists such as Mach opposed their acceptance into science on positivist grounds.  They thought that all speculative beliefs, whether about God or angels or demons or even atoms, should be shunned as unverifiable and unscientific.  However, Mach and the strict positivists lost that debate.

Forethoughts.  But positivism adapted.  In the early 20th century a new form of positivism, Logical Positivism, arose in Vienna, along with psychoanalysis, the Bauhaus movement, and other ideological fruits of modernity, and it dominated philosophy of science for 75 years.  Logical Positivism reconciled traditional positivism’s grounding of knowledge in observation with the scientific use of terms referring to unobservable entities by means of a concept psychologists know as operational definition.  According to LP, a concept that seems to refer to something unobservable is legitimate in science if and only if the concept can be linked to something observable, typically a measurement or procedure of some kind; hence the phrase operational definition.  The term is defined by a scientific operation that can be observed.  Thus, for example, “mass” is a property of objects that cannot be seen, but it can be operationally defined as the result of weighing an object at sea level.  Or “electron” might be defined as a characteristic tracing on a photograph from a particle collider.

Notice that there is a clever move here.  I wrote above about concepts that “seem to refer” to something unobservable.  Most scientists and ordinary people would think that “electron” refers to a particle too small to be seen, but LP denies this, because it, like traditional positivism, wants to exclude unobservable entities from science as dangerously metaphysical or religious.  According to LP, the meaning of a scientific term is exhausted by its operational definition — theoretical terms don’t refer to anything at all beyond the operations used to define them.

Operational definition was introduced to psychology in the 1930s by the psychophysicist S. S. Stevens, and it had an enormous influence on the field, an influence still felt today anytime a psychologist “operationalizes” a concept.  Operationism gave a huge boost to the redefining of psychology as the science of behavior rather than as the science of the mind.  Because consciousness is private, it cannot be observed by the scientific community and cannot, therefore, produce positive knowledge.  Behavior, however, can be observed, and theoretical terms that allegedly refer to mind, such as “drive,” “habit,” or “cognitive map,” could be redefined operationally as “hours of food deprivation,” “number of reinforced responses,” or “locating the food in a maze.”  Perhaps the most famous operational definition in psychology was given by E. G. Boring: “Intelligence is what the tests test.”  It’s important to note that according to LP there is no such thing as (or need be no such thing as) drive, habit, cognitive map, or intelligence; there are just the operational definitions of the terms.  Legitimating theoretical terms in science was, for LP, just a language trick.
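The positivist move can be caricatured in a few lines of code (my own sketch; the constructs and measures are the examples from the paragraph above): each theoretical term is identified with, and exhausted by, an observable measurement procedure, and has no further content.

```python
# Sketch of the logical-positivist view of theoretical terms: a
# "construct" is nothing over and above its operational definition,
# i.e., the observable measurement procedure that defines it.

operational_definitions = {
    # construct        -> the observable operation that defines it
    "drive":           lambda subject: subject["hours_of_food_deprivation"],
    "habit_strength":  lambda subject: subject["reinforced_responses"],
    "intelligence":    lambda subject: subject["iq_test_score"],  # "what the tests test"
}

def measure(construct, subject):
    """On the LP view, this lookup exhausts the meaning of the construct."""
    return operational_definitions[construct](subject)

rat = {"hours_of_food_deprivation": 23, "reinforced_responses": 40, "iq_test_score": None}
assert measure("drive", rat) == 23
```

The design makes the philosophical point visible: there is no `Drive` object anywhere in the program, only a name bound to a measurement procedure, which is exactly what LP claimed about the theoretical vocabulary of science.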

However, despite psychologists’ continued devotion to operational definition, there’s really no such thing.  If you are interested in more on this topic, see Green, C. D. (1992).  Of Immortal Mythological Beasts: Operationism in Psychology.  Theory and Psychology, 2, 291-320.  The article is available on Green’s website: click on the “Research and CV” tab, and then choose the highlighted title of the article.

The Authority of Science, the Boulder Model, and Clinical Psychology

Lecture 17.  October 21 and 22.

Afterthoughts. Some questions and remarks after class suggested I should clarify what I meant about the peculiarity of the Boulder Model scientist-practitioner model of training in clinical psychology.

An important feature of modernism is the introduction of rationality and science as conferring social authority.  Authority is an important concept — it confers legitimacy on a person’s or institution’s influence on others.  It is much more than mere power.  For example, a physician can write a prescription for you, but he or she cannot force you to take it (and the use of force on inmates in psychiatric facilities has been the subject of much controversy, lawmaking, and litigation on precisely this point).  Prior to the rise of science, the most important sources of authority were religion and tradition, the authority of the priest and the aristocrat targeted for extinction by Voltaire.  But (see Condorcet) the Enlightenment introduced a new, potentially highest, authority, reason, and the institution that embodies this authority above all is science.  As Dr. Venkman says in Ghostbusters, “Back off, man, I’m a scientist!”

But what gives science authority?  One is first tempted to answer, knowledge: Workable, valid, knowledge about how the world works.  So you trust the doctor because he or she knows more about the causes and cure of diseases than you do.  But we must think more deeply.  We trust the knowledge of science because of how it was obtained — rationally, through scientific research.  Scientists go to a great deal of trouble to ensure that their conclusions are reached through rational procedures.  That is why, for example, articles go through peer-review and instances of fraud evoke such horror among scientists.  Journals don’t just publish every article that comes in the mail, and scientists who commit fraud are drummed out of the scientific community.  Science is a collection of practices that happens to produce knowledge, not just an accumulated collection of facts.  Scientific authority is rooted in its practices, not the body of ideas currently found in texts.  Ideas may be wrong, and are replaced by new ones, but the practices of science remain to continue to weed out false ideas and create better ones.

After World War II, psychology saw the opportunity to create a new profession, that of clinical psychologist practicing psychotherapy, previously the exclusive bailiwick of psychiatrists.  Let’s go back to the physician, remembering that psychiatrists are physicians.  Physicians have ample biological knowledge, and it is in that knowledge that their claim to authority lies.  However, the typical physician is not trained as a scientist, in the practices of scientific research, and has probably not carried out any original research.  The physician is a practitioner of a craft, medicine, not a research scientist.  Thus the physician’s authority is second-hand, rooted not in the rational practice of science but only in the study of the fruits of that practice.  

If clinical psychologists had been trained as physicians were, they would have no more authority than psychiatrists, and indeed would have less, as they would have no training in medicine.  Moreover, psychiatry was an already existing, high-prestige profession.  One way to increase the authority of clinical psychologists, then, was to make them scientists, producers of knowledge, not just users of knowledge.  Their training as PhDs places them one step closer to the rationality of science than that of MDs, and thus they can say what an MD cannot: “Back off, man, I’m a scientist!”

Forethoughts. Other questions concerned careers in clinical psychology.  Clinical psychology faces serious challenges today on three fronts.  First, there is managed care, which seeks to rein in medical costs and has subjected psychotherapy, whose outcomes are hard to test and often of marginal effect size, to especially stringent controls.  In connection with this, second, there is the rise of licensed clinical social workers (and to some degree PsyD holders), who also perform psychotherapy, but whose training is briefer and who can be produced in much larger numbers than PhD clinical psychologists (just compare the graduating class sizes of VCU’s School of Social Work with our Department’s Clinical Program).  Third, there is the ongoing biological revolution in psychiatry, because of which it’s possible to treat mental disorders with medications only an MD can prescribe.  Simply put, the market for PhD clinical psychologists has shrunk over the past few decades and is likely to shrink further.  The APA is trying to cope with all these changes (e.g., by working to get clinical psychologists prescription privileges), but the glory days of clinical psychology practice are probably over.