The Bell Curve is a very sober, very thorough, and very honest book (1.) - on a subject where sobriety, thoroughness, and honesty are only likely to provoke cries of outrage. Its authors, Charles Murray and the late Professor Richard J. Herrnstein of Harvard, must have known that writing about differences in intelligence would provoke shrill denunciations from some quarters. But they may not have expected quite so many, quite so loudly or venomously, and from such a wide spectrum of people who should know better.
The great danger in this emotional atmosphere is that there will develop a two-tiered set of reactions--violent public outcries against the message of The Bell Curve by some, and uncritical private acceptance of it by many others, who hear no rational arguments being used against it. Both reactions are unwarranted, but not unprecedented, in the over-heated environment surrounding so many touchy social issues today.
The predictive validity and social implications of intelligence test results are carefully explored by Herrnstein and Murray in more than 500 pages of text with another 300 pages of appendices, footnotes, and an index. The Bell Curve is an education on the whole subject, including the evidence pro and con on a wide variety of controversial issues. Even where the authors clearly come down on one side of a given issue, they usually present the case for believing otherwise. In such candor, as well as in the clarity with which technical issues are discussed without needless jargon, this book is a model that others might well emulate.
Contrary to much hysteria in the media, this is not a book about race, nor is it trying to prove that blacks are capable only of being hewers of wood and drawers of water. The first 12 chapters of the book deal solely with data from all-white samples, so as to be rid of the distracting issue of racial differences in IQ scores. In these chapters, Herrnstein and Murray establish their basic case that intelligence test scores are highly correlated with important social phenomena from academic success to infant mortality, which is far higher among babies whose mothers are in the bottom quarter of the IQ distribution.
Empirical data from a wide variety of sources establish that even the differing educational backgrounds or socioeconomic levels of families in which individuals were raised are not as good predictors of future income, academic success, job performance ratings, or even divorce rates, as IQ scores are. It is not that IQ results are infallible, or even that correlations between IQ and these other social phenomena are high. Rather, the correlations simply tend to be higher than correlations involving other factors that might seem more relevant on the surface. Even in non-intellectual occupations, pen-and-paper tests of general mental ability produce higher correlations with future job performance than do "practical" tests of the particular skills involved in those jobs.
In such a comprehensive study of IQ scores and their social implications, there is no way to leave out questions of intergroup differences in IQ without the absence of such a discussion being glaring evidence of moral cowardice. After ignoring this issue for the first 12 chapters, Herrnstein and Murray enter into a discussion of it in Chapter 13 ("Ethnic Differences in Cognitive Ability"), not as zealots making a case but as researchers laying out the issues and reaching the conclusions that seem to them most consistent with the facts--while also presenting alternative explanations. They seem to conclude, however tentatively, that the apparent influence of biological inheritance on IQ score differences among members of the general society may also explain IQ differences between different racial and ethnic groups.
This is what set off the name-calling and mud-slinging with which so many critics of The Bell Curve have responded. Such responses, especially among black intellectuals and "leaders," are only likely to provoke others to conclude that they protesteth too much, lending more credence to the conclusion that genetics determines intelligence. Such a conclusion goes beyond what Herrnstein and Murray say, and much beyond what the facts will support.
First of all, Herrnstein and Murray make a clear distinction between saying that IQ is genetically inheritable among individuals in general and saying that the differences among particular groups are due to different genetic inheritances. They say further that the whole issue is "still riddled with more questions than answers." They caution against "taking the current ethnic differences as etched in stone." But none of this saves them from the wrath of those who promote the more "politically correct" view that the tests are culturally biased and lack predictive validity for non-white minorities.
It is an anomaly that there exists a controversy over the predictive validity of tests. This is ultimately an empirical question, one for which there is a vast amount of data going back many years. Herrnstein and Murray are only summarizing these data when they shoot down the arguments and evasions by which the conventional wisdom says that these tests do not accurately predict future performance. Long before The Bell Curve was published, the empirical literature showed repeatedly that IQ and other mental tests do not predict a lower subsequent performance for minorities than the performance that in fact emerges. In terms of logic and evidence, the predictive validity of mental tests is the issue least open to debate. On this question, Murray and Herrnstein are most clearly and completely correct.
In thus demolishing the foundation underlying such practices as double standards in college admissions and "race-norming" of employment tests, The Bell Curve threatens both a whole generation of social policies and the careers of those who promote them. To those committed to such policies, this may be at least as bad as the authors remaining "agnostic" (as Herrnstein and Murray put it) on the question of whether black-white IQ differences are genetic in origin.
On some other issues, however, the arguments and conclusions of The Bell Curve are much more open to dispute. Yet critics have largely overlooked these disputable points, while concentrating their attacks on either the unassailable conclusions of the book or the presumed bad intentions of the authors.
Herrnstein and Murray do an excellent job of exposing the flaws in the argument that the tests are culturally biased, showing that the largest black-white differences appear not on questions that presuppose middle-class vocabulary or experiences but on abstract questions, such as those involving spatial-perceptual ability. Their further conclusion, however, that this "phenomenon seems peculiarly concentrated in comparisons of ethnic groups" is simply wrong.
When European immigrant groups in the United States scored below the national average on mental tests, they scored lowest on the abstract parts of those tests. So did white mountaineer children in the United States tested back in the early 1930s. So did canal boat children in Britain, and so did rural British children compared to their urban counterparts, at a time before Britain had any significant non-white population. So did Gaelic-speaking children as compared to English-speaking children in the Hebrides Islands. This is neither a racial nor an ethnic peculiarity. It is a characteristic found among low-scoring groups of European as well as African ancestry.
In short, groups outside the cultural mainstream of contemporary Western society tend to do their worst on abstract questions, whatever their race might be. But to call this cultural "bias" is misleading, particularly if it suggests that these groups' "real" ability will produce better results than their test scores would indicate. That non sequitur was destroyed empirically long before Herrnstein and Murray sat down to write The Bell Curve. Whatever innate potential various groups may have, what they actually do will be done within some particular culture. That intractable reality cannot be circumvented by devising "culture-free" tests, for such tests would also be purpose-free in a world where there is no culture-free society.
Perhaps the strongest evidence against a genetic basis for intergroup differences in IQ is that the average level of mental test performance has changed very significantly for whole populations over time and, moreover, particular ethnic groups within the population have changed their relative positions during a period when there was very little intermarriage to change the genetic makeup of these groups.
While The Bell Curve cites the work of James R. Flynn, who found substantial increases in mental test performances from one generation to the next in a number of countries around the world, the authors seem not to acknowledge the devastating implications of that finding for the genetic theory of intergroup differences, or for their own reiteration of long-standing claims that the higher fertility of low-IQ groups implies a declining national IQ level. This latter claim is indeed logically consistent with the assumption that genetics is a major factor in interracial differences in IQ scores. But ultimately this too is an empirical issue--and empirical evidence has likewise refuted the claim that IQ test performance would decline over time.
Even before Professor Flynn's studies, mental test results from American soldiers tested in World War II showed that their performances on these tests were higher than the performances of American soldiers in World War I by the equivalent of about 12 IQ points. Perhaps the most dramatic changes were those in the mental test performances of Jews in the United States. The results of World War I mental tests conducted among American soldiers born in Russia--the great majority of whom were Jews--showed such low scores as to cause Carl Brigham, creator of the Scholastic Aptitude Test, to declare that these results "disprove the popular belief that the Jew is highly intelligent." Within a decade, however, Jews in the United States were scoring above the national average on mental tests, and the data in The Bell Curve indicate that they are now far above the national average in IQ.
Strangely, Herrnstein and Murray refer to "folklore" that "Jews and other immigrant groups were thought to be below average in intelligence." It was neither folklore nor anything as subjective as thoughts. It was based on hard data, as hard as any data in The Bell Curve. These groups repeatedly tested below average on the mental tests of the World War I era, both in the army and in civilian life. For Jews, it is clear that later tests showed radically different results--during an era when there was very little intermarriage to change the genetic makeup of American Jews.
My own research of twenty years ago showed that the IQs of both Italian-Americans and Polish-Americans also rose substantially over a period of decades. Unfortunately, there are many statistical problems with these particular data, growing out of the conditions under which they were collected. However, while my data could never be used to compare the IQs of Polish and Italian children, whose IQ scores came from different schools, nevertheless the close similarity of their general patterns of IQ scores rising over time seems indicative--especially since it follows the rising patterns found among Jews and among American soldiers in general between the two world wars, as well as rising IQ scores in other countries around the world.
The implications of such rising patterns of mental test performance are devastating to the central hypothesis of those who have long expressed the same fear as Herrnstein and Murray, that the greater fertility of low-IQ groups would lower the national (and international) IQ over time. The logic of their argument seems so clear and compelling that the opposite empirical result should be considered a refutation of the assumptions behind that logic.
One of the reasons why widespread improvements in results on IQ tests have received so little attention is that these tests have been normed to produce an average IQ of 100, regardless of how many questions are answered correctly. Like "race-norming" today, such generation-norming, as it were, produces a wholly fictitious equality concealing very real and very consequential differences. If a man who scores 100 on an IQ test today is answering more questions correctly than his grandfather with the same IQ answered two generations ago, then someone else who answers the same number of questions correctly today as this man's grandfather answered two generations ago may have an IQ of 85.
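To make the arithmetic concrete, here is a minimal sketch of such generation-norming. The raw-score numbers are invented purely for illustration and assume, as the passage suggests, a gain of roughly one standard deviation in raw performance over two generations:

    # Hypothetical illustration of generation-norming (the raw-score numbers
    # are assumptions, not data from The Bell Curve). Each generation is
    # renormed so that its own mean raw score is defined as IQ 100, SD 15.
    grandfather_mean_raw = 30   # assumed mean raw score two generations ago
    today_mean_raw = 40         # assumed mean raw score today
    raw_sd = 10                 # assumed raw-score standard deviation

    def iq(raw, cohort_mean, cohort_sd=raw_sd):
        """Convert a raw score to an IQ using a given cohort's norms."""
        return 100 + 15 * (raw - cohort_mean) / cohort_sd

    print(iq(grandfather_mean_raw, grandfather_mean_raw))  # 100.0 by his own cohort's norms
    print(iq(grandfather_mean_raw, today_mean_raw))        # 85.0 by today's norms

The same raw performance earns an IQ of 100 against the older norms and 85 against today's, which is exactly the concealment described above.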
Herrnstein and Murray openly acknowledge this rise in IQ and christen it "the Flynn effect," in honor of Professor Flynn, who discovered it. But they seem not to see how crucially it undermines the case for a genetic explanation of interracial IQ differences. They say:
The national averages have in fact changed by amounts that are comparable to the fifteen or so IQ points separating blacks and whites in America. To put it another way, on the average, whites today differ from whites, say, two generations ago as much as whites today differ from blacks today. Given their size and speed, the shifts in time necessarily have been due more to changes in the environment than to changes in the genes.
While this open presentation of evidence against the genetic basis of interracial IQ differences is admirable, the failure to draw the logical inference seems puzzling. Blacks today are just as racially different from whites of two generations ago as they are from whites today. Yet the data suggest that the number of questions that blacks answer correctly on IQ tests today is very similar to the number answered correctly by past generations of whites. If race A differs from race B in IQ, and two generations of race A differ from each other by the same amount, where is the logic in suggesting that the IQ differences are even partly racial?
Herrnstein and Murray do not address this question, but instead shift to a discussion of public policy:
Couldn't the mean of blacks move 15 points as well through environmental changes? There seems no reason why not--but also no reason to believe that white and Asian means can be made to stand still while the Flynn effect works its magic.
But the issue is not solely one of either predicting or controlling the future. It is a question of the validity of the conclusion that differences between genetically different groups are due to those genetic differences, whether in whole or in part. When any factor differs as much from A1 to A2 as it does from A2 to B2, why should one conclude that this factor is due to the difference between A in general and B in general? That possibility is not precluded by the evidence, but neither does the evidence point in that direction.(2.)
A remarkable phenomenon commented on in the Moynihan report of thirty years ago goes unnoticed in The Bell Curve--the prevalence of females among blacks who score high on mental tests. Others who have done studies of high-IQ blacks have found several times as many females as males above the 120 IQ level. Since black males and black females have the same genetic inheritance, this substantial disparity must have some other roots, especially since it is not found in studies of high-IQ individuals in the general society, such as the famous Terman studies, which followed high-IQ children into adulthood and later life. If IQ differences of this magnitude can occur with no genetic difference at all, then it is more than mere speculation to say that some unusual environmental effects must be at work among blacks. However, these environmental effects need not be limited to blacks, for other low-IQ groups of European or other ancestries have likewise tended to have females over-represented among their higher scorers, even though the Terman studies of the general population found no such patterns.
One possibility is that females are more resistant to bad environmental conditions, as some other studies suggest. In any event, large sexual disparities in high-IQ individuals where there are no genetic or socioeconomic differences present a challenge to both the Herrnstein- Murray thesis and most of their critics.
Black males and black females are not the only groups to have significant IQ differences without any genetic differences. Identical twins with significantly different birthweights also have IQ differences, with the heavier twin averaging nearly 9 points higher IQ than the lighter one.(3.) This effect is not found where the lighter twin weighs at least six and a half pounds, suggesting that deprivation of nutrition must reach some threshold level before it has a permanent effect on the brain during its crucial early development.
Perhaps the most intellectually troubling aspect of The Bell Curve is the authors' uncritical approach to statistical correlations. One of the first things taught in introductory statistics is that correlation is not causation. It is also one of the first things forgotten, and one of the most widely ignored facts in public policy research. The statistical term "multicollinearity," dealing with spurious correlations, appears only once in this massive book.
Multicollinearity refers to the fact that many variables are highly correlated with one another, so that it is very easy to believe that a certain result comes from variable A, when in fact it is due to variable Z, with which A happens to be correlated. In real life, innumerable factors go together. An example I liked to use in class when teaching economics involved a study showing that economists with only a bachelor's degree had higher incomes than economists with a master's degree and that these in turn had higher incomes than economists with Ph.D.'s. The implication that more education in economics leads to lower incomes would lead me to speculate as to how much money it was costing a student just to be enrolled in my course. In this case, when other variables were taken into account, these spurious correlations disappeared.(4.) In many other cases, however, variables such as cultural influences cannot even be quantified, much less have their effects tested statistically.
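A small simulation illustrates how such a spurious correlation can arise and then vanish once the confounding variable is held constant. The variable names and numbers below are hypothetical, chosen only to mimic the economists example:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 5000

    # Hypothetical setup: years of experience drive income, and economists
    # with more graduate schooling entered the field later, so they have
    # less experience (education and experience are negatively correlated).
    experience = rng.uniform(0, 30, n)
    education = 16 + 6 * (1 - experience / 30) + rng.normal(0, 1, n)
    income = 40 + 2.0 * experience + 0.5 * education + rng.normal(0, 5, n)

    # Naive correlation: more education appears to mean lower income.
    print(np.corrcoef(education, income)[0, 1])            # negative

    # Holding experience constant (multiple regression) removes the artifact.
    X = np.column_stack([np.ones(n), education, experience])
    coef, *_ = np.linalg.lstsq(X, income, rcond=None)
    print(coef[1])                                         # roughly +0.5

The simple correlation blames education, but once the correlated variable (experience) is taken into account, education's true positive effect reappears, just as in the economists example described above.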
The Bell Curve is really three books in one. It is a study of the general effects of IQ levels on the behavior and performance of people in general in a wide range of endeavors. Here it is on its most solid ground. It is also an attempt to understand the causes and social implications of IQ differences among ethnic groups. Here it is much more successful at analyzing the social implications than the causes; as the authors say, "it matters little whether the genes are involved at all." Finally, it is a statement of grave concerns for the future of American society and a set of proposals as to how public policy should proceed in matters of education and social welfare programs. These concerns need voicing, even if they are not always compelling. One chance in five of disaster is not to be ignored. That is, after all, greater than the chance of disaster in playing Russian roulette.
In one sense, the issues are too important to ignore. In another sense, the difference between what Herrnstein and Murray said and what others believe is much smaller than the latter seem to think. The notion that "genes are destiny" is one found among some of the more shrill critics, but not in The Bell Curve itself. Nor is race a kind of intellectual glass ceiling for individuals. As the authors write:
It should be no surprise to see (as one does every day) blacks functioning at high levels in every intellectually challenging field.
Critics who insist on arguing that we are talking about an intellectual glass ceiling should recognize that this is their own straw man, not something from The Bell Curve. And if they refuse to recognize this, then we should recognize these critics as demagogues in the business of scavenging for grievances. The Bell Curve deserves critical attention, not public smearing or uncritical private acceptance.
(1.) Editor's note: Our own review of The Bell Curve, by Christopher Caldwell, appeared last month.
(2.) It is widely acknowledged that height is heavily influenced by genes, and it is not controversial that races differ in height because of these genetic differences. Yet predictions of a decline in national height over time, because of greater fertility in groups of shorter stature, were likewise confounded by an increase in the national height. No one, rightly, regards this as a refutation of the belief that height is greatly influenced by genes and differs from race to race for genetic reasons. Similarly, rising IQs over time do not refute the belief that races differ in IQ for genetic reasons, though they ought at least to raise a question about that belief. The parallel breaks down, however, when we realize that height can be measured directly, whereas innate mental potential cannot be; it must instead be inferred from what would have happened in the absence of environmental differences.
(3.) Miles D. Storfer, Intelligence and Giftedness: The Contributions of Heredity and Early Environment (San Francisco: Jossey-Bass Publishers, 1990), p. 13.
(4.) Because a postgraduate degree was usually required to be an economist, those economists with only a bachelor's degree tended to have entered the profession before this requirement became common. That is, they tended to be older and have more experience, with this experience being more likely to have been in the more lucrative business world rather than in academia.
Commentary
August 1998
IQ Since "The Bell Curve"
Christopher F. Chabris
THIS PAST January, Governor Zell Miller of Georgia asked his legislature for enough money to give a cassette or CD of classical music to every newborn child in the state. The governor cited scientific evidence to support this unusual budget request. "There's even a study," he declared in his State of the State address, "that showed that after college students listened to a Mozart piano sonata for ten minutes, their IQ scores increased by nine points." And he added: "Some argue that it didn't last, but no one doubts that listening to music, especially at a very early age, affects the spatial-temporal reasoning that underlies math, engineering, and chess."
The so-called "Mozart effect" is one of the most publicized recent examples of our ongoing preoccupation with intelligence, a subject that not only refuses to go away but continues to raise whirlwinds of controversy. The largest such controversy, of course, surrounds The Bell Curve (1994), by the late Richard J. Herrnstein and Charles Murray. A mountain of essays and books purporting to refute that work and its conclusions grows and grows to this day. But now we also have the magnum opus of Arthur Jensen,1 a leading figure in IQ research and, like Herrnstein and Murray, a favorite target of academic liberals, as well as a posthumous volume by another leading IQ researcher, Hans Eysenck.2 So it is a good moment to look again at what we know, what we do not know, and what we think we know about this vexed subject.
IN The Bell Curve, Herrnstein and Murray set out to prove that American society was becoming increasingly meritocratic, in the sense that wealth and other positive social outcomes were being distributed more and more according to people's intelligence and less and less according to their social backgrounds. Furthermore, to the extent that intelligence was not subject to easy environmental control, but was instead difficult to modify and even in part inherited, genetic differences among individuals, Herrnstein and Murray posited, would contribute significantly to their futures.
The evidence for this thesis came largely from an analysis of data compiled in the National Longitudinal Survey of Youth (NLSY), an ongoing federal project that tested over 10,000 Americans in 1980, with follow-up interviews regularly thereafter. Each participant completed the Armed Forces Qualification Test (AFQT)--which, like any diverse test of mental ability, can be used as a measure of intelligence--and was then evaluated for subsequent social outcomes (including high-school graduation, level of income, likelihood of being in jail, likelihood of getting divorced, and so forth). As a rule, a person's intelligence turned out to predict such outcomes more strongly than did the socioeconomic status of his parents. This relationship held for all ethnic groups; indeed, when intelligence was statistically controlled, many "outcome" differences among ethnic groups vanished.
Herrnstein, a professor of psychology at Harvard with an impeccable reputation for scientific integrity, died of cancer just a week before The Bell Curve arrived in bookstores. This in itself may have had something to do with the frenzy of the public response. Had Herrnstein lived to participate in the debate, critics might have found the book harder to malign than it became when Murray, whose training was not in psychology but in sociology, was left to promote and defend it by himself.
Not that Murray, the author of Losing Ground (1984) and a vocal critic of the liberal welfare state, failed to do so energetically. But his lack of credentials as a hard scientist, and his overabundant credentials as a scourge of liberalism, made him a tempting target for an attack that was itself motivated as much by political as by scientific differences, and that was almost entirely focused on a side-issue in the book. That side-issue was differences in intelligence not among individuals but among groups, specifically between whites and blacks, and the degree to which those differences might or might not be explained genetically. So heated, and so partisan, was the furor at its peak that even President Clinton was asked about the book at a press conference. (He had not read it, but disagreed with it nonetheless.)
But the overreaction to what was in essence a moderate and closely reasoned book would also not have surprised Herrnstein in the least. If anything, it was a replay--actually, a more civilized replay--of what had happened to him after he published his first article on intelligence in the Atlantic in 1971. That article, entitled "IQ," besides bringing to public attention several points raised by Arthur Jensen in a 1969 paper in the Harvard Educational Review, offered a more speculative version of the argument that would be fleshed out and documented with NLSY data in The Bell Curve 23 years later.
Just as with The Bell Curve, only a small portion of Herrnstein's 1971 article dealt with differences among groups, and only a portion of that portion dealt with possible genetic influences on those differences; and, just as with The Bell Curve, these were the passages that received the greatest attention. In his article, Herrnstein concluded that "although there are scraps of evidence for a genetic component in the black-white difference, the overwhelming case is for believing that American blacks have been at an environmental disadvantage" (emphasis added). This did not stop one Nathan Hare from writing in response that "one would think that the pseudo-scientific generalizations surrounding race and IQ had long been put to rest. But the ghoulish die hard." Nor did it keep students at Harvard and elsewhere from putting up posters accusing Herrnstein of racism and calling him "pigeon-man" (in reference to his animal-learning research). His lectures were filled with protesters, and his speeches at other universities were canceled, held under police guard, or aborted with last-second, back-door escapes into unmarked vehicles. Death threats were made.
PEOPLE OFTEN react most defensively when challenged not on their firmly held beliefs but on beliefs they wish were true but suspect at some level to be false. This is the psychology behind the controversy that ensued after "IQ" in 1971 and The Bell Curve in 1994.3 On each occasion intemperate articles were written (some by the same people, barely updated), and the most strident positions were taken by those least qualified to comment on the science.4
By now, five major books have been published in direct response to The Bell Curve. Two of them, though critical, are within the bounds of reasonable discourse. Thus, Intelligence, Genes, and Success (1997), edited by four professors from the University of Pittsburgh who seem opposed to the book's public-policy conclusions, offers a fairly balanced range of scholarly views. On the sensitive question of heritability, what is especially notable is that the argument takes place mainly at the margins; although some of the book's contributors contend that the heritability of intelligence falls within a range lower than the 40-80 percent given by Herrnstein and Murray, that range is in every case much greater than zero.
A tougher line is taken in Inequality by Design: Cracking the Bell Curve Myth (1996), written by six Berkeley sociologists. This book addresses Herrnstein and Murray's main argument--that intelligence is an important determinant of social outcomes in America. To their credit, the authors do some old-fashioned hard work, reanalyzing the NLSY data and even making one correction that strengthens The Bell Curve's conclusions. But their main effort is to show, by adding variables other than parental socioeconomic status to the mix of factors predicting outcomes, that intelligence is not as important as The Bell Curve claims. Murray has since responded to this argument in a pamphlet entitled Income Inequality and IQ (published by the American Enterprise Institute); there, by considering only the NLSY data from sibling groups, within which parental background is by definition equal, he is able to show that intelligence still has very strong effects.
The conclusion one may reasonably draw from these two books, and from Murray's response, is that while intelligence may matter more or less than family background, it certainly matters, and that if it is not entirely heritable, it is heritable in some degree. It is useful to bear this in mind when considering the other three books, for one would scarcely know from reading them that such a view has any reputable backing at all. Though a few chapters in Measured Lies (1996), the most vituperative and scientifically irrelevant of the five volumes under consideration, attempt data-based argumentation, most settle for sarcasm, self-righteousness, and name-calling. And then there are The Bell Curve Debate and The Bell Curve Wars (both published in 1995); the former is an anthology of historical documents and reviews, mostly negative, which the editors rightly claim represent the general trend among responses to Herrnstein and Murray's book; the latter is a set of essays, also mostly negative, that originally appeared in a single issue of the New Republic when The Bell Curve was first published, with a few similar pieces added for effect.
According to its back cover, The Bell Curve Wars "dismantles the alleged scientific foundations . . . of this incendiary book." Since, however, the vast majority of those commenting on The Bell Curve in the anthology's pages have little or no scientific authority, whoever wrote those last words probably had in mind the single entry by the Harvard zoology professor Stephen Jay Gould. That essay, entitled "Curveball," was originally published in the New Yorker and appears both in The Bell Curve Wars and The Bell Curve Debate, occupying the first position in each. In it, Gould repeats many of the same accusations of racism and attributions of political motive that he made in his 1981 book, The Mismeasure of Man, written in response to the earlier controversy sparked by Jensen and Herrnstein.
WITHIN THE social-science community and the academic world in general, Gould's critique has been widely accepted as the canonical demonstration that the concepts of intelligence and its heritability are at best nonscientific and at worst racist and evil. (For instance, all of the contributors to Measured Lies who cite Gould's essay do so approvingly, if we count the one who asserts that it does not go far enough.) Indeed, so well has The Mismeasure of Man endured that in 1996 its publisher reissued it with a new introduction and appendices, including the ubiquitous "Curveball," but left the main text essentially unrevised.
Gould charges that the craniometrists of the 19th century, and later intelligence researchers as well, operated from racist assumptions, and implies that on those grounds their work should be ignored or even suppressed. Insofar as the charge is meant to include figures like Herrnstein and Murray, it is absurd as well as malicious. But even in those cases in the past in which racist assumptions can indeed be demonstrated, the proof of the pudding remains in the eating, not in the beliefs of the chef. Useful science can proceed from all sorts of predispositions; nor--it seems necessary to add--do the predispositions of scientists always point in the same direction, especially where discussions of human nature are concerned.
Before World War II, for example, the anthropologist Margaret Mead, presumably basing herself on her observations of non-Western cultures, wrote: "We are forced to conclude that human nature is almost unbelievably malleable, responding accurately and contrastingly to contrasting cultural conditions." Later, Mead admitted that what forced this conclusion was not the data she had collected but the political goals she espoused: "We knew how politically loaded discussions of inborn differences could become. . . . [I]t seemed clear to us that [their] further study . . . would have to wait upon less troubled times." As the shoot-the-messenger responses of Gould and others show, the times may still be too troubled for the truth.
But what about Gould's main scientific contention--that, as he puts it in his 1996 introduction to The Mismeasure of Man, "the theory of unitary, innate, linearly rankable intelligence" is full of "fallacies"?
The theory that Gould is attacking usually goes under the name of general intelligence. Its advocates, practitioners of the hybrid psychological-statistical discipline known as psychometrics, argue simply that while individuals differ in their abilities in a wide range of intellectual realms, a relationship exists among these variations that can be attributed to a common factor. This common factor is what the psychometricians label general intelligence, or g.
A brief example will illustrate the evidence they adduce for this proposition. Suppose a group of students takes a set of ten timed mental-ability tests, five based on verbal materials (such as choosing antonyms) and five based on spatial materials (such as drawing paths through mazes). Each student will receive ten scores, and each student will have a unique profile of scores, higher on some tests than others.
Now suppose we correlate mathematically the students' scores on the five verbal tests. We will probably find them positively, though not perfectly, correlated--that is, the score on one will predict reasonably well the scores on the others. With the aid of a statistical procedure known as factor analysis, we can examine the pattern of these positive correlations and infer that they can be explained by the existence of a common factor, the most logical candidate being the "verbal ability" of the students who took the tests. Analogous results would likely occur if we factor-analyzed the set of five spatial tests.
What if we combined all ten tests in a single analysis, looking at all the possible correlations? Most likely we would find separate verbal and spatial factors at work. But those factors themselves will almost always be correlated. A superordinate, or "general," factor--g--can then be extracted to account for the commonalities across all the tests, though this factor will be revealed more by some tests than by others; such tests, known as "highly g-loaded," are taken as especially good measures of general intelligence.
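A rough simulation can make this pattern concrete. The loadings, test structure, and sample size below are invented for illustration, and a principal-components shortcut stands in for a full factor analysis; the point is only that when every test shares a common factor, the largest factor extracted from the full correlation matrix loads on all ten tests in the same direction:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 2000                           # simulated students

    # Hypothetical model: every test loads on a general factor g; verbal
    # tests also share a verbal factor, spatial tests a spatial factor.
    g = rng.normal(size=n)
    verbal_f = rng.normal(size=n)
    spatial_f = rng.normal(size=n)

    tests = [0.6 * g + 0.5 * verbal_f + rng.normal(0, 0.6, n) for _ in range(5)]
    tests += [0.6 * g + 0.5 * spatial_f + rng.normal(0, 0.6, n) for _ in range(5)]
    scores = np.column_stack(tests)

    R = np.corrcoef(scores, rowvar=False)          # 10 x 10 correlation matrix
    eigvals, eigvecs = np.linalg.eigh(R)           # eigenvalues in ascending order
    print(round(eigvals[-1] / eigvals.sum(), 2))   # share of variance on the largest factor
    print(np.round(eigvecs[:, -1], 2))             # all ten tests load in the same direction

Here the first factor accounts for a large share of the total variation and is revealed by every test, which is the sense in which a test can be more or less "g-loaded."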
TO THE extent that it is not simply political, the debate that followed The Bell Curve and "IQ," and that lies at the heart of Gould's critique in The Mismeasure of Man, is over the very existence and coherence of general intelligence. Each side has made the same points over and over, and each side believes it has refuted the other side's arguments. The reason this is so is that the two sides proceed according to different definitions of intelligence.
The psychometric camp, which includes Herrnstein and Murray, Jensen, Eysenck, John Carroll (whose 1993 treatise, Human Cognitive Abilities, offers the most extensive factor analysis of mental tests), and most psychologists who have traditionally studied the topic, holds to a conception of intelligence that closely matches what common sense and the dictionary tell us the term means. The opposing side, which sports a more eclectic set of disciplinary backgrounds and prides itself on a more sophisticated and inclusive perspective, divides human abilities into broad classes--logical, spatial, interpersonal, verbal, etc.--and labels each class an "intelligence." The two sides then proceed to talk past each other.
Scientists make bad dictionary writers and worse philosophers. Their main skills are in constructing experiments and generating explanations for what they observe. Neither of these endeavors requires agreement on what the words involved "mean" in any deep or absolute sense, only on ways of converting the elements of the theory at issue into operations that can be carried out in an experiment and repeated later if necessary. Measurement is the most important such operation; as Kelvin pointed out long ago, without a way to measure something it cannot be studied scientifically.
This is why the oft-repeated phrase, "intelligence is nothing more than what intelligence tests measure," is, as an objection, merely a tautology. The truth is that as long as intelligence can be reliably measured--it can be, with a variety of tests--and validly applied--it can be, to predict a variety of outcomes--it is intelligence. If we suddenly started calling it "cognitive ability," "cognitive efficiency," or even "the tendency to perform well on mental tests," it would still have the same scientific properties. Nothing about the natural world would change.
One way to test the schemes of the proponents of "multiple intelligences" would be to apply research techniques that might (or might not) suggest a role in them for general intelligence. But this is an exercise the advocates of multiple intelligences tend to rule out of consideration a priori. Thus, as Howard Gardner correctly notes in Frames of Mind (1983), there is good evidence that different parts of the brain are responsible for different abilities. However, when at a recent seminar a member of Gardner's research group was asked how abilities in the various intelligences are measured, the swift response was, "We don't measure them."
The reason is obvious: any reasonable system of measurement would produce a set of scores whose correlations could be calculated, and the pattern of those correlations would likely reveal a common factor--in other words, g--accounting for some fraction of the total variation. Gardner's theory is very popular in educational circles these days, as is the idea, espoused by Daniel Goleman in Emotional Intelligence (1995), that skill at managing one's own emotions and interpreting those of others is very valuable to social interaction and success in life. Surely both of these ideas are correct, as far as they go. But neither one of them addresses intelligence in a complete way.
ANOTHER CRITICISM of the notion of general intelligence is that it is based on factor analysis, an indirect procedure that deals with the structure of tests rather than the nature of the mind and brain. This is a point raised with special vehemence by Gould. In The g Factor: The Science of Mental Ability, Arthur Jensen shows that the objection is without foundation.5
The g Factor is a deep, scholarly work laden with hundreds of tables, graphs, and endnotes, some of them with tables and graphs of their own. It is balanced and comprehensive, summarizing virtually all the relevant studies on the nature of intelligence and demolishing most of the challenges and alternative explanations of the major findings. (It is not, however, an easy book for nonspecialists to read, which is why we are also fortunate to have Hans Eysenck's much more accessible and even entertaining Intelligence: The New Look.)
In refuting Gould's point, Jensen demonstrates that mental-test scores correlate not just with one another but with many measures of information-processing efficiency, including reaction time (how quickly you can press a button after a light flashes), inspection time (how long two separate line-segments must be displayed for you to judge accurately which is longer), and working-memory capacity (how many random items of information you can remember while doing something else). Jensen also reviews the many direct biological correlates of IQ, such as myopia (a very heritable condition), brain electrical activity, estimates of nerve-conduction velocity (the speed at which brain cells communicate with one another), and the brain's metabolism of glucose. Even brain size, the study of which is richly derided by Gould, has been found with modern imaging technology to correlate with IQ.
These chapters, among the most impressive in Jensen's book, put general intelligence as a psychological trait on a more solid foundation than is enjoyed by any other aspect of personality or behavior. They also speak persuasively to the issue of its heritability, the argument for which becomes more plausible to the extent that intelligence can be associated with biological correlates.
One can go farther. To Stephen Jay Gould and other critics, belief in the heritability of intelligence is inextricably--and fatally--linked to belief in g; destroy the arguments for one and you have destroyed the arguments for the other. But as Kevin Korb pointed out in a reply to Gould in 1994, and as Jensen affirms here, the g factor and the heritability of intelligence are independent concepts: either hypothesis could be true with the other being false. In some alternate reality, intelligence could be determined by wholly uncorrelated factors, or for that matter by wholly environmental (i.e., nonheritable) factors. It is simply less of a stretch to imagine that a general factor both exists and is somewhat heritable, since, as Jensen shows, this combination describes our own reality.
STILL ANOTHER line of attack used by the detractors of g is to point to studies allegedly showing that intelligence is easy to change (and, therefore, a meaningless concept). Arthur Jensen raised a firestorm three decades ago when he asked, "How much can we raise IQ and scholastic achievement?" and answered: not much. This brings us back to the Mozart effect, which purports to do in ten minutes what years of intensive educational interventions often fail to accomplish.
The Mozart effect was first shown in a study by Frances Rauscher, Gordon Shaw, and Katherine Ky that was reported in the British journal Nature in 1993. It is difficult to determine their experimental procedure with precision--their article was less than a page in length--but the essentials appear to be as follows. Thirty-six college students performed three spatial-ability subtests from the most recent version of the Stanford-Binet intelligence test. Before one of the tests, the students spent ten minutes in silence; before another, they listened to ten minutes of "progressive-relaxation" instructions; and before still another, they listened to ten minutes of Mozart's Sonata for Two Pianos in D Major (K. 448). The subjects performed the tests in different orders, and each test was paired with equal frequency against each listening option. The results, when converted to the scale of IQ scores: 110 for silence, 111 for relaxation, and 119 for Mozart.
"Mozart makes you smarter!" said the press releases as new classical CD's were rushed to market. A self-help entrepreneur named Don Campbell trademarked the phrase "The Mozart Effect," published a book by the same name, and began selling cassettes and CD's of his own, including versions designed specially for children. Frances Rauscher testified before a congressional committee and gave many press interviews.
What was wrong with this picture? The article in Nature did not give separate scores for each of the three Stanford-Binet tasks (necessary for comparative purposes), and it used dubious statistical procedures in suggesting that listening to Mozart enhanced overall "spatial IQ" or "abstract reasoning." Nor did the researchers analyze separately the first task done by each subject, to rule out the possibility that prior conditions may have influenced the Mozart score. Finally, they claimed that the effect lasted for only ten to fifteen minutes, but gave no direct evidence; since the subjects were apparently tested only immediately after each listening episode, there was no way to see how this interval was calculated.
IN AN attempt to reproduce the finding that classical music enhances "abstract reasoning," Joan Newman and her colleagues performed a simple experiment: each of three separate groups comprising at least 36 subjects completed two separate subsets of Raven's Matrices Test (a good measure of g) before and after listening to either silence, relaxation instructions, or the Mozart-effect sonata. All three groups improved from the first test to the second, but by the same amount; in other words, Mozart was of no particular help. In another experiment along the same lines, a group led by Kenneth Steele asked subjects to listen to ever-longer strings of digits and repeat them backward; it, too, found no benefit from prior exposure to Mozart. Other independent tests reported similar failures or equivocal results.
In response to these experiments, Rauscher and Shaw have considerably narrowed the scope of their original findings. They now concede that the post-Mozart increase in spatial performance occurred on just one of the three Stanford-Binet tasks, while on the others, varying the listening condition made no difference. According to their revised estimate, only "spatiotemporal" tasks, which require the transformation of visualized images over time, are affected by complex music, not spatial ability or reasoning in general.
Unfortunately, however, neither Nature nor any journal of similar stature has given space to the follow-up experiments, most of which have been reported in Perceptual and Motor Skills or other low-prestige journals that many psychologists never read. And the media have of course moved on, leaving the babies of Georgia with state-sponsored gifts and the public with the vague idea that if ten minutes of music can "make you smarter," then IQ cannot signify very much.
Similarly feeding this mistaken impression are other recent examples of brief treatments affecting performance either negatively or positively on IQ-type tests. Thus, the researchers Claude Steele and Joshua Aronson told one group of black undergraduates at Stanford that a difficult verbal test would diagnose their abilities and limitations, and another group that their answers would be used only for research on verbal processing. In four separate experiments, students did worse under the former conditions than under the latter. Analogous results have been obtained with Asian female students in math tests: stressing to them that the test measures the abilities of their sex reduces their scores (women typically do worse than men in math), but stressing that it measures the abilities of their ethnic group increases their scores (Asians typically do better than other groups). But as Jensen points out, in such cases we are dealing with a stereotype about group differences that serves to increase or decrease test anxiety. That performance goes down when anxiety gets too high is a common enough finding in testing research, and says nothing about g.
What all these experiments do illustrate is that the human brain is a dynamic system whose functioning can change quite quickly. But this is not the same thing as changing intelligence itself. A few weeks of Prozac or another modern antidepressant can radically alter a person's behavior, but we still accept that his basic identity has not changed--he is still the man we knew before. Intelligence, too, is a stable characteristic of a person's behavior across a wide range of situations. He will be able to perform a task much better in one context than in another, with special training than without; but he is still the same person, and his intelligence is also still the same. Although the Mozart effect was promoted as though it were bad news for The Bell Curve and IQ, it is not.
AND NEITHER, finally, is the much-talked-about "Flynn effect." Over the past 50 years, the average score on intelligence tests has risen about three points per decade. This means that we are all getting smarter--indeed, the average adult of 1998 is, psychometrically at least, smarter than 84 percent of the population was in 1948.
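The 84-percent figure follows directly from the conventional IQ scale (mean 100, standard deviation 15) and the assumption of a normal distribution; a quick check:

    import math

    gain = 3 * 5                       # about 3 points per decade over 5 decades = 15 points
    z = gain / 15                      # one standard deviation on the IQ scale
    percentile = 0.5 * (1 + math.erf(z / math.sqrt(2)))   # standard normal CDF
    print(round(percentile, 2))        # ~0.84

An average score today thus sits roughly one standard deviation above the 1948 mean, that is, at about the 84th percentile of the 1948 distribution.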
The Flynn effect is named after James Flynn, who has been studying it for over fifteen years (although the phenomenon had been noted as early as the 1930's). In a chapter of a new book, The Rising Curve: Long-term Gains in IQ and Related Measures,6 Flynn notes that gains are occurring steadily in every country sampled, mainly in the West and the industrialized parts of Asia, though their size and specific nature vary from case to case. He believes the start of the increases coincided with industrialization in the 19th century, though the data are of course less reliable the farther back one goes. What he does not know is why the gains have been occurring, and the other contributors to The Rising Curve can offer only tentative theories at best.
Psychologists, like all scientists, prefer to test their theories with controlled experiments, but such experiments cannot be performed when the phenomenon to be explained is occurring throughout the world continuously over time. The difficulty in comparing times is that many things have changed with the times: the items on IQ tests are different; education is different; nutrition is better; airplanes, cars, radio, television, movies, computers, and the Internet have been invented; society has become more permissive and also more rewarding of risk-taking; testing is more widespread and people are more accustomed to being tested; birth rates are lower; and so on. Encompassing all of these time-correlated variables, the change in what might be called our cognitive environment has been simply tremendous over the past 150 years. The most relevant factors here are probably better nutrition--a topic Eysenck studied at the end of his career--plus greater, more diverse, and more complex stimulation of the brain by our everyday experiences.
Evidence of such a dramatic environmental effect on IQ scores should reinforce skepticism concerning a genetic basis for group differences. But, in any case, psychometric theory makes no claims about average or absolute levels of intelligence within or between populations, and behavioral genetics allows for complex environmental influences on traits that are still significantly heritable. And so, again contrary to popular belief, the concept of general intelligence remains as sound and as meaningful as ever, Flynn effect or no.
HAVING WITHSTOOD so many attacks, will the psychometric study of intelligence survive? Alas, not necessarily. In a pattern reminiscent of an earlier episode in the annals of modern psychology, the impact of Stephen Jay Gould's critique has been reinforced by the lack of a forceful response to it by psychometricians themselves, leaving the impression even within psychology at large that general intelligence has been routed.
Just as Gould, a paleontologist, has chided psychologists for misunderstanding genetics, so, in a review of B.F. Skinner's Verbal Behavior in 1959, the linguist Noam Chomsky chided behavioral psychologists for misunderstanding language. Like Gould, who has caricatured and ridiculed the notion of general intelligence and the factor analysis used to document it, Chomsky caricatured the tenets and methods of behaviorism, which argued that the task of psychology is to measure only behavior and to explain it only in terms of environmental and genetic causes, without referring to what goes on inside the head.
It took eleven years before a leading behaviorist, Kenneth MacCorquodale, answered Chomsky; the reason none of his colleagues had bothered to reply earlier, he explained, was that they found Chomsky's arguments simply uninformed and irrelevant to the work they did. In the meantime, however, Chomsky's review was widely read and subscribed to by the new wave of cognitive psychologists who were building a framework for psychology that remains dominant today.
Gould's book seems to have had a similar effect on young intelligence researchers. Although Jensen and several others did review The Mismeasure of Man very negatively at the time it appeared, like MacCorquodale they replied in obscure journals read mainly by their own supporters. Thanks in part to Gould's influence (and, of course, to the outrage directed against Jensen and Herrnstein in the 70's), the most popular new theories in the 1980's came to minimize the role of general intellectual ability in favor of other factors, to posit multiple "intelligences," and to give little attention to heritability. Now Eysenck, one of the heroes of psychometrics, and Herrnstein, one of its leading supporters, have died, Jensen and Carroll are approaching the end of their careers, and the psychometricians risk going into the same sort of extended bankruptcy proceedings as the behaviorists before them.
The great irony is that this is occurring just as the field of behavioral genetics has begun to thrive as never before. One of its most striking successes has been to document, through the convergence of numerous family and twin studies, the heritability of intelligence. Now researchers have been able to identify a specific gene whose variations are associated with differences in intelligence. This is a crucial step in building a complete theory of intelligence that can explain individual differences in biological as well as psychological terms. But the new generation of cognitive scientists, who focus on characteristics of the mind and brain that are common to everyone, are not too interested in differences among people, while the psychometricians, who stand to be vindicated, have been sidelined on their own playing field.
THE MOST basic claim put forth by Herrnstein and Murray was that smart people do better than dumb people. What is so troubling about that? We rarely encounter an argument over the fact that beautiful people do better than ugly people, or tall people better than short ones, though each of these propositions is also true. Is an intellectual meritocracy less just or moral than a physical one?
The answer, unfortunately, is that whenever intelligence is said, "race" is heard; whenever race is said, "genetics" is heard; and whenever genetics is said, "inferiority" is heard--even though these issues are not necessarily connected in any way. When I mentioned to friends that I was writing an article on intelligence, many were surprised, and some wanted to know why. I can only imagine how Herrnstein was treated by his colleagues during the last 25 years of his life. The public protests may have bothered him less than the fact that people in his own community never thought of him in the same way again: he had disturbed a pleasant conversation by bringing up unpleasant facts.
Since The Bell Curve, intelligence is stronger than ever as a scientific concept, but as unwelcome as ever as an issue in polite society. It would be reassuring to think that the next twenty years, which promise to be the heyday of behavioral genetics, will change this state of affairs. But if the past is any guide, many more phony controversies lie ahead.
CHRISTOPHER F. CHABRIS, here making his first appearance in COMMENTARY, is a Ph.D. candidate at Harvard specializing in cognitive neuroscience. He is at work on a book about the chess-playing machine Deep Blue and artificial intelligence.
PIANO, n. A parlor utensil for subduing the impenitent visitor. It is operated by pressing the keys of the machine and the spirits of the audience. [The Devil's Dictionary A.B.] |
The Bell Curve: Intelligence and Class Structure in American Life, by Richard J. Herrnstein and Charles Murray. Free Press, 845 pp., $30.00.
Part I
I promised last time to try to say what is in this book, but my review is getting as long as the book itself, so I will give only Part I this time and continue next time.
INTRODUCTION
Tests to measure intelligence (called cognitive ability here) play a central role in this book. Thus, in their introduction, the authors discuss the history of and the controversies surrounding attempts to measure intelligence. Modern theory traces its beginnings to Spearman, who noticed that performances on tests attempting to measure intelligence were positively correlated. To explain this, he postulated the existence of a single variable, which he called g, representing a person's general intelligence. It is a quantity, like height or weight, that a person has and that varies from person to person. When you take a test meant to measure intelligence, your score is a weighted sum ag + bs + e of the factors g, s, and e, with g your general intelligence, s a measure of your ability on this particular kind of test, and e a random error. If you take several different tests, g is common to all of them and produces the positive correlations. The magnitude of a tells you how heavily the test is "loaded" with general intelligence -- the more the better. This simple model is consistent only with a very special class of correlation matrices (those of rank one) and so had to be generalized to include more than one common factor. This led to the development of factor analysis as a mathematical model for what is going on. It also led to the development of IQ tests to measure intelligence.
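For readers who want to see Spearman's model in action, the following sketch (in Python, with loadings, sample size, and error variance invented for illustration, not taken from the book) generates scores of the form ag + bs + e for several tests and confirms that the shared g produces positive correlations among all of them.

    # Illustrative sketch of Spearman's one-factor model (numbers are made up).
    import numpy as np

    rng = np.random.default_rng(0)
    n_people, n_tests = 5000, 6
    loadings = np.array([0.9, 0.8, 0.7, 0.6, 0.5, 0.4])    # the "a" for each test

    g = rng.standard_normal(n_people)                       # general ability
    specific = rng.standard_normal((n_people, n_tests))     # the "s" terms
    error = 0.3 * rng.standard_normal((n_people, n_tests))  # the "e" terms

    # score on test j = a_j * g + b_j * s_j + e_j, with b_j chosen so variances match
    b = np.sqrt(1 - loadings**2)
    scores = g[:, None] * loadings + specific * b + error

    corr = np.corrcoef(scores, rowvar=False)
    print(np.round(corr, 2))   # every off-diagonal entry is positive;
                               # tests with heavier g loadings correlate more strongly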
The controversies over the use of IQ tests began when it was proposed that they be used to justify sterilization laws, in an attempt to eliminate mental retardation, and immigration laws favoring Nordic stock. They continued when Arthur Jensen suggested that remedial education programs (begun in the War on Poverty) did not work because they were aimed at children with relatively low IQ, which is largely inherited and therefore difficult to change. Then followed debates over whether differences in IQ were due mostly to genetic differences or to differences in environment, culminating in Stephen Jay Gould's best seller "The Mismeasure of Man". Gould concluded that "deterministic arguments for ranking people according to a single scale of intelligence, no matter how numerically sophisticated, have recorded little more than social prejudice." While the authors admit that Gould's view still reflects a strong public sentiment about IQ tests, they feel that it bears very little relation to the current state of knowledge among scholars in the field.
Finally, the authors discuss current attempts to understand intelligence, describing three different schools.
THE CLASSICISTS: intelligence as a structure.
This school continues to extend the work of Spearman using factor analysis and assuming, as Spearman did, that some kind of general intelligence is associated with each individual. Workers in this school continue to try to understand the physiological basis for the variables identified by factor analysis and to improve methods of measuring general intelligence.
THE REVISIONISTS: intelligence as information processing.
This school tries to figure out what a person is doing when exercising intelligence, rather than what elements of intelligence are being put together. A leading worker in this field, Robert Sternberg, writes: "Of course a tester can always average over multiple scores. But are such averages revealing, or do they camouflage more than they reveal? If a person is a wonderful visualizer but can barely compose a sentence, and another person can write glowing prose but cannot begin to visualize the simplest spatial images, what do you really learn about those two people if they are reported to have the same IQ?"
THE RADICALS: the theory of multiple intelligences.
This school, led by Howard Gardner, rejects the notion of a general g and argues instead for seven distinct intelligences: linguistic, musical, logical-mathematical, spatial, bodily-kinesthetic, and two forms of "personal" intelligence. Gardner feels that there is no justification for calling musical ability a talent while calling language and logical thinking intelligence; he would be happy calling them all talents. He claims that the correlations that lead to the concept of g arise precisely because the tests are limited to questions that call on these two special aspects of intelligence.
Herrnstein and Murray consider themselves classicists and state that, despite all the apparent controversies, most workers in the field of psychometrics would agree with the following six conclusions, which they feel are consequences of classical theory.
(1) There is such a thing as cognitive ability on which humans differ.
(2) All standardized tests of academic aptitude or achievement measure this general factor to some degree, but IQ tests expressly designed for that purpose measure it most accurately.
(3) IQ scores match, to a first degree, whatever it is that people mean when they use the word intelligent or smart in ordinary language.
(4) IQ scores are stable, though not perfectly so, over much of a person's life.
(5) Properly administered IQ tests are not demonstrably biased against social, economic, ethnic, or racial groups.
(6) Cognitive ability is substantially heritable, apparently no less than 40 percent and no more than 80 percent.
The authors stress that IQ tests are useful in studying social phenomena but are "a limited tool for deciding what to make of any individual."
THE DATA USED IN THIS BOOK
Throughout the book the authors make use of data from the National Longitudinal Survey of Youth (NLSY), begun in 1979. This was a representative sample of 12,686 persons aged 14 to 22 in 1979. This group has been interviewed annually, and the authors use the data collected through 1990.
REVIEWER NOTE
While this study was meant to follow labor trends, a number of other groups used the subjects for their own studies. One of these provided the IQ data necessary for this book. The armed services had been using aptitude tests since 1950 to help in the selection of recruits and in special assignments; the current battery is the Armed Services Vocational Aptitude Battery (ASVAB). It had been suggested that the volunteer army was attracting a group less well qualified than the army obtained by the draft. To check this, it was decided to administer the ASVAB to the sample chosen for the NLSY. This study, called the Profile of American Youth, was administered by the National Opinion Research Center. The results showed that the volunteer army was getting a higher-quality army, as measured by these tests, than the draft had, but at the same time revealed significant differences between the performances of various ethnic groups. A study of these differences and a summary of explanations for them are provided in "The Profile of American Youth: Demographic Influences on ASVAB Test Performance" by R. Darrell Bock and Elsie G. J. Moore. It is interesting to compare their analysis with that of the authors of this book.
The ASVAB has ten subtests, which range from the kinds of questions you might find on an IQ test to vocational tests such as automobile repair and electronics. The Armed Forces Qualification Test (AFQT) is made up of four of the ASVAB subtests: word knowledge, paragraph comprehension, arithmetic reasoning, and mathematical knowledge. The authors show in an appendix that this test has the properties of a very good IQ test. In particular, they found that over 70% of the variance on the AFQT could be accounted for by a single factor, g, which they identify with general intelligence.
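A rough way to see what "70% of the variance accounted for by a single factor" means is to look at the largest eigenvalue of the correlation matrix of the subtests. The sketch below uses a made-up correlation matrix standing in for the four AFQT subtests; the real figures are in the book's appendix.

    # Hypothetical correlation matrix for four subtests (illustrative values only).
    import numpy as np

    R = np.array([[1.00, 0.75, 0.70, 0.65],
                  [0.75, 1.00, 0.72, 0.68],
                  [0.70, 0.72, 1.00, 0.74],
                  [0.65, 0.68, 0.74, 1.00]])

    eigenvalues = np.linalg.eigvalsh(R)
    share = eigenvalues.max() / eigenvalues.sum()   # eigenvalues of R sum to 4
    print(round(share, 2))   # share of total variance carried by the first factor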
CHAPTER 1. COGNITIVE CLASS AND EDUCATION 1900-1990
In this part the authors provide a number of graphs that show there is a cognitive sorting process going on in education. While many more students are going to college, a higher and higher proportion of the really bright students are going to a few select schools. We see graphs that exhibit the following:
(1) In the twentieth century the prevalence of college degrees increased rather continuously from 2% to 33%.
(2) From 1925 to 1950 about the same percentage (55%) of the top IQ quartile of high school graduates went to college. Starting in 1950, this percentage increased dramatically, from 55% to over 80% in 1980.
(3) In 1930 the mean IQ of all those attending college was about .7 standard deviations above the national mean, and the mean for those attending Ivy League and Seven Sisters colleges was about 1.3 standard deviations above the mean. By 1990 the mean IQ for all those attending college had remained about the same, at about .8 standard deviations above the mean, while the mean for the Ivy League and Seven Sisters colleges had increased to 2.7 standard deviations above the mean.
Since these graphs display standardized scores, the authors spend some time explaining the concepts of mean and standard deviation in the text and provide a more complete discussion in an appendix.
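As a concrete illustration of what a standardized score means (my own example, not one from the book), a score can be converted to standard-deviation units and then to an approximate percentile under a normal curve.

    # Converting an IQ score to standard units and an approximate percentile,
    # assuming a normal distribution with mean 100 and standard deviation 15.
    from scipy.stats import norm

    mean, sd = 100.0, 15.0
    score = 110.5
    z = (score - mean) / sd            # 0.7 standard deviations above the mean
    percentile = norm.cdf(z)           # about 0.76: higher than roughly 76% of people
    print(round(z, 2), round(percentile, 2))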
The authors express the fear that the clustering of the high IQ students in a small number of colleges will make them isolated from and unaware of the real world.
CHAPTER 2. COGNITIVE PARTITIONING BY OCCUPATION
The point of this chapter is that jobs sort people by cognitive ability in much the same way that colleges do. A group of twelve professions (accountants, architects, chemists, college teachers, dentists, engineers, lawyers, physicians, computer scientists, mathematicians, natural scientists, and social scientists) is considered to make up the "high-IQ professions". The mean IQ of people entering these professions is said to be about 120, which is the cutoff point for the top decile by IQ.
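That an IQ of about 120 corresponds to the top decile can be checked directly from the normal model with mean 100 and standard deviation 15; the snippet below is only a sanity check of that arithmetic.

    # 90th-percentile IQ under a normal(100, 15) model.
    from scipy.stats import norm
    print(round(norm.ppf(0.90, loc=100, scale=15), 1))   # roughly 119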
The authors provide a graph showing that, until 1950, about 12% of the top IQ decile were in these jobs. Then the percentage increased significantly, reaching about 38% in 1990. They link this to education with another graph showing that the proportion of CEOs with graduate training remained around 10% until 1950, then increased dramatically to about 60% in 1976. Combining these observations, the authors conclude that at mid-century the bright people were scattered throughout the labor force but that, as the century draws to a close, a very high proportion of them are concentrated within a few occupations, paralleling the cognitive partitioning by education.
CHAPTER 3. THE ECONOMIC PRESSURE TO PARTITION
This chapter is devoted to showing that IQ is a good predictor of job performance. It makes the book's first use of correlation, and an appendix explaining this concept is available for readers not familiar with it.
The authors discuss a number of studies showing that the correlation between IQ and job performance is typically at least .4 and often more. They point out that the military offers huge data sets for these studies, since everyone in the military must take the ASVAB (and hence also the AFQT), and members of the military attend training schools where, at the end of their schooling, they are measured for "training success" on measures that amount to assessments of job skills and knowledge. In these studies, the average correlation between IQ and job performance is about .6. By looking at the high correlation between the g factor extracted from the test and job performance, they conclude that the g factor is the key to success in these jobs.
Modern studies in the civilian population are typically done by meta-analysis of small studies, leading to results similar to those found in the military studies. An exception was a report of the National Academy of Sciences, "Fairness in Employment Testing", which reported a correlation of only about .25. The authors suggest that this is because the researchers for that study did not apply corrections for restricted range, a choice those researchers felt was appropriate for the purposes of their study. (Restricted range means that the sample did not include reasonable numbers from the entire range of possible scores.) When these corrections are made, the authors say, the correlation would increase to around .4, consistent with other studies.
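The correction the authors have in mind can be illustrated with the standard formula for direct restriction of range (Thorndike's Case II). In the sketch below, the ratio of unrestricted to restricted standard deviations is set to 1.7, an assumption of mine chosen only to show how an observed correlation of about .25 can rise to about .4.

    # Correcting a correlation for direct restriction of range (Thorndike Case II).
    import math

    def correct_for_range_restriction(r_restricted, sd_ratio):
        """sd_ratio = unrestricted SD / restricted SD of the predictor."""
        u = sd_ratio
        return (r_restricted * u) / math.sqrt(1 - r_restricted**2 + (r_restricted**2) * u**2)

    print(round(correct_for_range_restriction(0.25, 1.7), 2))   # about 0.40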
The authors also compare various predictors for job performance and report the results of a study that showed that the highest correlation between a predictor and job performance rating was the cognitive test score (.53) followed by biographical data (.37), reference checks (.26), education (.22), interview (.14), college grades (.11) and interest (.10).
The chapter concludes by remarking that the Supreme Court decision in Griggs v. Duke Power Co. (1971), which severely limited the use of IQ tests for job selection, is costing the American economy billions of dollars.
REVIEWER NOTE.
The main issue raised by the Griggs v. Duke Power decision is the possibility of so-called "disparate-impact" lawsuits. These are lawsuits that challenge employment practices that unintentionally but disproportionately affect people of a particular race, color, religion, sex, or national origin.
The Supreme Court has twice changed the ground rules set up in the Griggs v. Duke Power decision. The current rules for these suits are governed by the Civil Rights Act of 1991. According to this law, if a plaintiff shows that a specific part of an employment practice disproportionately affects a particular group, then the employer must be able to demonstrate that the practice or criterion in question is consistent with business necessity (whatever that means).
In order to prove disparate impact using statistical comparisons, the comparison must be with racial compositions of the qualified people in the work force, not the racial composition of the entire work force.
When multiple employment criteria are required and it can be argued that they cannot be separated, then the entire employment process may be challenged if it can be shown to have a disproportionate effect on a particular group.
For a more detailed discussion of this legislation see "New Act Clarifies Disparate Impact Law", Casey and Montgomery, The National Law Journal, March 9, 1992.
CHAPTER 4. STEEPER LADDERS, NARROWER GATES
To illustrate the idea of steeper ladders and narrower gates, we are given a graph showing that the salary of engineers was rather constant from 1930 to 1953, at about $30,000, and then increased dramatically to about $70,000 by 1960. During the same period, the salaries of manufacturing employees showed only a gradual increase, from $10,000 to $20,000.
The authors point out that labor statistics reveal a mysterious "residual" in attempts to account for the increased spread in real wages between 1963 and 1987, even after taking into account education, experience, and gender. Not surprisingly, they make the case for IQ being this residual.
The authors also deal in this chapter with the heritability of IQ. They describe the various kinds of studies which have been used to investigate it. They remark that the technical issues in measuring heritability are too formidable to get into, so they explain only how heritability is measured in one important special case, namely in studies of identical twins reared apart. Here, since the twins have identical genes and different environments, it seems reasonable to define the heritability to be the correlation between the twins' IQ scores, which is found to be around .75. They say that other studies typically provide lower values for heritability, but seldom under .4.
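In this special case the heritability estimate is nothing more than the correlation between the twins' scores. A minimal sketch, with invented IQ pairs standing in for real twin data (actual studies involve many more pairs):

    # Heritability estimated as the IQ correlation of identical twins reared apart.
    # The pairs below are invented for illustration only.
    import numpy as np

    twin_a = np.array([95, 110, 102, 88, 123, 97, 105, 115])
    twin_b = np.array([99, 104, 108, 91, 118, 94, 101, 120])

    h_squared_estimate = np.corrcoef(twin_a, twin_b)[0, 1]
    print(round(h_squared_estimate, 2))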
Part II: Cognitive Classes and Social Behavior
We continue our attempt to describe what is in this lengthy book. This has become less necessary with the appearance of a review by someone who has actually read the book: Stephen Jay Gould's review in the November 28 issue of The New Yorker (page 139). However, we shall not give up quite yet.
Part II is almost entirely based upon the National Longitudinal Survey of Youth (NLSY). Recall that this study began in 1979 and follows a representative sample of about 12,000 youths aged 14 to 22. It provides information about parental socioeconomic status and subsequent work, education, and family history. It also has IQ information because, in 1980, the Department of Defense gave the participants its battery of enlistment tests to see how this civilian sample compared with those in the volunteer army.
In Part II the authors seek to see how IQ is related to social behavior. They limit themselves to non-Latino whites to avoid the additional variation of race, which they treat in Part III. Their method is to carry out a multiple regression analysis with the independent variables being cognitive ability and the parents' socioeconomic status (SES), based on education, income, and occupational prestige, and a dependent variable which, in the first chapter, is poverty. The next seven chapters replace poverty successively with education, unemployment, illegitimacy, welfare dependency, parenting, crime, and civil behavior.
IQ scores are standardized with mean 100 and standard deviation 15, and the NLSY youths are divided into five groups corresponding to the intervals determined by the 5th, 25th, 75th, and 95th percentiles. Those in these five groups are labeled very dull, dull, normal, bright, and very bright. In the same way the NLSY youths are also divided into five groups by the 5th, 25th, 75th, and 95th percentiles of the SES index and labeled very low, low, average, high, and very high.
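The assignment into classes can be reproduced mechanically from the percentile cut points; the sketch below is my own rendering of the rule just described, applied to simulated scores.

    # Assigning people to cognitive classes by the 5th, 25th, 75th, and 95th percentiles.
    import numpy as np

    rng = np.random.default_rng(1)
    iq = rng.normal(100, 15, size=10_000)                      # simulated scores

    cuts = np.percentile(iq, [5, 25, 75, 95])
    labels = ["very dull", "dull", "normal", "bright", "very bright"]
    classes = np.array(labels)[np.searchsorted(cuts, iq)]

    for label in labels:
        print(label, round((classes == label).mean(), 2))      # roughly .05, .20, .50, .20, .05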
The authors find that the percentages in poverty for the five socioeconomic groups, going from very low to very high, are 24, 12, 7, 3, and 3. The percentages in poverty for the five IQ classes, going from very dull to very bright, are 30, 16, 6, 3, and 2. They observe the similarity of these percentages and turn to multiple regression to attempt to see which variable is more directly related to poverty.
For this they carry out a logistic regression with independent variables age, IQ, and SES, and dependent variable poverty. Given IQ and SES, age has little effect, so they concentrate on IQ versus SES. To do this they plot two curves on the same set of axes, an IQ curve and an SES curve. The IQ curve considers a person of average age and SES and plots the probability of poverty as IQ goes from low to high. The SES curve considers a person of average age and IQ and plots the probability of poverty as SES goes from low to high. The IQ curve shows about a 26% probability of poverty for very low IQ, decreasing to about a 2% probability for very high IQ. The SES curve shows about an 11% probability of poverty for a very low SES score, decreasing to about 5% for a very high SES score. Thus fixing SES and varying IQ has a significant effect on the probability of poverty, while fixing IQ and varying SES does not. It is argued that this shows that IQ is more directly related to poverty than is socioeconomic status.
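The procedure just described can be mimicked on synthetic data. In the sketch below (hypothetical coefficients and simulated data, not the NLSY), a logistic regression of poverty on age, IQ, and SES is fit, and the fitted probability is then traced across the IQ range with age and SES held at their means, which is how the two curves are drawn.

    # Sketch of the book's regression procedure on synthetic data (not the NLSY).
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(2)
    n = 20_000
    age = rng.uniform(25, 33, n)
    iq = rng.normal(0, 1, n)                      # standardized IQ
    ses = 0.4 * iq + 0.9 * rng.normal(0, 1, n)    # SES correlated with IQ (assumed)

    # Assumed "true" model: poverty depends mostly on IQ, a little on SES, not on age.
    logit = -2.0 - 1.0 * iq - 0.3 * ses
    poverty = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(float)

    X = np.column_stack([np.ones(n), age, iq, ses])
    fit = sm.Logit(poverty, X).fit(disp=0)

    # Probability of poverty across the IQ range, with age and SES held at their means.
    iq_grid = np.linspace(-2, 2, 5)
    X_grid = np.column_stack([np.ones(5), np.full(5, age.mean()),
                              iq_grid, np.full(5, ses.mean())])
    print(np.round(fit.predict(X_grid), 2))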
This same procedure is carried out in the subsequent chapters to show that being smart is more important than being privileged in predicting whether a person will get a college degree, be unemployed, be on welfare, have an illegitimate child, and so on. There are some exceptions, but the general theme of Part II is that it is IQ, and not socioeconomic status, that is important in predicting these social variables.
Little is said about variation while discussing these examples, but in the introduction to Part II the authors remark that "cognitive ability will almost always explain less than 20 percent of the variation among people, usually less than 10 percent and often less than 5 percent." (They give all the regression details in an appendix.) "Which means that you cannot predict what a given person will do from his IQ score. On the other hand, despite the low association at the individual level, large differences in social behavior separate groups of people when the groups differ intellectually on the average."
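The point of this passage, that a variable can explain little of the individual-level variation and still separate groups sharply, can be seen in a small simulation (all numbers below are my own): IQ is given a correlation of .3 with a latent propensity, so it explains only about 9 percent of the variance, yet the rate of the adverse outcome in the bottom IQ quintile comes out several times the rate in the top quintile.

    # How a weak individual-level correlation can still produce large group differences.
    import numpy as np

    rng = np.random.default_rng(3)
    n = 100_000
    iq = rng.standard_normal(n)
    latent = 0.3 * iq + np.sqrt(1 - 0.3**2) * rng.standard_normal(n)   # r = .3, r^2 = 9%
    outcome = latent < np.percentile(latent, 15)                        # some adverse outcome

    bottom = iq < np.percentile(iq, 20)
    top = iq > np.percentile(iq, 80)
    print(round(outcome[bottom].mean(), 2), round(outcome[top].mean(), 2))
    # The outcome rate in the bottom IQ quintile is several times that in the top quintile.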
DISCUSSION QUESTIONS
(1) What is the basis for the authors' argument that, even though cognitive ability explains only a small percentage of the variation among people, large differences in social behavior separate groups of people when the groups differ intellectually on the average?
(2) In the introduction to Part II the authors state: "We will argue that intelligence itself, not just its correlation with socioeconomic status, is responsible for group differences." What statistical evidence would allow the authors to conclude this?
Part III: The National Context
Part III discusses differences in performance on intelligence tests within ethnic groups and between ethnic groups. The authors start by pointing out that they have already shown that differences in cognitive ability within a group (in particular within the white group analyzed in Part II) can be very large, and that this fact has political repercussions. They remark that the differences within ethnic groups are much larger than those between groups, so any problems that these differences cause would not go away in a homogeneous population.
In Chapter 13 they describe the differences between groups as they see them. They say that studies suggest that Asians have a slightly higher IQ on average than whites, but these results are not conclusive. On the other hand, studies have consistently shown that blacks have an average IQ about one standard deviation lower than that of whites. The difference in the NLSY data (which the authors use throughout the book) was 1.21 standard deviations. The authors remark that a difference of one standard deviation still allows for a lot of overlap in the distributions of IQ scores between blacks and whites. In particular, there should be about 100,000 blacks with IQ scores of 125 or above. On the other hand, since there are six times as many whites as blacks in the United States, the disproportions between whites and blacks at the higher levels become very large.
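The arithmetic behind these remarks can be made explicit. Assuming normal distributions one standard deviation apart (the sketch below uses means of 85 and 100 with standard deviation 15, an approximation rather than the book's exact figures), the fraction of each group above a high cutoff, and hence the disproportion at the top, follows directly.

    # Tail proportions for two normal distributions one standard deviation apart.
    # Means below are approximate and for illustration only.
    from scipy.stats import norm

    share_above_125_lower = norm.sf(125, loc=85, scale=15)    # group with the lower mean
    share_above_125_higher = norm.sf(125, loc=100, scale=15)  # group with the higher mean
    print(round(share_above_125_lower, 4), round(share_above_125_higher, 4))
    print(round(share_above_125_higher / share_above_125_lower, 1))  # per-capita ratio at 125+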
The authors ask whether these differences are authentic. They ask first if they could be due to cultural bias or other artifacts of the tests. Studies concluding that this is not the case are discussed briefly here and in detail in an appendix. They next ask if the differences are due to socioeconomic status. Looking at the NLSY data and controlling for socioeconomic differences, they find that socioeconomic status explains 37 percent of the difference. They remark that "controlling" is hard to interpret here, since socioeconomic status can also be a result of IQ.
They suggest that if the differences were socioeconomic in origin, then the gap should decrease as their measure of socioeconomic status increases, and they present a graph of their data showing that this is not the case. They remark that the difference between black and white IQ does appear to be decreasing over time, and they attribute this to environmental changes.
They next turn to the question of whether the difference is due to genetics or environment or both. They point out that there is no consensus on this question and present the arguments for and against a genetic explanation. The arguments presented make it clear why it is difficult to claim a solution to this problem: for each of the obvious explanations, they cite studies suggesting that the explanation does not hold up. They conclude this discussion with an uncharacteristically bold statement: "In sum: If tomorrow you knew beyond a shadow of a doubt that all cognitive differences between races were 100 percent genetic in origin, nothing of any significance should change...The impulse to think that environmental sources of difference are less threatening than genetic ones is natural but illusory."
The arguments presented in this chapter are well known and are, I feel, better presented in the book "Intelligence" by Nathan Brody (Academic Press, 1992).
Chapter 14 looks at what happens when you control for IQ. Here the authors find that this removes the difference for some variables and not for others. For example, after controlling for IQ, the probability of graduating from college is higher for blacks, as is the probability of being in a high-IQ occupation, and wage differentials shrink to a few hundred dollars. Controlling for IQ does not significantly change the difference in black-white marriage rates or in welfare recipiency. It does significantly reduce the difference in the proportion of children living in poverty and in the proportion incarcerated.
Chapter 15 is entitled "The Demography of Intelligence". The fact that women with low IQ have more children than those with high IQ, together with changes in the immigrant population, suggests to the authors that demographic trends are exerting downward pressure on the distribution of cognitive ability in the United States. They point out that this has been a difficult matter to settle. The "Flynn effect" refers to the observation that raw IQ scores have been rising worldwide over time. The authors nevertheless conclude that the demographic trends are worrisome, even though there may well be improvements in cognitive ability from improved health and education.
Chapter 16 considers the relation between low cognitive ability and social problems. The authors concede that causal relations are complex and hard to establish definitively. This leads them simply to ask whether persons with serious social problems tend to be in the lower IQ groups. Looking at the NLSY data, they present graphs with the ten IQ deciles on the x-axis and the proportion of people with a specific problem on the y-axis. They start with poverty: the bar chart begins at about 30 percent poverty in the lowest IQ decile and decreases to about 7% in the highest decile. When the dependent variable is dropping out of high school, being interviewed in jail (for men), or having received welfare (for women), the pattern is the same. After these and many more indications that low IQ is associated with trouble, the authors conclude with their Middle Class Values Index. To qualify for a "yes" answer, an NLSY person had to be married to his or her first spouse, in the labor force (if a man), bearing children within wedlock (if a woman), and never have been interviewed in jail. Here there are respectable proportions of "yes" answers in all IQ deciles, which the authors remark should remind us that most people in the lower half of the cognitive distribution are behaving themselves.
DISCUSSION QUESTIONS
(1) In The Bell Curve we find the following statement: "The most modern study of identical twins reared in separate homes suggests a heritability for general intelligence between .75 and .80." This apparently accounts for the upper bound when the authors say later that the heritability of IQ falls in the range of .4 to .8. On the other hand, the .75 to .80 figure is actually a correlation. What seems wrong here?
(2) What do you think of the "Middle Class Values Index" as defined by Herrnstein and Murray?
Part IV: Living Together
Part IV of "The Bell Curve" begins with Chapter 17, which discusses attempts to improve IQ scores. Some early studies suggested that better nutrition did not improve IQ; however, while these were large studies, they were not controlled studies. Two more recent controlled studies, one in Great Britain and one in California, showed a significant difference between the group given vitamin and mineral supplements and the group given a placebo. In the California study, the average benefit from providing the recommended daily allowances was about four points in nonverbal intelligence. The authors feel that improved nutrition is effective but suggest that there are still questions about how effective.
Next, the role of improved education in raising IQ scores is considered. The authors cite studies suggesting that the worldwide increase in average IQ can be attributed to increased schooling. They conclude that variation in the amount of schooling accounts for part of the observed variation of IQ scores between groups.
The authors then discuss studies of how improving educational opportunities might increase IQ. They start with the negative results of the Coleman study, a large national survey of 645,000 students. This survey did not find any significant benefit to IQ scores that could be credited to better school quality. (A discussion of this report is on the video series "Against All Odds".)
Studies on the Head Start Program showed that this program increased IQ significantly during the period of the program, but that these differences disappeared over time. The authors mention some other more positive results but conclude that, overall, what we know about this approach is not terribly encouraging.
More positive results are cited for the hypothesis that IQ scores can be significantly improved by adoption from a poor environment into a good one. One meta-study concluded that the increase in IQ would be about 6 points. Two small studies in France suggested that a change in environment from low socioeconomic status to high socioeconomic status could result in as much as a 12-point increase in IQ.
Chapter 18 is titled "The Leveling of American Education". The authors begin with a look at what test scores say about changes in students' abilities from the 50's to the present. They present a graph of the composite scores of Iowa 9th-graders on the Iowa Test of Basic Skills. The graph shows a steep improvement from the 50's to the 60's, followed by a significant decline into the 70's, followed by steady improvement to a new high by the 90's. Graphs of national SAT scores show that these scores remained about the same from the 50's to the 60's, declined significantly (about 1/2 standard deviation on the verbal test and 1/3 standard deviation on the math test) from the 60's to the 80's, and then remained about the same from the 80's to the 90's.
The authors argue that the familiar explanation, which attributes the great decline in SAT scores to the "democratization" of the test-taking pool during the 60's and 70's, is not correct. They point out that the SAT pool expanded dramatically during the 50's and 60's while average scores remained constant. In addition, throughout most of the white SAT score decline the white SAT pool was shrinking, not expanding.
They next look at what has happened to the most gifted students. They provide a graph showing the percentage of 17-year-olds who scored 700 or higher on the SAT. The percentage for math scores decreased from 1970 to 1983 and then increased to its highest level ever by 1990. The percentage for verbal scores decreased during the first period and remained steady after that.
They give the following explanation for the changes illustrated by these graphs. The declines in both the Iowa scores and the SAT scores in the 60's are attributed to what they call "dumbing down": a period characterized by simplified textbooks (fewer difficult words and easier exercises), fewer core requirements, grade inflation, and so forth.
They suggest that the dumbed-down books would actually help students at the lower end of the spectrum, which would account for the increase in overall preparation indicated by the Iowa scores from the 80's to the 90's. Verbal SAT scores did not increase, because of the continued use of dumbed-down books, the increased use of television, and the general decline in writing, including letter writing. Math SAT scores did not decrease during this period because algebra and calculus are more constant subjects and harder to dumb down.
In their discussion of policy implications, they are not very optimistic about new government policies being able to solve general education problems. They point out that surveys have shown most American parents do not support drastic increases in their children's work load and, in fact, that the average American has little incentive to work harder. They argue that educators should return to the idea that one of the chief purposes of education is to educate the gifted and "foster wisdom and virtue through the ideal of the educated man".
Chapter 19 is on affirmative action in higher education. The authors present statistics on the differences in SAT scores among various groups. Evidently these statistics are more easily obtained from private schools than from public schools. Their first graph shows how the average SAT scores of blacks and Asians differ from those of whites for entering students at a group of selective schools. The median total SAT score for blacks was 180 points lower than for whites; the median for Asians was 30 points higher than for whites. The difference for blacks ranged from 95 points (Harvard) to 288 points (Berkeley). Data for students admitted to medical schools and law schools also show significant differences. In all cases the average test scores of those admitted tend to follow the differences in scores nationally.
The authors give three reasons for academic institutions to give an edge to black students: institutional benefit, social utility, and just deserts. Accepting these, they propose a way to determine a reasonable advantage by considering how one would choose between two students differing only as to minority or white status, or privileged or underprivileged status.
They show that black enrollment in college increased dramatically after the 60's, when affirmative action was introduced; it dropped off in the late 70's and has pretty well leveled off, with a slight increase, since then. Thus, they say, affirmative action has been successful in getting more minority students into colleges. However, they feel that the differences in performance and dropout rates, and the way that students, black and white, view these differences, are harmful. The authors feel that this would not be the case if admission policies were changed so as to continue making a serious effort to attract minority applicants while no longer producing such large differences between the SAT distributions. The result, they argue, would be more consistent performance among the various groups and more harmony within the student body.
Chapter 20 considers, in a similar way, affirmative action in the workplace. As in the case of education, the authors argue that affirmative action has had the desired effect of removing disparities in job opportunities and wages that were obviously due to discrimination. However, they look at data suggesting that the results of affirmative action have gone beyond that, giving a significant advantage to blacks in clerical jobs and even more so in professional and technical jobs, at least when comparing groups with comparable IQ scores.
Their earlier discussion of the relation of IQ to job performance leads them to conclude that this has serious economic implications, and they feel it leads to increased racial tension. They conclude that current anti-discrimination policies should be replaced by vigorous enforcement of equal treatment of all under the law.
Chapter 21 is entitled "The Way We Are Headed". The authors return to their earlier concerns that we are moving in the direction of (a) an increasingly isolated cognitive elite, (b) a merging of the cognitive elite with the affluent, and (c) a deteriorating quality of life for people at the bottom end of the cognitive ability distribution. This leads them to some pretty gloomy predictions. In the final chapter, called "A Place for Everyone", they give their ideas on how to prevent this. A somewhat simplified version of the authors' view is: we should accept that there are differences, cognitive and otherwise, between people, and figure out ways to make life interesting and valued for all, in terms of the abilities they do have.
DISCUSSION QUESTIONS
(1) In the California nutrition study, some of those in the treated group had a large increase, about 15 points, in their verbal scores and some had no increase at all. Why might some not have had any increase?
(2) The authors review the evidence that coaching increases SAT scores. They cite a recent survey of studies suggesting that about 60 hours of studying and coaching will increase combined math and verbal scores by an average of about 40 points. Does this seem consistent with what you have experienced or know about coaching for SAT scores?
Five years ago, several executives at McKinsey & Company, America's largest and most prestigious management-consulting firm, launched what they called the War for Talent. Thousands of questionnaires were sent to managers across the country. Eighteen companies were singled out for special attention, and the consultants spent up to three days at each firm, interviewing everyone from the C.E.O. down to the human-resources staff. McKinsey wanted to document how the top-performing companies in America differed from other firms in the way they handle matters like hiring and promotion. But, as the consultants sifted through the piles of reports and questionnaires and interview transcripts, they grew convinced that the difference between winners and losers was more profound than they had realized. "We looked at one another and suddenly the light bulb blinked on," the three consultants who headed the project—Ed Michaels, Helen Handfield-Jones, and Beth Axelrod—write in their new book, also called "The War for Talent." The very best companies, they concluded, had leaders who were obsessed with the talent issue. They recruited ceaselessly, finding and hiring as many top performers as possible. They singled out and segregated their stars, rewarding them disproportionately, and pushing them into ever more senior positions. "Bet on the natural athletes, the ones with the strongest intrinsic skills," the authors approvingly quote one senior General Electric executive as saying. "Don't be afraid to promote stars without specifically relevant experience, seemingly over their heads." Success in the modern economy, according to Michaels, Handfield-Jones, and Axelrod, requires "the talent mind-set": the "deep-seated belief that having better talent at all levels is how you outperform your competitors." This "talent mind-set" is the new orthodoxy of American management. It is the intellectual justification for why such a high premium is placed on degrees from first-tier business schools, and why the compensation packages for top executives have become so lavish. In the modern corporation, the system is considered only as strong as its stars, and, in the past few years, this message has been preached by consultants and management gurus all over the world. None, however, have spread the word quite so ardently as McKinsey, and, of all its clients, one firm took the talent mind-set closest to heart. It was a company where McKinsey conducted twenty separate projects, where McKinsey's billings topped ten million dollars a year, where a McKinsey director regularly attended board meetings, and where the C.E.O. himself was a former McKinsey partner. The company, of course, was Enron. The Enron scandal is now almost a year old. The reputations of Jeffrey Skilling and Kenneth Lay, the company's two top executives, have been destroyed. Arthur Andersen, Enron's auditor, has been driven out of business, and now investigators have turned their attention to Enron's investment bankers. The one Enron partner that has escaped largely unscathed is McKinsey, which is odd, given that it essentially created the blueprint for the Enron culture. Enron was the ultimate "talent" company. When Skilling started the corporate division known as Enron Capital and Trade, in 1990, he "decided to bring in a steady stream of the very best college and M.B.A. graduates he could find to stock the company with talent," Michaels, Handfield-Jones, and Axelrod tell us. 
During the nineties, Enron was bringing in two hundred and fifty newly minted M.B.A.s a year. "We had these things called Super Saturdays," one former Enron manager recalls. "I'd interview some of these guys who were fresh out of Harvard, and these kids could blow me out of the water. They knew things I'd never heard of." Once at Enron, the top performers were rewarded inordinately, and promoted without regard for seniority or experience. Enron was a star system. "The only thing that differentiates Enron from our competitors is our people, our talent," Lay, Enron's former chairman and C.E.O., told the McKinsey consultants when they came to the company's headquarters, in Houston. Or, as another senior Enron executive put it to Richard Foster, a McKinsey partner who celebrated Enron in his 2001 book, "Creative Destruction," "We hire very smart people and we pay them more than they think they are worth." The management of Enron, in other words, did exactly what the consultants at McKinsey said that companies ought to do in order to succeed in the modern economy. It hired and rewarded the very best and the very brightest—and it is now in bankruptcy. The reasons for its collapse are complex, needless to say. But what if Enron failed not in spite of its talent mind-set but because of it? What if smart people are overrated? 2At the heart of the McKinsey vision is a process that the War for Talent advocates refer to as "differentiation and affirmation." Employers, they argue, need to sit down once or twice a year and hold a "candid, probing, no-holds-barred debate about each individual," sorting employees into A, B, and C groups. The A's must be challenged and disproportionately rewarded. The B's need to be encouraged and affirmed. The C's need to shape up or be shipped out. Enron followed this advice almost to the letter, setting up internal Performance Review Committees. The members got together twice a year, and graded each person in their section on ten separate criteria, using a scale of one to five. The process was called "rank and yank." Those graded at the top of their unit received bonuses two-thirds higher than those in the next thirty per cent; those who ranked at the bottom received no bonuses and no extra stock options—and in some cases were pushed out. How should that ranking be done? Unfortunately, the McKinsey consultants spend very little time discussing the matter. One possibility is simply to hire and reward the smartest people. But the link between, say, I.Q. and job performance is distinctly underwhelming. On a scale where 0.1 or below means virtually no correlation and 0.7 or above implies a strong correlation (your height, for example, has a 0.7 correlation with your parents' height), the correlation between I.Q. and occupational success is between 0.2 and 0.3. "What I.Q. doesn't pick up is effectiveness at common-sense sorts of things, especially working with people," Richard Wagner, a psychologist at Florida State University, says. "In terms of how we evaluate schooling, everything is about working by yourself. If you work with someone else, it's called cheating. Once you get out in the real world, everything you do involves working with other people." Wagner and Robert Sternberg, a psychologist at Yale University, have developed tests of this practical component, which they call "tacit knowledge." Tacit knowledge involves things like knowing how to manage yourself and others, and how to navigate complicated social situations. 
Here is a question from one of their tests: You have just been promoted to head of an important department in your organization. The previous head has been transferred to an equivalent position in a less important department. Your understanding of the reason for the move is that the performance of the department as a whole has been mediocre. There have not been any glaring deficiencies, just a perception of the department as so-so rather than very good. Your charge is to shape up the department. Results are expected quickly. Rate the quality of the following
strategies for succeeding at your new position. Wagner finds that how well people do on a test like this predicts how well they will do in the workplace: good managers pick (b) and (e); bad managers tend to pick (c). Yet there's no clear connection between such tacit knowledge and other forms of knowledge and experience. The process of assessing ability in the workplace is a lot messier than it appears. An employer really wants to assess not potential but performance. Yet that's just as tricky. In "The War for Talent," the authors talk about how the Royal Air Force used the A, B, and C ranking system for its pilots during the Battle of Britain. But ranking fighter pilots—for whom there are a limited and relatively objective set of performance criteria (enemy kills, for example, and the ability to get their formations safely home)—is a lot easier than assessing how the manager of a new unit is doing at, say, marketing or business development. And whom do you ask to rate the manager's performance? Studies show that there is very little correlation between how someone's peers rate him and how his boss rates him. The only rigorous way to assess performance, according to human-resources specialists, is to use criteria that are as specific as possible. Managers are supposed to take detailed notes on their employees throughout the year, in order to remove subjective personal reactions from the process of assessment. You can grade someone's performance only if you know their performance. And, in the freewheeling culture of Enron, this was all but impossible. People deemed "talented" were constantly being pushed into new jobs and given new challenges. Annual turnover from promotions was close to twenty per cent. Lynda Clemmons, the so-called "weather babe" who started Enron's weather derivatives business, jumped, in seven quick years, from trader to associate to manager to director and, finally, to head of her own business unit. How do you evaluate someone's performance in a system where no one is in a job long enough to allow such evaluation? The answer is that you end up doing performance evaluations that aren't based on performance. Among the many glowing books about Enron written before its fall was the best-seller "Leading the Revolution," by the management consultant Gary Hamel, which tells the story of Lou Pai, who launched Enron's power-trading business. Pai's group began with a disaster: it lost tens of millions of dollars trying to sell electricity to residential consumers in newly deregulated markets. The problem, Hamel explains, is that the markets weren't truly deregulated: "The states that were opening their markets to competition were still setting rules designed to give their traditional utilities big advantages." It doesn't seem to have occurred to anyone that Pai ought to have looked into those rules more carefully before risking millions of dollars. He was promptly given the chance to build the commercial electricity-outsourcing business, where he ran up several more years of heavy losses before cashing out of Enron last year with two hundred and seventy million dollars. Because Pai had "talent," he was given new opportunities, and when he failed at those new opportunities he was given still more opportunities . . . because he had "talent." "At Enron, failure—even of the type that ends up on the front page of the Wall Street Journal—doesn't necessarily sink a career," Hamel writes, as if that were a good thing. 
Presumably, companies that want to encourage risk-taking must be willing to tolerate mistakes. Yet if talent is defined as something separate from an employee's actual performance, what use is it, exactly? 3What the War for Talent amounts to is an argument for indulging A employees, for fawning over them. "You need to do everything you can to keep them engaged and satisfied—even delighted," Michaels, Handfield-Jones, and Axelrod write. "Find out what they would most like to be doing, and shape their career and responsibilities in that direction. Solve any issues that might be pushing them out the door, such as a boss that frustrates them or travel demands that burden them." No company was better at this than Enron. In one oft-told story, Louise Kitchin, a twenty-nine-year-old gas trader in Europe, became convinced that the company ought to develop an online-trading business. She told her boss, and she began working in her spare time on the project, until she had two hundred and fifty people throughout Enron helping her. After six months, Skilling was finally informed. "I was never asked for any capital," Skilling said later. "I was never asked for any people. They had already purchased the servers. They had already started ripping apart the building. They had started legal reviews in twenty-two countries by the time I heard about it." It was, Skilling went on approvingly, "exactly the kind of behavior that will continue to drive this company forward." Kitchin's qualification for running EnronOnline, it should be pointed out, was not that she was good at it. It was that she wanted to do it, and Enron was a place where stars did whatever they wanted. "Fluid movement is absolutely necessary in our company. And the type of people we hire enforces that," Skilling told the team from McKinsey. "Not only does this system help the excitement level for each manager, it shapes Enron's business in the direction that its managers find most exciting." Here is Skilling again: "If lots of [employees] are flocking to a new business unit, that's a good sign that the opportunity is a good one. . . . If a business unit can't attract people very easily, that's a good sign that it's a business Enron shouldn't be in." You might expect a C.E.O. to say that if a business unit can't attract customers very easily that's a good sign it's a business the company shouldn't be in. A company's business is supposed to be shaped in the direction that its managers find most profitable. But at Enron the needs of the customers and the shareholders were secondary to the needs of its stars. A dozen years ago, the psychologists Robert Hogan, Robert Raskin, and Dan Fazzini wrote a brilliant essay called "The Dark Side of Charisma." It argued that flawed managers fall into three types. One is the High Likability Floater, who rises effortlessly in an organization because he never takes any difficult decisions or makes any enemies. Another is the Homme de Ressentiment, who seethes below the surface and plots against his enemies. The most interesting of the three is the Narcissist, whose energy and self-confidence and charm lead him inexorably up the corporate ladder. Narcissists are terrible managers. They resist accepting suggestions, thinking it will make them appear weak, and they don't believe that others have anything useful to tell them. 
"Narcissists are biased to take more credit for success than is legitimate," Hogan and his co-authors write, and "biased to avoid acknowledging responsibility for their failures and shortcomings for the same reasons that they claim more success than is their due." Moreover: Narcissists typically make judgments with
greater confidence than other people . . . and, because their judgments are rendered with such conviction, other people tend to believe them and the narcissists become disproportionately more influential in group situations. Finally, because of their self-confidence and strong need for recognition, narcissists tend to "self-nominate"; consequently, when a leadership gap appears in a group or organization, the
narcissists rush to fill it. Tyco Corporation and WorldCom were the Greedy Corporations: they were purely interested in short-term financial gain. Enron was the Narcissistic Corporation—a company that took more credit for success than was legitimate, that did not acknowledge responsibility for its failures, that shrewdly sold the rest of us on its genius, and that substituted self-nomination for disciplined management. At one point in "Leading the Revolution," Hamel tracks down a senior Enron executive, and what he breathlessly recounts—the braggadocio, the self-satisfaction—could be an epitaph for the talent mind-set: "You cannot control the atoms within a
nuclear fusion reaction," said Ken Rice when he was head of Enron Capital and Trade Resources (ECT), America's largest marketer of natural gas and largest buyer and seller of electricity. Adorned in a black T-shirt, blue jeans, and cowboy boots, Rice drew a box on an office whiteboard that pictured his business unit as a nuclear reactor. Little circles in the box represented its "contract originators," the gunslingers charged with doing deals and creating new businesses. Attached to each circle was an arrow. In Rice's diagram the arrows were pointing in all different directions. "We allow people to go in
whichever direction that they want to go." The distinction between the Greedy Corporation and the Narcissistic Corporation matters, because the way we conceive our attainments helps determine how we behave. Carol Dweck, a psychologist at Columbia University, has found that people generally hold one of two fairly firm beliefs about their intelligence: they consider it either a fixed trait or something that is malleable and can be developed over time. Five years ago, Dweck did a study at the University of Hong Kong, where all classes are conducted in English. She and her colleagues approached a large group of social-sciences students, told them their English-proficiency scores, and asked them if they wanted to take a course to improve their language skills. One would expect all those who scored poorly to sign up for the remedial course. The University of Hong Kong is a demanding institution, and it is hard to do well in the social sciences without strong English skills. Curiously, however, only the ones who believed in malleable intelligence expressed interest in the class. The students who believed that their intelligence was a fixed trait were so concerned about appearing to be deficient that they preferred to stay home. "Students who hold a fixed view of their intelligence care so much about looking smart that they act dumb," Dweck writes, "for what could be dumber than giving up a chance to learn something that is essential for your own success?" In a similar experiment, Dweck gave a class of preadolescent students a test filled with challenging problems. After they were finished, one group was praised for its effort and another group was praised for its intelligence. Those praised for their intelligence were reluctant to tackle difficult tasks, and their performance on subsequent tests soon began to suffer. Then Dweck asked the children to write a letter to students at another school, describing their experience in the study. She discovered something remarkable: forty per cent of those students who were praised for their intelligence lied about how they had scored on the test, adjusting their grade upward. They weren't naturally deceptive people, and they weren't any less intelligent or self-confident than anyone else. They simply did what people do when they are immersed in an environment that celebrates them solely for their innate "talent." They begin to define themselves by that description, and when times get tough and that self-image is threatened they have difficulty with the consequences. They will not take the remedial course. They will not stand up to investors and the public and admit that they were wrong. They'd sooner lie. 4The broader failing of McKinsey and its acolytes at Enron is their assumption that an organization's intelligence is simply a function of the intelligence of its employees. They believe in stars, because they don't believe in systems. In a way, that's understandable, because our lives are so obviously enriched by individual brilliance. Groups don't write great novels, and a committee didn't come up with the theory of relativity. But companies work by different rules. They don't just create; they execute and compete and coördinate the efforts of many different people, and the organizations that are most successful at that task are the ones where the system is the star. There is a wonderful example of this in the story of the so-called Eastern Pearl Harbor, of the Second World War. 
During the first nine months of 1942, the United States Navy suffered a catastrophe. German U-boats, operating just off the Atlantic coast and in the Caribbean, were sinking our merchant ships almost at will. U-boat captains marvelled at their good fortune. "Before this sea of light, against this footlight glare of a carefree new world were passing the silhouettes of ships recognizable in every detail and sharp as the outlines in a sales catalogue," one U-boat commander wrote. "All we had to do was press the button."
What made this such a puzzle is that, on the other side of the Atlantic, the British had much less trouble defending their ships against U-boat attacks. The British, furthermore, eagerly passed on to the Americans everything they knew about sonar and depth-charge throwers and the construction of destroyers. And still the Germans managed to paralyze America's coastal zones.
You can imagine what the consultants at McKinsey would have concluded: they would have said that the Navy did not have a talent mind-set, that President Roosevelt needed to recruit and promote top performers into key positions in the Atlantic command. In fact, he had already done that. At the beginning of the war, he had pushed out the solid and unspectacular Admiral Harold R. Stark as Chief of Naval Operations and replaced him with the legendary Ernest Joseph King. "He was a supreme realist with the arrogance of genius," Ladislas Farago writes in "The Tenth Fleet," a history of the Navy's U-boat battles in the Second World War. "He had unbounded faith in himself, in his vast knowledge of naval matters and in the soundness of his ideas. Unlike Stark, who tolerated incompetence all around him, King had no patience with fools."
The Navy had plenty of talent at the top, in other words. What it didn't have was the right kind of organization. As Eliot A. Cohen, a scholar of military strategy at Johns Hopkins, writes in his brilliant book "Military Misfortunes," of the antisubmarine war in the Atlantic:
To wage the antisubmarine war well, analysts had to bring together fragments of information, direction-finding fixes, visual sightings, decrypts, and the "flaming datum" of a U-boat attack—for use by a commander to coordinate the efforts of warships, aircraft, and convoy commanders. Such synthesis had to occur in near "real time"—within hours, even minutes in some cases.
The British excelled at the task because they had a centralized operational system. The controllers moved the British ships around the Atlantic like chess pieces, in order to outsmart U-boat "wolf packs." By contrast, Admiral King believed strongly in a decentralized management structure: he held that managers should never tell their subordinates "'how' as well as what to 'do.'" In today's jargon, we would say he was a believer in "loose-tight" management, of the kind celebrated by the McKinsey consultants Thomas J. Peters and Robert H. Waterman in their 1982 best-seller, "In Search of Excellence." But "loose-tight" doesn't help you find U-boats.
Throughout most of 1942, the Navy kept trying to act smart by relying on technical know-how, and stubbornly refused to take operational lessons from the British. The Navy also lacked the organizational structure necessary to apply the technical knowledge it did have to the field. Only when the Navy set up the Tenth Fleet—a single unit to coördinate all anti-submarine warfare in the Atlantic—did the situation change. In the year and a half before the Tenth Fleet was formed, in May of 1943, the Navy sank thirty-six U-boats. In the six months afterward, it sank seventy-five. "The creation of the Tenth Fleet did not bring more talented individuals into the field of ASW"—anti-submarine warfare—"than had previous organizations," Cohen writes. "What Tenth Fleet did allow, by virtue of its organization and mandate, was for these individuals to become far more effective than previously." The talent myth assumes that people make organizations smart. More often than not, it's the other way around.
5.
There is ample evidence of this principle among America's most successful companies. Southwest Airlines hires very few M.B.A.s, pays its managers modestly, and gives raises according to seniority, not "rank and yank." Yet it is by far the most successful of all United States airlines, because it has created a vastly more efficient organization than its competitors have. At Southwest, the time it takes to get a plane that has just landed ready for takeoff—a key index of productivity—is, on average, twenty minutes, and requires a ground crew of four, and two people at the gate. (At United Airlines, by contrast, turnaround time is closer to thirty-five minutes, and requires a ground crew of twelve and three agents at the gate.)
In the case of the giant retailer Wal-Mart, one of the most critical periods in its history came in 1976, when Sam Walton "unretired," pushing out his handpicked successor, Ron Mayer. Mayer was just over forty. He was ambitious. He was charismatic. He was, in the words of one Walton biographer, "the boy-genius financial officer." But Walton was convinced that Mayer was, as people at McKinsey would say, "differentiating and affirming" in the corporate suite, in defiance of Wal-Mart's inclusive culture. Mayer left, and Wal-Mart survived. After all, Wal-Mart is an organization, not an all-star team. Walton brought in David Glass, late of the Army and Southern Missouri State University, as C.E.O.; the company is now ranked No. 1 on the Fortune 500 list.
Procter & Gamble doesn't have a star system, either. How could it? Would the top M.B.A. graduates of Harvard and Stanford move to Cincinnati to work on detergent when they could make three times as much reinventing the world in Houston? Procter & Gamble isn't glamorous. Its C.E.O. is a lifer—a former Navy officer who began his corporate career as an assistant brand manager for Joy dishwashing liquid—and, if Procter & Gamble's best played Enron's best at Trivial Pursuit, no doubt the team from Houston would win handily. But Procter & Gamble has dominated the consumer-products field for close to a century, because it has a carefully conceived managerial system, and a rigorous marketing methodology that has allowed it to win battles for brands like Crest and Tide decade after decade. In Procter & Gamble's Navy, Admiral Stark would have stayed. But a cross-divisional management committee would have set the Tenth Fleet in place before the war ever started.
6.
Among the most damning facts about Enron, in the end, was something its managers were proudest of. They had what, in McKinsey terminology, is called an "open market" for hiring. In the open-market system—McKinsey's assault on the very idea of a fixed organization—anyone could apply for any job that he or she wanted, and no manager was allowed to hold anyone back. Poaching was encouraged. When an Enron executive named Kevin Hannon started the company's global broadband unit, he launched what he called Project Quick Hire. A hundred top performers from around the company were invited to the Houston Hyatt to hear Hannon give his pitch. Recruiting booths were set up outside the meeting room. "Hannon had his fifty top performers for the broadband unit by the end of the week," Michaels, Handfield-Jones, and Axelrod write, "and his peers had fifty holes to fill." Nobody, not even the consultants who were paid to think about the Enron culture, seemed worried that those fifty holes might disrupt the functioning of the affected departments, that stability in a firm's existing businesses might be a good thing, that the self-fulfillment of Enron's star employees might possibly be in conflict with the best interests of the firm as a whole.
These are the sort of concerns that management consultants ought to raise. But Enron's management consultant was McKinsey, and McKinsey was as much a prisoner of the talent myth as its clients were. In 1998, Enron hired ten Wharton M.B.A.s; that same year, McKinsey hired forty. In 1999, Enron hired twelve from Wharton; McKinsey hired sixty-one. The consultants at McKinsey were preaching at Enron what they believed about themselves. "When we would hire them, it wouldn't just be for a week," one former Enron manager recalls, of the brilliant young men and women from McKinsey who wandered the hallways at the company's headquarters. "It would be for two to four months. They were always around." They were there looking for people who had the talent to think outside the box. It never occurred to them that, if everyone had to think outside the box, maybe it was the box that needed fixing.
Copyright 2002, Malcolm Gladwell
I. The Disappearing Middle
When I was a teenager growing up on Long Island, one of my favorite excursions was a trip to see the great Gilded Age mansions of the North Shore. Those mansions weren't just pieces of architectural history. They were monuments to a bygone social era, one in which the rich could afford the armies of servants needed to maintain a house the size of a European palace. By the time I saw them, of course, that era was long past. Almost none of the Long Island mansions were still private residences. Those that hadn't been turned into museums were occupied by nursing homes or private schools.
For the America I grew up in -- the America of the 1950's and 1960's -- was a middle-class society, both in reality and in feel. The vast income and wealth inequalities of the Gilded Age had disappeared. Yes, of course, there was the poverty of the underclass -- but the conventional wisdom of the time viewed that as a social rather than an economic problem. Yes, of course, some wealthy businessmen and heirs to large fortunes lived far better than the average American. But they weren't rich the way the robber barons who built the mansions had been rich, and there weren't that many of them. The days when plutocrats were a force to be reckoned with in American society, economically or politically, seemed long past.
Daily experience confirmed the sense of a fairly equal society. The economic disparities you were conscious of were quite muted. Highly educated professionals -- middle managers, college teachers, even lawyers -- often claimed that they earned less than unionized blue-collar workers. Those considered very well off lived in split-levels, had a housecleaner come in once a week and took summer vacations in Europe. But they sent their kids to public schools and drove themselves to work, just like everyone else.
But that was long ago. The middle-class America of my youth was another country.
We are now living in a new Gilded Age, as extravagant as the original. Mansions have made a comeback. Back in 1999 this magazine profiled Thierry Despont, the ''eminence of excess,'' an architect who specializes in designing houses for the superrich. His creations typically range from 20,000 to 60,000 square feet; houses at the upper end of his range are not much smaller than the White House. Needless to say, the armies of servants are back, too. So are the yachts. Still, even J.P. Morgan didn't have a Gulfstream.
As the story about Despont suggests, it's not fair to say that the fact of widening inequality in America has gone unreported. Yet glimpses of the lifestyles of the rich and tasteless don't necessarily add up in people's minds to a clear picture of the tectonic shifts that have taken place in the distribution of income and wealth in this country. My sense is that few people are aware of just how much the gap between the very rich and the rest has widened over a relatively short period of time. In fact, even bringing up the subject exposes you to charges of ''class warfare,'' the ''politics of envy'' and so on. And very few people indeed are willing to talk about the profound effects -- economic, social and political -- of that widening gap.
Yet you can't understand what's happening in America today without understanding the extent, causes and consequences of the vast increase in inequality that has taken place over the last three decades, and in particular the astonishing concentration of income and wealth in just a few hands. To make sense of the current wave of corporate scandal, you need to understand how the man in the gray flannel suit has been replaced by the imperial C.E.O. The concentration of income at the top is a key reason that the United States, for all its economic achievements, has more poverty and lower life expectancy than any other major advanced nation. Above all, the growing concentration of wealth has reshaped our political system: it is at the root both of a general shift to the right and of an extreme polarization of our politics.
But before we get to all that, let's take a look at who gets what.
II. The New Gilded Age
The Securities and Exchange Commission hath no fury like a woman scorned. The messy divorce proceedings of Jack Welch, the legendary former C.E.O. of
Is it news that C.E.O.'s of large American corporations make a lot of money? Actually, it is. They were always well paid compared with the average worker, but there is simply no comparison between what executives got a generation ago and what they are paid today.
Over the past 30 years most people have seen only modest salary increases: the average annual salary in America, expressed in 1998 dollars (that is, adjusted for inflation), rose from $32,522 in 1970 to $35,864 in 1999. That's about a 10 percent increase over 29 years -- progress, but not much. Over the same period, however, according to Fortune magazine, the average real annual compensation of the top 100 C.E.O.'s went from $1.3 million -- 39 times the pay of an average worker -- to $37.5 million, more than 1,000 times the pay of ordinary workers.
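A quick back-of-the-envelope check makes the scale of that divergence concrete. The short Python sketch below simply re-derives the ratios from the figures quoted above (Fortune's numbers, in 1998 dollars); it is illustrative only, and the published 39-to-1 figure for 1970 was presumably computed against a slightly different measure of average worker pay than the salary series used here.

    avg_salary_1970, avg_salary_1999 = 32_522, 35_864      # average U.S. salary, 1998 dollars
    ceo_pay_1970, ceo_pay_1999 = 1_300_000, 37_500_000     # average pay of the top 100 C.E.O.'s

    salary_growth = avg_salary_1999 / avg_salary_1970 - 1  # roughly 0.10, i.e. about 10 percent
    ratio_1970 = ceo_pay_1970 / avg_salary_1970            # about 40 times an average salary
    ratio_1999 = ceo_pay_1999 / avg_salary_1999            # about 1,046 times -- "more than 1,000"

    print(f"{salary_growth:.0%}  {ratio_1970:.0f}x  {ratio_1999:.0f}x")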
The explosion in C.E.O. pay over the past 30 years is an amazing story in its own right, and an important one. But it is only the most spectacular indicator of a broader story, the reconcentration of income and wealth in the U.S. The rich have always been different from you and me, but they are far more different now than they were not long ago -- indeed, they are as different now as they were when F. Scott Fitzgerald made his famous remark.
That's a controversial statement, though it shouldn't be. For at least the past 15 years it has been hard to deny the evidence for growing inequality in the United States. Census data clearly show a rising share of income going to the top 20 percent of families, and within that top 20 percent to the top 5 percent, with a declining share going to families in the middle. Nonetheless, denial of that evidence is a sizable, well-financed industry. Conservative think tanks have produced scores of studies that try to discredit the data, the methodology and, not least, the motives of those who report the obvious. Studies that appear to refute claims of increasing inequality receive prominent endorsements on editorial pages and are eagerly cited by right-leaning government officials. Four years ago Alan Greenspan (why did anyone ever think that he was nonpartisan?) gave a keynote speech at the Federal Reserve's annual Jackson Hole conference that amounted to an attempt to deny that there has been any real increase in inequality in America.
The concerted effort to deny that inequality is increasing is itself a symptom of the growing influence of our emerging plutocracy (more on this later). So is the fierce defense of the backup position, that inequality doesn't matter -- or maybe even that, to use Martha Stewart's signature phrase, it's a good thing. Meanwhile, politically motivated smoke screens aside, the reality of increasing inequality is not in doubt. In fact, the census data understate the case, because for technical reasons those data tend to undercount very high incomes -- for example, it's unlikely that they reflect the explosion in C.E.O. compensation. And other evidence makes it clear not only that inequality is increasing but that the action gets bigger the closer you get to the top. That is, it's not simply that the top 20 percent of families have had bigger percentage gains than families near the middle: the top 5 percent have done better than the next 15, the top 1 percent better than the next 4, and so on up to Bill Gates.
Studies that try to do a better job of tracking high incomes have found startling results. For example, a recent study by the nonpartisan Congressional Budget Office used income tax data and other sources to improve on the census estimates. The C.B.O. study found that between 1979 and 1997, the after-tax incomes of the top 1 percent of families rose 157 percent, compared with only a 10 percent gain for families near the middle of the income distribution. Even more startling results come from a new study by Thomas Piketty, at the French research institute Cepremap, and Emmanuel Saez, who is now at the University of California at Berkeley. Using income tax data, Piketty and Saez have produced estimates of the incomes of the well-to-do, the rich and the very rich back to 1913.
The first point you learn from these new estimates is that the middle-class America of my youth is best thought of not as the normal state of our society, but as an interregnum between Gilded Ages. America before 1930 was a society in which a small number of very rich people controlled a large share of the nation's wealth. We became a middle-class society only after the concentration of income at the top dropped sharply during the New Deal, and especially during World War II. The economic historians Claudia Goldin and Robert Margo have dubbed the narrowing of income gaps during those years the Great Compression. Incomes then stayed fairly equally distributed until the 1970's: the rapid rise in incomes during the first postwar generation was very evenly spread across the population.
Since the 1970's, however, income gaps have been rapidly widening. Piketty and Saez confirm what I suspected: by most measures we are, in fact, back to the days of ''The Great Gatsby.'' After 30 years in which the income shares of the top 10 percent of taxpayers, the top 1 percent and so on were far below their levels in the 1920's, all are very nearly back where they were.
And the big winners are the very, very rich. One ploy often used to play down growing inequality is to rely on rather coarse statistical breakdowns -- dividing the population into five ''quintiles,'' each containing 20 percent of families, or at most 10 ''deciles.'' Indeed, Greenspan's speech at Jackson Hole relied mainly on decile data. From there it's a short step to denying that we're really talking about the rich at all. For example, a conservative commentator might concede, grudgingly, that there has been some increase in the share of national income going to the top 10 percent of taxpayers, but then point out that anyone with an income over $81,000 is in that top 10 percent. So we're just talking about shifts within the middle class, right?
Wrong: the top 10 percent contains a lot of people whom we would still consider middle class, but they weren't the big winners. Most of the gains in the share of the top 10 percent of taxpayers over the past 30 years were actually gains to the top 1 percent, rather than the next 9 percent. In 1998 the top 1 percent started at $230,000. In turn, 60 percent of the gains of that top 1 percent went to the top 0.1 percent, those with incomes of more than $790,000. And almost half of those gains went to a mere 13,000 taxpayers, the top 0.01 percent, who had an income of at least $3.6 million and an average income of $17 million.
A stickler for detail might point out that the Piketty-Saez estimates end in 1998 and that the C.B.O. numbers end a year earlier. Have the trends shown in the data reversed? Almost surely not. In fact, all indications are that the explosion of incomes at the top continued through 2000. Since then the plunge in stock prices must have put some crimp in high incomes -- but census data show inequality continuing to increase in 2001, mainly because of the severe effects of the recession on the working poor and near poor. When the recession ends, we can be sure that we will find ourselves a society in which income inequality is even higher than it was in the late 90's.
So claims that we've entered a second Gilded Age aren't exaggerated. In America's middle-class era, the mansion-building, yacht-owning classes had pretty much disappeared. According to Piketty and Saez, in 1970 the top 0.01 percent of taxpayers had 0.7 percent of total income -- that is, they earned ''only'' 70 times as much as the average, not enough to buy or maintain a mega-residence. But in 1998 the top 0.01 percent received more than 3 percent of all income. That meant that the 13,000 richest families in America had almost as much income as the 20 million poorest households; those 13,000 families had incomes 300 times that of average families.
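The arithmetic behind those multiples is simple and worth making explicit, since it recurs throughout this section: a group's average income, expressed as a multiple of the overall average, is just its share of total income divided by its share of the population. A minimal Python sketch, using the approximate Piketty-Saez shares quoted above as its inputs:

    def multiple_of_average(income_share, population_share):
        # A group's average income relative to the overall average income.
        return income_share / population_share

    top_slice = 0.0001                               # the top 0.01 percent of taxpayers
    print(multiple_of_average(0.007, top_slice))     # 1970: 0.7 percent of income -> 70 times the average
    print(multiple_of_average(0.03, top_slice))      # 1998: about 3 percent of income -> about 300 times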
And let me repeat: this transformation has happened very quickly, and it is still going on. You might think that 1987, the year Tom Wolfe published his novel ''The Bonfire of the Vanities'' and Oliver Stone released his movie ''Wall Street,'' marked the high tide of America's new money culture. But in 1987 the top 0.01 percent earned only about 40 percent of what they do today, and top executives less than a fifth as much. The America of ''Wall Street'' and ''The Bonfire of the Vanities'' was positively egalitarian compared with the country we live in today.
III. Undoing the New Deal
In the middle of the 1980's, as economists became aware that something important was happening to the distribution of income in America, they formulated three main hypotheses about its causes.
The ''globalization'' hypothesis tied America's changing income distribution to the growth of world trade, and especially the growing imports of manufactured goods from the third world. Its basic message was that blue-collar workers -- the sort of people who in my youth often made as much money as college-educated middle managers -- were losing ground in the face of competition from low-wage workers in Asia. A result was stagnation or decline in the wages of ordinary people, with a growing share of national income going to the highly educated.
A second hypothesis, ''skill-biased technological change,'' situated the cause of growing inequality not in foreign trade but in domestic innovation. The torrid pace of progress in information technology, so the story went, had increased the demand for the highly skilled and educated. And so the income distribution increasingly favored brains rather than brawn.
Finally, the ''superstar'' hypothesis -- named by the Chicago economist Sherwin Rosen -- offered a variant on the technological story. It argued that modern technologies of communication often turn competition into a tournament in which the winner is richly rewarded, while the runners-up get far less. The classic example -- which gives the theory its name -- is the entertainment business. As Rosen pointed out, in bygone days there were hundreds of comedians making a modest living at live shows in the borscht belt and other places. Now they are mostly gone; what is left is a handful of superstar TV comedians.
The debates among these hypotheses -- particularly the debate between those who attributed growing inequality to globalization and those who attributed it to technology -- were many and bitter. I was a participant in those debates myself. But I won't dwell on them, because in the last few years there has been a growing sense among economists that none of these hypotheses work.
I don't mean to say that there was nothing to these stories. Yet as more evidence has accumulated, each of the hypotheses has seemed increasingly inadequate. Globalization can explain part of the relative decline in blue-collar wages, but it can't explain the 2,500 percent rise in C.E.O. incomes. Technology may explain why the salary premium associated with a college education has risen, but it's hard to match up with the huge increase in inequality among the college-educated, with little progress for many but gigantic gains at the top. The superstar theory works for Jay Leno, but not for the thousands of people who have become awesomely rich without going on TV.
The Great Compression -- the substantial reduction in inequality during the New Deal and the Second World War -- also seems hard to understand in terms of the usual theories. During World War II Franklin Roosevelt used government control over wages to compress wage gaps. But if the middle-class society that emerged from the war was an artificial creation, why did it persist for another 30 years?
Some -- by no means all -- economists trying to understand growing inequality have begun to take seriously a hypothesis that would have been considered irredeemably fuzzy-minded not long ago. This view stresses the role of social norms in setting limits to inequality. According to this view, the New Deal had a more profound impact on American society than even its most ardent admirers have suggested: it imposed norms of relative equality in pay that persisted for more than 30 years, creating the broadly middle-class society we came to take for granted. But those norms began to unravel in the 1970's and have done so at an accelerating pace.
Exhibit A for this view is the story of executive compensation. In the 1960's, America's great corporations behaved more like socialist republics than like cutthroat capitalist enterprises, and top executives behaved more like public-spirited bureaucrats than like captains of industry. I'm not exaggerating. Consider the description of executive behavior offered by John Kenneth Galbraith in his 1967 book, ''The New Industrial State'': ''Management does not go out ruthlessly to reward itself -- a sound management is expected to exercise restraint.'' Managerial self-dealing was a thing of the past: ''With the power of decision goes opportunity for making money. . . . Were everyone to seek to do so . . . the corporation would be a chaos of competitive avarice. But these are not the sort of thing that a good company man does; a remarkably effective code bans such behavior. Group decision-making insures, moreover, that almost everyone's actions and even thoughts are known to others. This acts to enforce the code and, more than incidentally, a high standard of personal honesty as well.''
Thirty-five years on, a cover article in Fortune is titled ''You Bought. They Sold.'' ''All over corporate America,'' reads the blurb, ''top execs were cashing in stocks even as their companies were tanking. Who was left holding the bag? You.'' As I said, we've become a different country.
Let's leave actual malfeasance on one side for a moment, and ask how the relatively modest salaries of top executives 30 years ago became the gigantic pay packages of today. There are two main stories, both of which emphasize changing norms rather than pure economics. The more optimistic story draws an analogy between the explosion of C.E.O. pay and the explosion of baseball salaries with the introduction of free agency. According to this story, highly paid C.E.O.'s really are worth it, because having the right man in that job makes a huge difference. The more pessimistic view -- which I find more plausible -- is that competition for talent is a minor factor. Yes, a great executive can make a big difference -- but those huge pay packages have been going as often as not to executives whose performance is mediocre at best. The key reason executives are paid so much now is that they appoint the members of the corporate board that determines their compensation and control many of the perks that board members count on. So it's not the invisible hand of the market that leads to those monumental executive incomes; it's the invisible handshake in the boardroom.
But then why weren't executives paid lavishly 30 years ago? Again, it's a matter of corporate culture. For a generation after World War II, fear of outrage kept executive salaries in check. Now the outrage is gone. That is, the explosion of executive pay represents a social change rather than the purely economic forces of supply and demand. We should think of it not as a market trend like the rising value of waterfront property, but as something more like the sexual revolution of the 1960's -- a relaxation of old strictures, a new permissiveness, but in this case the permissiveness is financial rather than sexual. Sure enough, John Kenneth Galbraith described the honest executive of 1967 as being one who ''eschews the lovely, available and even naked woman by whom he is intimately surrounded.'' By the end of the 1990's, the executive motto might as well have been ''If it feels good, do it.''
How did this change in corporate culture happen? Economists and management theorists are only beginning to explore that question, but it's easy to suggest a few factors. One was the changing structure of financial markets. In his new book, ''Searching for a Corporate Savior,'' Rakesh Khurana of Harvard Business School suggests that during the 1980's and 1990's, ''managerial capitalism'' -- the world of the man in the gray flannel suit -- was replaced by ''investor capitalism.'' Institutional investors weren't willing to let a C.E.O. choose his own successor from inside the corporation; they wanted heroic leaders, often outsiders, and were willing to pay immense sums to get them. The subtitle of Khurana's book, by the way, is ''The Irrational Quest for Charismatic C.E.O.'s.''
But fashionable management theorists didn't think it was irrational. Since the 1980's there has been ever more emphasis on the importance of ''leadership'' -- meaning personal, charismatic leadership. When Lee Iacocca of Chrysler became a business celebrity in the early 1980's, he was practically alone: Khurana reports that in 1980 only one issue of Business Week featured a C.E.O. on its cover. By 1999 the number was up to 19. And once it was considered normal, even necessary, for a C.E.O. to be famous, it also became easier to make him rich.
Economists also did their bit to legitimize previously unthinkable levels of executive pay. During the 1980's and 1990's a torrent of academic papers -- popularized in business magazines and incorporated into consultants' recommendations -- argued that Gordon Gekko was right: greed is good; greed works. In order to get the best performance out of executives, these papers argued, it was necessary to align their interests with those of stockholders. And the way to do that was with large grants of stock or stock options.
It's hard to escape the suspicion that these new intellectual justifications for soaring executive pay were as much effect as cause. I'm not suggesting that management theorists and economists were personally corrupt. It would have been a subtle, unconscious process: the ideas that were taken up by business schools, that led to nice speaking and consulting fees, tended to be the ones that ratified an existing trend, and thereby gave it legitimacy.
What economists like Piketty and Saez are now suggesting is that the story of executive compensation is representative of a broader story. Much more than economists and free-market advocates like to imagine, wages -- particularly at the top -- are determined by social norms. What happened during the 1930's and 1940's was that new norms of equality were established, largely through the political process. What happened in the 1980's and 1990's was that those norms unraveled, replaced by an ethos of ''anything goes.'' And a result was an explosion of income at the top of the scale.
IV. The Price of Inequality
It was one of those revealing moments. Responding to an e-mail message from a Canadian viewer, Robert Novak of ''Crossfire'' delivered a little speech: ''Marg, like most Canadians, you're ill informed and wrong. The U.S. has the longest standard of living -- longest life expectancy of any country in the world, including Canada. That's the truth.''
But it was Novak who had his facts wrong. Canadians can expect to live about two years longer than Americans. In fact, life expectancy in the U.S. is well below that in Canada, Japan and every major nation in Western Europe. On average, we can expect lives a bit shorter than those of Greeks, a bit longer than those of Portuguese. Male life expectancy is lower in the U.S. than it is in Costa Rica.
Still, you can understand why Novak assumed that we were No. 1. After all, we really are the richest major nation, with real G.D.P. per capita about 20 percent higher than Canada's. And it has been an article of faith in this country that a rising tide lifts all boats. Doesn't our high and rising national wealth translate into a high standard of living -- including good medical care -- for all Americans?
Well, no. Although America has higher per capita income than other advanced countries, it turns out that that's mainly because our rich are much richer. And here's a radical thought: if the rich get more, that leaves less for everyone else.
That statement -- which is simply a matter of arithmetic -- is guaranteed to bring accusations of ''class warfare.'' If the accuser gets more specific, he'll probably offer two reasons that it's foolish to make a fuss over the high incomes of a few people at the top of the income distribution. First, he'll tell you that what the elite get may look like a lot of money, but it's still a small share of the total -- that is, when all is said and done the rich aren't getting that big a piece of the pie. Second, he'll tell you that trying to do anything to reduce incomes at the top will hurt, not help, people further down the distribution, because attempts to redistribute income damage incentives.
These arguments for lack of concern are plausible. And they were entirely correct, once upon a time -- namely, back when we had a middle-class society. But there's a lot less truth to them now.
First, the share of the rich in total income is no longer trivial. These days the top 1 percent of families receive about 16 percent of total pretax income, and have about 14 percent of after-tax income. That share has roughly doubled over the past 30 years, and is now about as large as the share of the bottom 40 percent of the population. That's a big shift of income to the top; as a matter of pure arithmetic, it must mean that the incomes of less well off families grew considerably more slowly than average income. And they did. Adjusting for inflation, average family income -- total income divided by the number of families -- grew 28 percent from 1979 to 1997. But median family income -- the income of a family in the middle of the distribution, a better indicator of how typical American families are doing -- grew only 10 percent. And the incomes of the bottom fifth of families actually fell slightly.
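To see why that piece of arithmetic works the way it does, consider a deliberately tiny, invented example -- five hypothetical families, not census data -- in which only the top income surges. The average rises briskly while the median barely moves, the same pattern (though not the same numbers) as the 1979-97 figures above.

    import statistics

    # Hypothetical incomes for five families in two years; only the top family's income surges.
    year_one = [20_000, 35_000, 50_000, 70_000, 125_000]
    year_two = [20_000, 36_000, 55_000, 77_000, 197_000]

    for label, incomes in (("year one", year_one), ("year two", year_two)):
        print(label, "average:", statistics.mean(incomes), "median:", statistics.median(incomes))

    # Average: 60,000 -> 77,000 (up about 28 percent); median: 50,000 -> 55,000 (up 10 percent).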
Let me belabor this point for a bit. We pride ourselves, with considerable justification, on our record of economic growth. But over the last few decades it's remarkable how little of that growth has trickled down to ordinary families. Median family income has risen only about 0.5 percent per year -- and as far as we can tell from somewhat unreliable data, just about all of that increase was due to wives working longer hours, with little or no gain in real wages. Furthermore, numbers about income don't reflect the growing riskiness of life for ordinary workers. In the days when
Still, many people will say that while the U.S. economic system may generate a lot of inequality, it also generates much higher incomes than any alternative, so that everyone is better off. That was the moral Business Week tried to convey in its recent special issue with ''25 Ideas for a Changing World.'' One of those ideas was ''the rich get richer, and that's O.K.'' High incomes at the top, the conventional wisdom declares, are the result of a free-market system that provides huge incentives for performance. And the system delivers that performance, which means that wealth at the top doesn't come at the expense of the rest of us.
A skeptic might point out that the explosion in executive compensation seems at best loosely related to actual performance. Jack Welch was one of the 10 highest-paid executives in the United States in 2000, and you could argue that he earned it. But did Dennis Kozlowski of
But can we produce any direct evidence about the effects of inequality? We can't rerun our own history and ask what would have happened if the social norms of middle-class America had continued to limit incomes at the top, and if government policy had leaned against rising inequality instead of reinforcing it, which is what actually happened. But we can compare ourselves with other advanced countries. And the results are somewhat surprising.
Many Americans assume that because we are the richest country in the world, with real G.D.P. per capita higher than that of other major advanced countries, Americans must be better off across the board -- that it's not just our rich who are richer than their counterparts abroad, but that the typical American family is much better off than the typical family elsewhere, and that even our poor are well off by foreign standards.
But it's not true. Let me use the example of Sweden, that great conservative bete noire.
A few months ago the conservative cyberpundit Glenn Reynolds made a splash when he pointed out that Sweden's G.D.P. per capita is roughly comparable with that of Mississippi -- see, those foolish believers in the welfare state have impoverished themselves! Presumably he assumed that this means that the typical Swede is as poor as the typical resident of Mississippi, and therefore much worse off than the typical American.
But life expectancy in Sweden is about three years higher than that of the U.S. Infant mortality is half the U.S. level, and less than a third the rate in Mississippi. Functional illiteracy is much less common than in the U.S.
How is this possible? One answer is that G.D.P. per capita is in some ways a misleading measure. Swedes take longer vacations than Americans, so they work fewer hours per year. That's a choice, not a failure of economic performance. Swedish real G.D.P. per hour worked is 16 percent lower than in the United States, which makes Swedish productivity about the same as Canada's.
But the main point is that though Sweden may have lower average income than the United States, that's mainly because our rich are so much richer. The median Swedish family has a standard of living roughly comparable with that of the median U.S. family: wages are if anything higher in Sweden, and a higher tax burden is offset by public provision of health care and generally better public services. And as you move further down the income distribution, Swedish living standards are way ahead of those in the U.S. Swedish families with children that are at the 10th percentile -- poorer than 90 percent of the population -- have incomes 60 percent higher than their U.S. counterparts. And very few people in Sweden experience the deep poverty that is all too common in the United States. One measure: in 1994 only 6 percent of Swedes lived on less than $11 per day, compared with 14 percent in the U.S.
The moral of this comparison is that even if you think that America's high levels of inequality are the price of our high level of national income, it's not at all clear that this price is worth paying. The reason conservatives engage in bouts of Sweden-bashing is that they want to convince us that there is no tradeoff between economic efficiency and equity -- that if you try to take from the rich and give to the poor, you actually make everyone worse off. But the comparison between the U.S. and other advanced countries doesn't support this conclusion at all. Yes, we are the richest major nation. But because so much of our national income is concentrated in relatively few hands, large numbers of Americans are worse off economically than their counterparts in other advanced countries.
And we might even offer a challenge from the other side: inequality in the United States has arguably reached levels where it is counterproductive. That is, you can make a case that our society would be richer if its richest members didn't get quite so much.
I could make this argument on historical grounds. The most impressive economic growth in U.S. history coincided with the middle-class interregnum, the post-World War II generation, when incomes were most evenly distributed. But let's focus on a specific case, the extraordinary pay packages of today's top executives. Are these good for the economy?
Until recently it was almost unchallenged conventional wisdom that, whatever else you might say, the new imperial C.E.O.'s had delivered results that dwarfed the expense of their compensation. But now that the stock bubble has burst, it has become increasingly clear that there was a price to those big pay packages, after all. In fact, the price paid by shareholders and society at large may have been many times larger than the amount actually paid to the executives.
It's easy to get boggled by the details of corporate scandal -- insider loans, stock options, special-purpose entities, mark-to-market, round-tripping. But there's a simple reason that the details are so complicated. All of these schemes were designed to benefit corporate insiders -- to inflate the pay of the C.E.O. and his inner circle. That is, they were all about the ''chaos of competitive avarice'' that, according to John Kenneth Galbraith, had been ruled out in the corporation of the 1960's. But while all restraint has vanished within the American corporation, the outside world -- including stockholders -- is still prudish, and open looting by executives is still not acceptable. So the looting has to be camouflaged, taking place through complicated schemes that can be rationalized to outsiders as clever corporate strategies.
Economists who study crime tell us that crime is inefficient -- that is, the costs of crime to the economy are much larger than the amount stolen. Crime, and the fear of crime, divert resources away from productive uses: criminals spend their time stealing rather than producing, and potential victims spend time and money trying to protect their property. Also, the things people do to avoid becoming victims -- like avoiding dangerous districts -- have a cost even if they succeed in averting an actual crime.
The same holds true of corporate malfeasance, whether or not it actually involves breaking the law. Executives who devote their time to creating innovative ways to divert shareholder money into their own pockets probably aren't running the real business very well (think
The argument for a system in which some people get very rich has always been that the lure of wealth provides powerful incentives. But the question is, incentives to do what? As we learn more about what has actually been going on in corporate America, it's becoming less and less clear whether those incentives have actually made executives work on behalf of the rest of us.
V. Inequality and Politics
In September the Senate debated a proposed measure that would impose a one-time capital gains tax on Americans who renounce their citizenship in order to avoid paying U.S. taxes. Senator Phil Gramm was not pleased, declaring that the proposal was ''right out of Nazi Germany.'' Pretty strong language, but no stronger than the metaphor Daniel Mitchell of the Heritage Foundation used, in an op-ed article in The Washington Times, to describe a bill designed to prevent corporations from rechartering abroad for tax purposes: Mitchell described this legislation as the ''Dred Scott tax bill,'' referring to the infamous 1857 Supreme Court ruling that required free states to return escaped slaves.
Twenty years ago, would a prominent senator have likened those who want wealthy people to pay taxes to Nazis? Would a member of a think tank with close ties to the administration have drawn a parallel between corporate taxation and slavery? I don't think so. The remarks by Gramm and Mitchell, while stronger than usual, were indicators of two huge changes in American politics. One is the growing polarization of our politics -- our politicians are less and less inclined to offer even the appearance of moderation. The other is the growing tendency of policy and policy makers to cater to the interests of the wealthy. And I mean the wealthy, not the merely well-off: only someone with a net worth of at least several million dollars is likely to find it worthwhile to become a tax exile.
You don't need a political scientist to tell you that modern American politics is bitterly polarized. But wasn't it always thus? No, it wasn't. From World War II until the 1970's -- the same era during which income inequality was historically low -- political partisanship was much more muted than it is today. That's not just a subjective assessment. My Princeton political science colleagues Nolan McCarty and Howard Rosenthal, together with Keith Poole at the University of Houston, have done a statistical analysis showing that the voting behavior of a congressman is much better predicted by his party affiliation today than it was 25 years ago. In fact, the division between the parties is sharper now than it has been since the 1920's.
What are the parties divided about? The answer is simple: economics. McCarty, Rosenthal and Poole write that ''voting in Congress is highly ideological -- one-dimensional left/right, liberal versus conservative.'' It may sound simplistic to describe Democrats as the party that wants to tax the rich and help the poor, and Republicans as the party that wants to keep taxes and social spending as low as possible. And during the era of middle-class America that would indeed have been simplistic: politics wasn't defined by economic issues. But that was a different country; as McCarty, Rosenthal and Poole put it, ''If income and wealth are distributed in a fairly equitable way, little is to be gained for politicians to organize politics around nonexistent conflicts.'' Now the conflicts are real, and our politics is organized around them. In other words, the growing inequality of our incomes probably lies behind the growing divisiveness of our politics.
But the politics of rich and poor hasn't played out the way you might think. Since the incomes of America's wealthy have soared while ordinary families have seen at best small gains, you might have expected politicians to seek votes by proposing to soak the rich. In fact, however, the polarization of politics has occurred because the Republicans have moved to the right, not because the Democrats have moved to the left. And actual economic policy has moved steadily in favor of the wealthy. The major tax cuts of the past 25 years, the Reagan cuts in the 1980's and the recent Bush cuts, were both heavily tilted toward the very well off. (Despite obfuscations, it remains true that more than half the Bush tax cut will eventually go to the top 1 percent of families.) The major tax increase over that period, the increase in payroll taxes in the 1980's, fell most heavily on working-class families.
The most remarkable example of how politics has shifted in favor of the wealthy -- an example that helps us understand why economic policy has reinforced, not countered, the movement toward greater inequality -- is the drive to repeal the estate tax. The estate tax is, overwhelmingly, a tax on the wealthy. In 1999, only the top 2 percent of estates paid any tax at all, and half the estate tax was paid by only 3,300 estates, 0.16 percent of the total, with a minimum value of $5 million and an average value of $17 million. A quarter of the tax was paid by just 467 estates worth more than $20 million. Tales of family farms and businesses broken up to pay the estate tax are basically rural legends; hardly any real examples have been found, despite diligent searching.
You might have thought that a tax that falls on so few people yet yields a significant amount of revenue would be politically popular; you certainly wouldn't expect widespread opposition. Moreover, there has long been an argument that the estate tax promotes democratic values, precisely because it limits the ability of the wealthy to form dynasties. So why has there been a powerful political drive to repeal the estate tax, and why was such a repeal a centerpiece of the Bush tax cut?
There is an economic argument for repealing the estate tax, but it's hard to believe that many people take it seriously. More significant for members of Congress, surely, is the question of who would benefit from repeal: while those who will actually benefit from estate tax repeal are few in number, they have a lot of money and control even more (corporate C.E.O.'s can now count on leaving taxable estates behind). That is, they are the sort of people who command the attention of politicians in search of campaign funds.
But it's not just about campaign contributions: much of the general public has been convinced that the estate tax is a bad thing. If you try talking about the tax to a group of moderately prosperous retirees, you get some interesting reactions. They refer to it as the ''death tax''; many of them believe that their estates will face punitive taxation, even though most of them will pay little or nothing; they are convinced that small businesses and family farms bear the brunt of the tax.
These misconceptions don't arise by accident. They have, instead, been deliberately promoted. For example, a Heritage Foundation document titled ''Time to Repeal Federal Death Taxes: The Nightmare of the American Dream'' emphasizes stories that rarely, if ever, happen in real life: ''Small-business owners, particularly minority owners, suffer anxious moments wondering whether the businesses they hope to hand down to their children will be destroyed by the death tax bill, . . . Women whose children are grown struggle to find ways to re-enter the work force without upsetting the family's estate tax avoidance plan.'' And who finances the Heritage Foundation? Why, foundations created by wealthy families, of course.
The point is that it is no accident that strongly conservative views, views that militate against taxes on the rich, have spread even as the rich get richer compared with the rest of us: in addition to directly buying influence, money can be used to shape public perceptions. The liberal group People for the American Way's report on how conservative foundations have deployed vast sums to support think tanks, friendly media and other institutions that promote right-wing causes is titled ''Buying a Movement.''
Not to put too fine a point on it: as the rich get richer, they can buy a lot of things besides goods and services. Money buys political influence; used cleverly, it also buys intellectual influence. A result is that growing income disparities in the United States, far from leading to demands to soak the rich, have been accompanied by a growing movement to let them keep more of their earnings and to pass their wealth on to their children.
This obviously raises the possibility of a self-reinforcing process. As the gap between the rich and the rest of the population grows, economic policy increasingly caters to the interests of the elite, while public services for the population at large -- above all, public education -- are starved of resources. As policy increasingly favors the interests of the rich and neglects the interests of the general population, income disparities grow even wider.
VI. Plutocracy?
In 1924, the mansions of Long Island's North Shore were still in their full glory, as was the political power of the class that owned them. When Gov. Al Smith of New York proposed building a system of parks on Long Island, the mansion owners were bitterly opposed. One baron -- Horace Havemeyer, the ''sultan of sugar'' -- warned that North Shore towns would be ''overrun with rabble from the city.'' ''Rabble?'' Smith said. ''That's me you're talking about.'' In the end New Yorkers got their parks, but it was close: the interests of a few hundred wealthy families nearly prevailed over those of New York City's middle class.
America in the 1920's wasn't a feudal society. But it was a nation in which vast privilege -- often inherited privilege -- stood in contrast to vast misery. It was also a nation in which the government, more often than not, served the interests of the privileged and ignored the aspirations of ordinary people.
Those days are past -- or are they? Income inequality in America has now returned to the levels of the 1920's. Inherited wealth doesn't yet play a big part in our society, but given time -- and the repeal of the estate tax -- we will grow ourselves a hereditary elite just as set apart from the concerns of ordinary Americans as old Horace Havemeyer. And the new elite, like the old, will have enormous political power.
Kevin Phillips concludes his book ''Wealth and Democracy'' with a grim warning: ''Either democracy must be renewed, with politics brought back to life, or wealth is likely to cement a new and less democratic regime -- plutocracy by some other name.'' It's a pretty extreme line, but we live in extreme times. Even if the forms of democracy remain, they may become meaningless. It's all too easy to see how we may become a country in which the big rewards are reserved for people with the right connections; in which ordinary people see little hope of advancement; in which political involvement seems pointless, because in the end the interests of the elite always get served.
Am I being too pessimistic? Even my liberal friends tell me not to worry, that our system has great resilience, that the center will hold. I hope they're right, but they may be looking in the rearview mirror. Our optimism about America, our belief that in the end our nation always finds its way, comes from the past -- a past in which we were a middle-class society. But that was another country.
Paul Krugman is a Times columnist and a professor at Princeton.