ApostateAbe · Veteran Member
Joined: Sep 19, 2002 · Messages: 1,299 · Location: Colorado, USA
Basic Beliefs: Infotheist. I believe the gods to be mere information.
I read The Bell Curve about six years ago, which was the start of my long personal study of race and intelligence. One of its shocking assertions was that academics who study intelligence largely agreed with the authors, Herrnstein and Murray, about the heritability of the race-IQ gap, but that they seldom made their opinions known to the public for fear of the taboo. One of my main interests was defending the theory of evolution from creationism, but that was never a fair fight, as creationism poses no serious threat to the science, not even politically. These authors claimed that a severe threat to another science came from another political camp, and that researchers were being bullied into silence. It would be as though the young-Earth creationists had won, turning the theory of evolution into a disgraceful taboo, studied only by obscure scientists with an interest in forbidden subjects. I researched the question for years afterward, and I found the assertion to be largely correct, with many other authors attesting to the same point, even those who disagreed with these academics. I recently reread, and have reproduced below, this section from the introduction of The Bell Curve, titled "Intelligence Besieged" (along with part of the next section). It provides a good introductory education on the heavily strained relationship between the field of intelligence research and the public, explaining why so much of the public is desperately out of touch with the science, often believing it to be no more than useless pseudoscience.
INTELLIGENCE BESIEGED
Then came the 1960s, and a new controversy about intelligence tests
that continues to this day. It arose not from new findings but from a new
outlook on public policy. Beginning with the rise of powerful social democratic
and socialist movements after World War I and accelerating
across the decades until the 1960s, a fundamental shift was taking place
in the received wisdom regarding equality. This was most evident in the
political arena, where the civil rights movement and then the War on
Poverty raised Americans' consciousness about the nature of the inequalities
in American society. But the changes in outlook ran deeper
and broader than politics. Assumptions about the very origins of social
problems changed profoundly. Nowhere was the shift more pervasive
than in the field of psychology.
Psychometricians of the 1930s had debated whether intelligence is
almost entirely produced by genes or whether the environment also
plays a role. By the 1960s and 1970s the point of contention had shifted
dramatically. It had somehow become controversial to claim, especially
in public, that genes had any effect at all on intelligence. Ironically, the
evidence for genetic factors in intelligence had greatly strengthened
during the very period when the terms of the debate were moving in the
other direction.
In the psychological laboratory, there was a similar shift. Psychological
experimenters early in the century were, if anything, more likely to
concentrate on the inborn patterns of human and animal behavior than
on how the learning process could change behavior.18 But from the
1930s to the 1960s, the leading behaviorists, as they were called, and
their students and disciples were almost all specialists in learning theory.
They filled the technical journals with the results of learning experiments
on rats and pigeons, the tacit implication being that genetic
endowment mattered so little that we could ignore the differences
among species, let alone among human individuals, and still discover
enough about the learning process to make it useful and relevant to
human concerns.19 There are, indeed, aspects of the learning process
that cross the lines between species, but there are also enormous differences,
and these differences were sometimes ignored or minimized when
psychologists explained their findings to the lay public. B. F. Skinner, at
Harvard University, more than any other of the leading behaviorists,
broke out of the academic world into public attention with books that
applied the findings of laboratory research on animals to human society
at large.20
To those who held the behaviorist view, human potential was almost
perfectly malleable, shaped by the environment. The causes of human
deficiencies in intelligence--or parenting, or social behavior, or work
behavior--lay outside the individual. They were caused by flaws in society.
Sometimes capitalism was blamed, sometimes an uncaring or incompetent
government. Further, the causes of these deficiencies could be
fixed by the right public policies--redistribution of wealth, better education,
better housing and medical care. Once these environmental causes
were removed, the deficiencies should vanish as well, it was argued.
The contrary notion--that individual differences could not easily be
diminished by government intervention--collided head-on with the
enthusiasm for egalitarianism, which itself collided head-on with a half-century
of IQ data indicating that differences in intelligence are intractable
and significantly heritable and that the average IQ of various
socioeconomic and ethnic groups differs.
In 1969, Arthur Jensen, an educational psychologist and expert on
testing from the University of California at Berkeley, put a match to this
volatile mix of science and ideology with an article in the Harvard Educational
Review.21 Asked by the Review's editors to consider why compensatory
and remedial education programs begun with such high hopes
during the War on Poverty had yielded such disappointing results,
Jensen concluded that the programs were bound to have little success
because they were aimed at populations of youngsters with relatively
low IQs, and success in school depended to a considerable degree on IQ.
IQ had a large heritable component, Jensen also noted. The article further
disclosed that the youngsters in the targeted populations were disproportionately
black and that historically blacks as a population had
exhibited average IQs substantially below those of whites.
The reaction to Jensen's article was immediate and violent. From
1969 through the mid-1970s, dozens of books and hundreds of articles
appeared denouncing the use of IQ tests and arguing that mental abilities
are determined by environment, with the genes playing a minor
role and race none at all. Jensen's name became synonymous with a constellation
of hateful ways of thinking. "It perhaps is impossible to exaggerate
the importance of the Jensen disgrace," wrote Jerry Hirsch, a
psychologist specializing in the genetics of animal behavior who was
among Jensen's more vehement critics. "It has permeated both science
and the universities and hoodwinked large segments of government and
society. Like Vietnam and Watergate, it is a contemporary symptom of
serious affliction."22 The title of Hirsch's article was "The Bankruptcy
of 'Science' Without Scholarship." During the first few years after the
Harvard Educational Review article was published, Jensen could appear
in no public forum in the United States without triggering something
perilously close to a riot.
The uproar was exacerbated by William Shockley, who had won the
Nobel Prize in physics for his contributions to the invention of the transistor
but had turned his attention to human variation toward the end
of his career. As eccentric as he was brilliant, he often recalled the eugenicists
of the early decades of the century. He proposed, as a "thought
exercise," a scheme for paying people with low IQs to be sterilized.23 He
supported (and contributed to) a sperm bank for geniuses. He seemed
to relish expressing sensitive scientific findings in a way that would outrage
or disturb as many people as possible. Jensen and Shockley, utterly
unlike as they were in most respects, soon came to be classed together
as a pair of racist intellectual cranks.
Then one of us, Richard Herrnstein, an experimental psychologist at
Harvard, strayed into forbidden territory with an article in the September
1971 Atlantic Monthly.24 Herrnstein barely mentioned race, but
he did talk about heritability of IQ. His proposition, put in the form of
a syllogism, was that because IQ is substantially heritable, because economic
success in life depends in part on the talents measured by IQ tests,
and because social standing depends in part on economic success, it follows
that social standing is bound to be based to some extent on inherited
differences. By 1971, this had become a controversial thing to say.
In media accounts of intelligence, the names Jensen, Shockley, and
Herrnstein became roughly interchangeable.
That same year, 1971, the U.S. Supreme Court outlawed the use of
standardized ability tests by employers unless they had a "manifest relationship"
to the specific job in question because, the Supreme Court
held, standardized tests acted as "built-in headwinds" for minority groups,
even in the absence of discriminatory intent.25 A year later, the National
Education Association called upon the nation's schools to impose a
moratorium on all standardized intelligence testing, hypothesizing that
"a third or more of American citizens are intellectually folded, mutilated
or spindled before they have a chance to get through elementary school
because of linguistically or culturally biased standardized tests."26 A
movement that had begun in the 1960s gained momentum in the early
1970s, as major school systems throughout the country, including those
of Chicago, New York, and Los Angeles, limited or banned the use of
group-administered standardized tests in public schools. A number of colleges
announced that they would no longer require the Scholastic Aptitude
Test as part of the admissions process. The legal movement against
tests reached its apogee in 1978 in the case of Larry P. Judge Robert Peckham
of the U.S. District Court in San Francisco ruled that it was unconstitutional
to use IQ tests for placement of children in classes for the
educably mentally retarded if the use of those tests resulted in placement
of "grossly disproportionate" numbers of black children.27
Meanwhile, the intellectual debate had taken a new and personalized
turn. Those who claimed that intelligence was substantially inherited
were not just wrong, the critics now discovered, they were charlatans as
well. Leon Kamin, a psychologist then at Princeton, opened this phase
of the debate with a 1974 book, The Science and Politics of IQ. "Patriotism,
we have been told, is the last refuge of scoundrels," Kamin wrote
in the opening pages. "Psychologists and biologists might consider the
possibility that heritability is the first."28 Kamin went on to charge that
mental testing and belief in the heritability of IQ in particular had been
fostered by people with right-wing political views and racist social views.
They had engaged in pseudoscience, he wrote, suppressing the data they
did not like and exaggerating the data that agreed with their preconceptions.
Examined carefully, the case for the heritability of IQ was nil,
concluded Kamin.
In 1976, a British journalist, Oliver Gillie, published an article in the
London Sunday Times that seemed to confirm Kamin's thesis with a sensational
revelation: The recently deceased Cyril Burt, Britain's most eminent
psychometrician, author of the largest and most famous study of
the intelligence of identical twins who grew up apart, was charged with
fraud.29 He had made up data, fudged his results, and invented coauthors,
the Sunday Times declared. The subsequent scandal was as big as
the Piltdown Man hoax. Cyril Burt had not been just another researcher
but one of the giants of twentieth-century psychology. Nor could his
colleagues find a ready defense (the defense came later, as described in
the box). They protested that the revelations did not compromise the
great bulk of the work that bore on the issue of heritability, but their defenses
sounded feeble in the light of the suspicions that had preceded
Burt's exposure.
For the public observing the uproar in the academy from the sidelines,
the capstone of the assault on the integrity of the discipline occurred
in 1981 when Harvard paleobiologist Stephen Jay Gould, author
of several popular books on biology, published The Mismeasure of Man.32
Gould examined the history of intelligence testing, found that it was
peopled by charlatans, racists, and self-deluded fools, and concluded
that "determinist arguments for ranking people according to a single
scale of intelligence, no matter how numerically sophisticated, have
recorded little more than social prejudice."33 The Mismeasure of Man
became a best-seller and won the National Book Critics Circle
Award.
Gould and his allies had won the visible battle. By the early 1980s, a
new received wisdom about intelligence had been formed that went
roughly as follows:
Intelligence is a bankrupt concept. Whatever it might mean--and nobody
really knows even how to define it--intelligence is so ephemeral that no one
can measure it accurately. IQ tests are, of course, culturally biased, and
so are all the other "aptitude" tests, such as the SAT. To the extent that
tests such as IQ and SAT measure anything, it certainly is not an innate
"intelligence." IQ scores are not constant; they often change significantly
over an individual's life span. The scores of entire populations can be expected
to change over time--look at the Jews, who early in the twentieth
century scored below average on IQ scores and now score well above the
average. Furthermore, the tests are nearly useless as tools, as confirmed
by the well-documented fact that such tests do not predict anything except
success in school. Earnings, occupation, productivity--all the important
measures of success--are unrelated to the test scores. All that tests really
accomplish is to label youngsters, stigmatizing the ones who do not do well
and creating a self-fulfilling prophecy that injures the socioeconomically disadvantaged
in general and blacks in particular.
INTELLIGENCE REDUX
As far as public discussion is concerned, this collection of beliefs, with
some variations, remains the state of wisdom about cognitive abilities
and IQ tests. It bears almost no relation to the current state of knowledge
among scholars in the field, however, and therein lies a tale. The
dialogue about testing has been conducted at two levels during the last
two decades--the visible one played out in the press and the subterranean
one played out in the technical journals and books.
The case of Arthur Jensen is illustrative. To the public, he surfaced
briefly, published an article that was discredited, and fell back into obscurity.
Within the world of psychometrics, however, he continued to
be one of the profession's most prolific scholars, respected for his meticulous
research by colleagues of every theoretical stripe. Jensen had not
recanted. He continued to build on the same empirical findings that had
gotten him into such trouble in the 1960s, but primarily in technical
publications, where no one outside the profession had to notice. The
same thing was happening throughout psychometrics. In the 1970s,
scholars observed that colleagues who tried to say publicly that IQ tests
had merit, or that intelligence was substantially inherited, or even that
intelligence existed as a definable and measurable human quality, paid
too high a price. Their careers, family lives, relationships with colleagues,
and even physical safety could be jeopardized by speaking out.
Why speak out when there was no compelling reason to do so? Research
on cognitive abilities continued to flourish, but only in the sanctuary of
the ivory tower.
In this cloistered environment, the continuing debate about intelligence
was conducted much as debates are conducted within any other
academic discipline. The public controversy had surfaced some genuine
issues, and the competing parties set about trying to resolve them. Controversial
hypotheses were put to the test. Sometimes they were confirmed,
sometimes rejected. Often they led to new questions, which were
then explored. Substantial progress was made. Many of the issues that
created such a public furor in the 1970s were resolved, and the study of
cognitive abilities went on to explore new areas.
This is not to say that controversy has ended, only that the controversy
within the professional intelligence testing community is much
different from that outside it. The issues that seem most salient in articles
in the popular press (Isn't intelligence determined mostly by environment?
Aren't the tests useless because they're biased?) are not major
topics of debate within the profession. On many of the publicly discussed
questions, a scholarly consensus has been reached.34 Rather, the contending
parties within the professional community divide along other
lines. By the early 1990s, they could be roughly divided into three factions
for our purposes: the classicists, the revisionists, and the radicals.