Dataset schema:
- context: string, categorical (269 distinct values)
- id_string: string (length 15-16)
- answers: sequence of 5 strings
- label: int64 (values 0-4), the index of the correct answer in answers
- question: string (length 34-417)
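A minimal sketch of reading these rows, assuming they are stored locally as JSON Lines with the fields above (the file name `lsat_rc.jsonl` is hypothetical):

```python
import json

# Each row: context (passage), id_string, question,
# answers (list of 5 option strings), label (0-4 index into answers).
# "lsat_rc.jsonl" is a hypothetical local path; adjust as needed.
with open("lsat_rc.jsonl", encoding="utf-8") as f:
    for line in f:
        row = json.loads(line)
        correct = row["answers"][row["label"]]  # resolve the gold answer
        print(row["id_string"], row["question"], correct, sep="\n")
        break  # inspect only the first row
```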
For the poet Phillis Wheatley, who was brought to colonial New England as a slave in 1761, the formal literary code of eighteenth-century English was thrice removed: by the initial barrier of the unfamiliar English language, by the discrepancy between spoken and literary forms of English, and by the African tradition of oral rather than written verbal art. Wheatley transcended these barriers—she learned the English language and English literary forms so quickly and well that she was composing good poetry in English within a few years of her arrival in New England. Wheatley's experience exemplifies the meeting of oral and written literary cultures. The aesthetic principles of the African oral tradition were preserved in America by folk artists in work songs, dancing, field hollers, religious music, the use of the drum, and, after the drum was forbidden, in the perpetuation of drum effects in song. African languages and the functions of language in African societies not only contributed to the emergence of a distinctive Black English but also exerted demonstrable effects on the manner in which other Americans spoke English. Given her African heritage and her facility with English and the conventions of English poetry, Wheatley's work had the potential to apply the ideas of a written literature to an oral literary tradition in the creation of an African American literary language. But this was a potential that her poetry unfortunately did not exploit. The standards of eighteenth-century English poetry, which itself reflected little of the American language, led Wheatley to develop a notion of poetry as a closed system, derived from imitation of earlier written works. No place existed for the rough-and-ready Americanized English she heard in the streets, for the English spoken by Black people, or for Africanisms. The conventions of eighteenth-century neoclassical poetry ruled out casual talk; her voice and feelings had to be generalized according to rules of poetic diction and characterization; the particulars of her African past, if they were to be dealt with at all, had to be subordinated to the reigning conventions. African poetry did not count as poetry in her new situation, and African aesthetic canons were irrelevant to the new context because no linguistic or social framework existed to reinforce them. Wheatley adopted a foreign language and a foreign literary tradition; they were not extensions of her past experience, but replacements. Thus limited by the eighteenth-century English literary code, Wheatley's poetry contributed little to the development of a distinctive African American literary language. Yet by the standards of the literary conventions in which she chose to work, Wheatley's poetry is undeniably accomplished, and she is justly celebrated as the first Black American poet.
199106_1-RC_1_1
[ "Folk artists employed more principles of African oral tradition in their works than did Phillis Wheatley in her poetry.", "Although Phillis Wheatley had to overcome significant barriers in learning English, she mastered the literary conventions of eighteenth-century English as well as African aesthetic canons.", "Phillis Wheatley's poetry did not fulfill the potential inherent in her experience but did represent a significant accomplishment.", "The evolution of a distinctive African American literary language can be traced from the creations of African American folk artists to the poetry of Phillis Wheatley.", "Phillis Wheatley joined with African American folk artists in preserving the principles of the African oral tradition." ]
2
Which one of the following best expresses the main idea of the passage?
For the poet Phillis Wheatley, who was brought to colonial New England as a slave in 1761, the formal literary code of eighteenth-century English was thrice removed: by the initial barrier of the unfamiliar English language, by the discrepancy between spoken and literary forms of English, and by the African tradition of oral rather than written verbal art. Wheatley transcended these barriers—she learned the English language and English literary forms so quickly and well that she was composing good poetry in English within a few years of her arrival in New England. Wheatley's experience exemplifies the meeting of oral and written literary cultures. The aesthetic principles of the African oral tradition were preserved in America by folk artists in work songs, dancing, field hollers, religious music, the use of the drum, and, after the drum was forbidden, in the perpetuation of drum effects in song. African languages and the functions of language in African societies not only contributed to the emergence of a distinctive Black English but also exerted demonstrable effects on the manner in which other Americans spoke English. Given her African heritage and her facility with English and the conventions of English poetry, Wheatley's work had the potential to apply the ideas of a written literature to an oral literary tradition in the creation of an African American literary language. But this was a potential that her poetry unfortunately did not exploit. The standards of eighteenth-century English poetry, which itself reflected little of the American language, led Wheatley to develop a notion of poetry as a closed system, derived from imitation of earlier written works. No place existed for the rough-and-ready Americanized English she heard in the streets, for the English spoken by Black people, or for Africanisms. The conventions of eighteenth-century neoclassical poetry ruled out casual talk; her voice and feelings had to be generalized according to rules of poetic diction and characterization; the particulars of her African past, if they were to be dealt with at all, had to be subordinated to the reigning conventions. African poetry did not count as poetry in her new situation, and African aesthetic canons were irrelevant to the new context because no linguistic or social framework existed to reinforce them. Wheatley adopted a foreign language and a foreign literary tradition; they were not extensions of her past experience, but replacements. Thus limited by the eighteenth-century English literary code, Wheatley's poetry contributed little to the development of a distinctive African American literary language. Yet by the standards of the literary conventions in which she chose to work, Wheatley's poetry is undeniably accomplished, and she is justly celebrated as the first Black American poet.
199106_1-RC_1_2
[ "translated Italian literary forms into the American idiom", "combined Italian and American literary traditions into a new form of poetic expression", "contributed to the development of a distinctive Italian American literary style", "defined artistic expression in terms of eighteenth-century Italian poetic conventions", "adopted the language and forms of modern American poetry" ]
4
The approach to poetry taken by a modern-day Italian immigrant in America would be most analogous to Phillis Wheatley's approach, as it is described in the passage, if the immigrant
For the poet Phillis Wheatley, who was brought to colonial New England as a slave in 1761, the formal literary code of eighteenth-century English was thrice removed: by the initial barrier of the unfamiliar English language, by the discrepancy between spoken and literary forms of English, and by the African tradition of oral rather than written verbal art. Wheatley transcended these barriers—she learned the English language and English literary forms so quickly and well that she was composing good poetry in English within a few years of her arrival in New England. Wheatley's experience exemplifies the meeting of oral and written literary cultures. The aesthetic principles of the African oral tradition were preserved in America by folk artists in work songs, dancing, field hollers, religious music, the use of the drum, and, after the drum was forbidden, in the perpetuation of drum effects in song. African languages and the functions of language in African societies not only contributed to the emergence of a distinctive Black English but also exerted demonstrable effects on the manner in which other Americans spoke English. Given her African heritage and her facility with English and the conventions of English poetry, Wheatley's work had the potential to apply the ideas of a written literature to an oral literary tradition in the creation of an African American literary language. But this was a potential that her poetry unfortunately did not exploit. The standards of eighteenth-century English poetry, which itself reflected little of the American language, led Wheatley to develop a notion of poetry as a closed system, derived from imitation of earlier written works. No place existed for the rough-and-ready Americanized English she heard in the streets, for the English spoken by Black people, or for Africanisms. The conventions of eighteenth-century neoclassical poetry ruled out casual talk; her voice and feelings had to be generalized according to rules of poetic diction and characterization; the particulars of her African past, if they were to be dealt with at all, had to be subordinated to the reigning conventions. African poetry did not count as poetry in her new situation, and African aesthetic canons were irrelevant to the new context because no linguistic or social framework existed to reinforce them. Wheatley adopted a foreign language and a foreign literary tradition; they were not extensions of her past experience, but replacements. Thus limited by the eighteenth-century English literary code, Wheatley's poetry contributed little to the development of a distinctive African American literary language. Yet by the standards of the literary conventions in which she chose to work, Wheatley's poetry is undeniably accomplished, and she is justly celebrated as the first Black American poet.
199106_1-RC_1_3
[ "the religious music of colonists in New England", "the folk art of colonists in New England", "formal written English", "American speech patterns", "eighteenth-century aesthetic principles" ]
3
According to the passage, African languages had a notable influence on
For the poet Phillis Wheatley, who was brought to colonial New England as a slave in 1761, the formal literary code of eighteenth-century English was thrice removed: by the initial barrier of the unfamiliar English language, by the discrepancy between spoken and literary forms of English, and by the African tradition of oral rather than written verbal art. Wheatley transcended these barriers—she learned the English language and English literary forms so quickly and well that she was composing good poetry in English within a few years of her arrival in New England. Wheatley's experience exemplifies the meeting of oral and written literary cultures. The aesthetic principles of the African oral tradition were preserved in America by folk artists in work songs, dancing, field hollers, religious music, the use of the drum, and, after the drum was forbidden, in the perpetuation of drum effects in song. African languages and the functions of language in African societies not only contributed to the emergence of a distinctive Black English but also exerted demonstrable effects on the manner in which other Americans spoke English. Given her African heritage and her facility with English and the conventions of English poetry, Wheatley's work had the potential to apply the ideas of a written literature to an oral literary tradition in the creation of an African American literary language. But this was a potential that her poetry unfortunately did not exploit. The standards of eighteenth-century English poetry, which itself reflected little of the American language, led Wheatley to develop a notion of poetry as a closed system, derived from imitation of earlier written works. No place existed for the rough-and-ready Americanized English she heard in the streets, for the English spoken by Black people, or for Africanisms. The conventions of eighteenth-century neoclassical poetry ruled out casual talk; her voice and feelings had to be generalized according to rules of poetic diction and characterization; the particulars of her African past, if they were to be dealt with at all, had to be subordinated to the reigning conventions. African poetry did not count as poetry in her new situation, and African aesthetic canons were irrelevant to the new context because no linguistic or social framework existed to reinforce them. Wheatley adopted a foreign language and a foreign literary tradition; they were not extensions of her past experience, but replacements. Thus limited by the eighteenth-century English literary code, Wheatley's poetry contributed little to the development of a distinctive African American literary language. Yet by the standards of the literary conventions in which she chose to work, Wheatley's poetry is undeniably accomplished, and she is justly celebrated as the first Black American poet.
199106_1-RC_1_4
[ "cannot be written by those who are not raised knowing its conventions", "has little influence on the way language is actually spoken", "substitutes its own conventions for the aesthetic principles of the past", "does not admit the use of street language and casual talk", "is ultimately rejected because its conventions leave little room for further development" ]
3
By a "closed system" of poetry (lines 34–35), the author most probably means poetry that
For the poet Phillis Wheatley, who was brought to colonial New England as a slave in 1761, the formal literary code of eighteenth-century English was thrice removed: by the initial barrier of the unfamiliar English language, by the discrepancy between spoken and literary forms of English, and by the African tradition of oral rather than written verbal art. Wheatley transcended these barriers—she learned the English language and English literary forms so quickly and well that she was composing good poetry in English within a few years of her arrival in New England. Wheatley's experience exemplifies the meeting of oral and written literary cultures. The aesthetic principles of the African oral tradition were preserved in America by folk artists in work songs, dancing, field hollers, religious music, the use of the drum, and, after the drum was forbidden, in the perpetuation of drum effects in song. African languages and the functions of language in African societies not only contributed to the emergence of a distinctive Black English but also exerted demonstrable effects on the manner in which other Americans spoke English. Given her African heritage and her facility with English and the conventions of English poetry, Wheatley's work had the potential to apply the ideas of a written literature to an oral literary tradition in the creation of an African American literary language. But this was a potential that her poetry unfortunately did not exploit. The standards of eighteenth-century English poetry, which itself reflected little of the American language, led Wheatley to develop a notion of poetry as a closed system, derived from imitation of earlier written works. No place existed for the rough-and-ready Americanized English she heard in the streets, for the English spoken by Black people, or for Africanisms. The conventions of eighteenth-century neoclassical poetry ruled out casual talk; her voice and feelings had to be generalized according to rules of poetic diction and characterization; the particulars of her African past, if they were to be dealt with at all, had to be subordinated to the reigning conventions. African poetry did not count as poetry in her new situation, and African aesthetic canons were irrelevant to the new context because no linguistic or social framework existed to reinforce them. Wheatley adopted a foreign language and a foreign literary tradition; they were not extensions of her past experience, but replacements. Thus limited by the eighteenth-century English literary code, Wheatley's poetry contributed little to the development of a distinctive African American literary language. Yet by the standards of the literary conventions in which she chose to work, Wheatley's poetry is undeniably accomplished, and she is justly celebrated as the first Black American poet.
199106_1-RC_1_5
[ "generalized feelings", "Americanized English", "themes from folk art", "casual talk", "Black speech" ]
0
According to the passage, the standards of eighteenth-century English poetry permitted Wheatley to include which one of the following in her poetry?
For the poet Phillis Wheatley, who was brought to colonial New England as a slave in 1761, the formal literary code of eighteenth-century English was thrice removed: by the initial barrier of the unfamiliar English language, by the discrepancy between spoken and literary forms of English, and by the African tradition of oral rather than written verbal art. Wheatley transcended these barriers—she learned the English language and English literary forms so quickly and well that she was composing good poetry in English within a few years of her arrival in New England. Wheatley's experience exemplifies the meeting of oral and written literary cultures. The aesthetic principles of the African oral tradition were preserved in America by folk artists in work songs, dancing, field hollers, religious music, the use of the drum, and, after the drum was forbidden, in the perpetuation of drum effects in song. African languages and the functions of language in African societies not only contributed to the emergence of a distinctive Black English but also exerted demonstrable effects on the manner in which other Americans spoke English. Given her African heritage and her facility with English and the conventions of English poetry, Wheatley's work had the potential to apply the ideas of a written literature to an oral literary tradition in the creation of an African American literary language. But this was a potential that her poetry unfortunately did not exploit. The standards of eighteenth-century English poetry, which itself reflected little of the American language, led Wheatley to develop a notion of poetry as a closed system, derived from imitation of earlier written works. No place existed for the rough-and-ready Americanized English she heard in the streets, for the English spoken by Black people, or for Africanisms. The conventions of eighteenth-century neoclassical poetry ruled out casual talk; her voice and feelings had to be generalized according to rules of poetic diction and characterization; the particulars of her African past, if they were to be dealt with at all, had to be subordinated to the reigning conventions. African poetry did not count as poetry in her new situation, and African aesthetic canons were irrelevant to the new context because no linguistic or social framework existed to reinforce them. Wheatley adopted a foreign language and a foreign literary tradition; they were not extensions of her past experience, but replacements. Thus limited by the eighteenth-century English literary code, Wheatley's poetry contributed little to the development of a distinctive African American literary language. Yet by the standards of the literary conventions in which she chose to work, Wheatley's poetry is undeniably accomplished, and she is justly celebrated as the first Black American poet.
199106_1-RC_1_6
[ "Wheatley's poetry was admired in England for its faithfulness to the conventions of neoclassical poetry.", "Wheatley compiled a history in English of her family's experiences in Africa and America.", "The language barriers that Wheatley overcame were eventually transcended by all who were brought from Africa as slaves.", "Several modern African American poets acknowledge the importance of Wheatley's poetry to American literature.", "Scholars trace themes and expressions in African American poetry back to the poetry of Wheatley." ]
4
Which one of the following, if true, would most weaken the author's argument concerning the role that Wheatley played in the evolution of an African American literary language?
For the poet Phillis Wheatley, who was brought to colonial New England as a slave in 1761, the formal literary code of eighteenth-century English was thrice removed: by the initial barrier of the unfamiliar English language, by the discrepancy between spoken and literary forms of English, and by the African tradition of oral rather than written verbal art. Wheatley transcended these barriers—she learned the English language and English literary forms so quickly and well that she was composing good poetry in English within a few years of her arrival in New England. Wheatley's experience exemplifies the meeting of oral and written literary cultures. The aesthetic principles of the African oral tradition were preserved in America by folk artists in work songs, dancing, field hollers, religious music, the use of the drum, and, after the drum was forbidden, in the perpetuation of drum effects in song. African languages and the functions of language in African societies not only contributed to the emergence of a distinctive Black English but also exerted demonstrable effects on the manner in which other Americans spoke English. Given her African heritage and her facility with English and the conventions of English poetry, Wheatley's work had the potential to apply the ideas of a written literature to an oral literary tradition in the creation of an African American literary language. But this was a potential that her poetry unfortunately did not exploit. The standards of eighteenth-century English poetry, which itself reflected little of the American language, led Wheatley to develop a notion of poetry as a closed system, derived from imitation of earlier written works. No place existed for the rough-and-ready Americanized English she heard in the streets, for the English spoken by Black people, or for Africanisms. The conventions of eighteenth-century neoclassical poetry ruled out casual talk; her voice and feelings had to be generalized according to rules of poetic diction and characterization; the particulars of her African past, if they were to be dealt with at all, had to be subordinated to the reigning conventions. African poetry did not count as poetry in her new situation, and African aesthetic canons were irrelevant to the new context because no linguistic or social framework existed to reinforce them. Wheatley adopted a foreign language and a foreign literary tradition; they were not extensions of her past experience, but replacements. Thus limited by the eighteenth-century English literary code, Wheatley's poetry contributed little to the development of a distinctive African American literary language. Yet by the standards of the literary conventions in which she chose to work, Wheatley's poetry is undeniably accomplished, and she is justly celebrated as the first Black American poet.
199106_1-RC_1_7
[ "affected the manner in which slaves and freed Black people spoke English", "defined African American artistic expression in terms of earlier works", "adopted the standards of eighteenth-century English poetry", "combined elements of the English literary tradition with those of the African oral tradition", "focused on the barriers that written English literary forms presented to Black artists" ]
3
It can be inferred that the author of the passage would most probably have praised Phillis Wheatley's poetry more if it had
For the poet Phillis Wheatley, who was brought to colonial New England as a slave in 1761, the formal literary code of eighteenth-century English was thrice removed: by the initial barrier of the unfamiliar English language, by the discrepancy between spoken and literary forms of English, and by the African tradition of oral rather than written verbal art. Wheatley transcended these barriers—she learned the English language and English literary forms so quickly and well that she was composing good poetry in English within a few years of her arrival in New England. Wheatley's experience exemplifies the meeting of oral and written literary cultures. The aesthetic principles of the African oral tradition were preserved in America by folk artists in work songs, dancing, field hollers, religious music, the use of the drum, and, after the drum was forbidden, in the perpetuation of drum effects in song. African languages and the functions of language in African societies not only contributed to the emergence of a distinctive Black English but also exerted demonstrable effects on the manner in which other Americans spoke English. Given her African heritage and her facility with English and the conventions of English poetry, Wheatley's work had the potential to apply the ideas of a written literature to an oral literary tradition in the creation of an African American literary language. But this was a potential that her poetry unfortunately did not exploit. The standards of eighteenth-century English poetry, which itself reflected little of the American language, led Wheatley to develop a notion of poetry as a closed system, derived from imitation of earlier written works. No place existed for the rough-and-ready Americanized English she heard in the streets, for the English spoken by Black people, or for Africanisms. The conventions of eighteenth-century neoclassical poetry ruled out casual talk; her voice and feelings had to be generalized according to rules of poetic diction and characterization; the particulars of her African past, if they were to be dealt with at all, had to be subordinated to the reigning conventions. African poetry did not count as poetry in her new situation, and African aesthetic canons were irrelevant to the new context because no linguistic or social framework existed to reinforce them. Wheatley adopted a foreign language and a foreign literary tradition; they were not extensions of her past experience, but replacements. Thus limited by the eighteenth-century English literary code, Wheatley's poetry contributed little to the development of a distinctive African American literary language. Yet by the standards of the literary conventions in which she chose to work, Wheatley's poetry is undeniably accomplished, and she is justly celebrated as the first Black American poet.
199106_1-RC_1_8
[ "enthusiastic advocacy", "qualified admiration", "dispassionate impartiality", "detached ambivalence", "perfunctory dismissal" ]
1
Which one of the following most accurately characterizes the author's attitude with respect to Phillis Wheatley's literary accomplishments?
One scientific discipline, during its early stages of development, is often related to another as an antithesis to its thesis. The thesis discipline tends to concern itself with discovery and classification of phenomena, to offer holistic explanations emphasizing pattern and form, and to use existing theory to explain the widest possible range of phenomena. The paired or antidiscipline, on the other hand, can be characterized by a more focused approach, concentrating on the units of construction, and by a belief that the discipline can be reformulated in terms of the issues and explanations of the antidiscipline. The relationship of cytology (cell biology) to biochemistry in the late nineteenth century, when both disciplines were growing at a rapid pace, exemplifies such a pattern. Researchers in cell biology found mounting evidence of an intricate cell architecture. They also deduced the mysterious choreography of the chromosomes during cell division. Many biochemists, on the other hand, remained skeptical of the idea that so much structure existed, arguing that the chemical reactions that occur in cytological preparations might create the appearance of such structures. Also, they stood apart from the debate then raging over whether protoplasm, the complex of living material within a cell, is homogeneous, network-like, granular, or foamlike. Their interest lay in the more "fundamental" issues of the chemical nature of protoplasm, especially the newly formulated enzyme theory of life. In general, biochemists judged cytologists to be too ignorant of chemistry to grasp the basic processes, whereas cytologists considered the methods of biochemists inadequate to characterize the structures of the living cell. The renewal of Mendelian genetics and, later, progress in chromosome mapping did little at first to effect a synthesis. Both sides were essentially correct. Biochemistry has more than justified its extravagant early claims by explaining so much of the cellular machinery. But in achieving this feat (mostly since 1950) it has been partially transformed into the new discipline of molecular biology—biochemistry that deals with spatial arrangements and movements of large molecules. At the same time cytology has metamorphosed into modern cellular biology. Aided by electron microscopy, it has become more similar in language and outlook to molecular biology. The interaction of a discipline and its antidiscipline has moved both sciences toward a synthesis, namely molecular genetics. This interaction between paired disciplines can have important results. In the case of late nineteenth-century cell research, progress was fueled by competition among the various attitudes and issues derived from cell biology and biochemistry. Joseph Fruton, a biochemist, has suggested that such competition and the resulting tensions among researchers are a principal source of vitality and "are likely to lead to unexpected and exciting novelties in the future, as they have in the past."
199106_1-RC_2_9
[ "Antithetical scientific disciplines can both stimulate and hinder one another's research in complex ways.", "Antithetical scientific disciplines often interact with one another in ways that can be highly useful.", "As disciplines such as cytology and biochemistry advance, their interaction necessarily leads to a synthesis of their approaches.", "Cell research in the late nineteenth century was plagued by disagreements between cytologists and biochemists.", "In the late nineteenth century, cytologists and biochemists made many valuable discoveries that advanced scientific understanding of the cell." ]
1
Which one of the following best states the central idea of the passage?
One scientific discipline, during its early stages of development, is often related to another as an antithesis to its thesis. The thesis discipline tends to concern itself with discovery and classification of phenomena, to offer holistic explanations emphasizing pattern and form, and to use existing theory to explain the widest possible range of phenomena. The paired or antidiscipline, on the other hand, can be characterized by a more focused approach, concentrating on the units of construction, and by a belief that the discipline can be reformulated in terms of the issues and explanations of the antidiscipline. The relationship of cytology (cell biology) to biochemistry in the late nineteenth century, when both disciplines were growing at a rapid pace, exemplifies such a pattern. Researchers in cell biology found mounting evidence of an intricate cell architecture. They also deduced the mysterious choreography of the chromosomes during cell division. Many biochemists, on the other hand, remained skeptical of the idea that so much structure existed, arguing that the chemical reactions that occur in cytological preparations might create the appearance of such structures. Also, they stood apart from the debate then raging over whether protoplasm, the complex of living material within a cell, is homogeneous, network-like, granular, or foamlike. Their interest lay in the more "fundamental" issues of the chemical nature of protoplasm, especially the newly formulated enzyme theory of life. In general, biochemists judged cytologists to be too ignorant of chemistry to grasp the basic processes, whereas cytologists considered the methods of biochemists inadequate to characterize the structures of the living cell. The renewal of Mendelian genetics and, later, progress in chromosome mapping did little at first to effect a synthesis. Both sides were essentially correct. Biochemistry has more than justified its extravagant early claims by explaining so much of the cellular machinery. But in achieving this feat (mostly since 1950) it has been partially transformed into the new discipline of molecular biology—biochemistry that deals with spatial arrangements and movements of large molecules. At the same time cytology has metamorphosed into modern cellular biology. Aided by electron microscopy, it has become more similar in language and outlook to molecular biology. The interaction of a discipline and its antidiscipline has moved both sciences toward a synthesis, namely molecular genetics. This interaction between paired disciplines can have important results. In the case of late nineteenth-century cell research, progress was fueled by competition among the various attitudes and issues derived from cell biology and biochemistry. Joseph Fruton, a biochemist, has suggested that such competition and the resulting tensions among researchers are a principal source of vitality and "are likely to lead to unexpected and exciting novelties in the future, as they have in the past."
199106_1-RC_2_10
[ "maps of chromosomes", "chemical nature of protoplasm", "spatial relationship of molecules within the cell", "role of enzymes in biological processes", "sequence of the movement of chromosomes during cell division" ]
4
The passage states that in the late nineteenth century cytologists deduced the
One scientific discipline, during its early stages of development, is often related to another as an antithesis to its thesis. The thesis discipline tends to concern itself with discovery and classification of phenomena, to offer holistic explanations emphasizing pattern and form, and to use existing theory to explain the widest possible range of phenomena. The paired or antidiscipline, on the other hand, can be characterized by a more focused approach, concentrating on the units of construction, and by a belief that the discipline can be reformulated in terms of the issues and explanations of the antidiscipline. The relationship of cytology (cell biology) to biochemistry in the late nineteenth century, when both disciplines were growing at a rapid pace, exemplifies such a pattern. Researchers in cell biology found mounting evidence of an intricate cell architecture. They also deduced the mysterious choreography of the chromosomes during cell division. Many biochemists, on the other hand, remained skeptical of the idea that so much structure existed, arguing that the chemical reactions that occur in cytological preparations might create the appearance of such structures. Also, they stood apart from the debate then raging over whether protoplasm, the complex of living material within a cell, is homogeneous, network-like, granular, or foamlike. Their interest lay in the more "fundamental" issues of the chemical nature of protoplasm, especially the newly formulated enzyme theory of life. In general, biochemists judged cytologists to be too ignorant of chemistry to grasp the basic processes, whereas cytologists considered the methods of biochemists inadequate to characterize the structures of the living cell. The renewal of Mendelian genetics and, later, progress in chromosome mapping did little at first to effect a synthesis. Both sides were essentially correct. Biochemistry has more than justified its extravagant early claims by explaining so much of the cellular machinery. But in achieving this feat (mostly since 1950) it has been partially transformed into the new discipline of molecular biology—biochemistry that deals with spatial arrangements and movements of large molecules. At the same time cytology has metamorphosed into modern cellular biology. Aided by electron microscopy, it has become more similar in language and outlook to molecular biology. The interaction of a discipline and its antidiscipline has moved both sciences toward a synthesis, namely molecular genetics. This interaction between paired disciplines can have important results. In the case of late nineteenth-century cell research, progress was fueled by competition among the various attitudes and issues derived from cell biology and biochemistry. Joseph Fruton, a biochemist, has suggested that such competition and the resulting tensions among researchers are a principal source of vitality and "are likely to lead to unexpected and exciting novelties in the future, as they have in the past."
199106_1-RC_2_11
[ "among cytologists", "among biochemists", "between cytologists and biochemists", "between cytologists and geneticists", "between biochemists and geneticists" ]
0
It can be inferred from the passage that in the late nineteenth century the debate over the structural nature of protoplasm (lines 25–29) was most likely carried on
One scientific discipline, during its early stages of development, is often related to another as an antithesis to its thesis. The thesis discipline tends to concern itself with discovery and classification of phenomena, to offer holistic explanations emphasizing pattern and form, and to use existing theory to explain the widest possible range of phenomena. The paired or antidiscipline, on the other hand, can be characterized by a more focused approach, concentrating on the units of construction, and by a belief that the discipline can be reformulated in terms of the issues and explanations of the antidiscipline. The relationship of cytology (cell biology) to biochemistry in the late nineteenth century, when both disciplines were growing at a rapid pace, exemplifies such a pattern. Researchers in cell biology found mounting evidence of an intricate cell architecture. They also deduced the mysterious choreography of the chromosomes during cell division. Many biochemists, on the other hand, remained skeptical of the idea that so much structure existed, arguing that the chemical reactions that occur in cytological preparations might create the appearance of such structures. Also, they stood apart from the debate then raging over whether protoplasm, the complex of living material within a cell, is homogeneous, network-like, granular, or foamlike. Their interest lay in the more "fundamental" issues of the chemical nature of protoplasm, especially the newly formulated enzyme theory of life. In general, biochemists judged cytologists to be too ignorant of chemistry to grasp the basic processes, whereas cytologists considered the methods of biochemists inadequate to characterize the structures of the living cell. The renewal of Mendelian genetics and, later, progress in chromosome mapping did little at first to effect a synthesis. Both sides were essentially correct. Biochemistry has more than justified its extravagant early claims by explaining so much of the cellular machinery. But in achieving this feat (mostly since 1950) it has been partially transformed into the new discipline of molecular biology—biochemistry that deals with spatial arrangements and movements of large molecules. At the same time cytology has metamorphosed into modern cellular biology. Aided by electron microscopy, it has become more similar in language and outlook to molecular biology. The interaction of a discipline and its antidiscipline has moved both sciences toward a synthesis, namely molecular genetics. This interaction between paired disciplines can have important results. In the case of late nineteenth-century cell research, progress was fueled by competition among the various attitudes and issues derived from cell biology and biochemistry. Joseph Fruton, a biochemist, has suggested that such competition and the resulting tensions among researchers are a principal source of vitality and "are likely to lead to unexpected and exciting novelties in the future, as they have in the past."
199106_1-RC_2_12
[ "the methods of biochemistry were inadequate to account for all of the chemical reactions that occurred in cytological preparations", "the methods of biochemistry could not adequately discover and explain the structures of living cells", "biochemists were not interested in the nature of protoplasm", "biochemists were not interested in cell division", "biochemists were too ignorant of cytology to understand the basic processes of the cell" ]
1
According to the passage, cytologists in the late nineteenth century were critical of the cell research of biochemists because cytologists believed that
One scientific discipline, during its early stages of development, is often related to another as an antithesis to its thesis. The thesis discipline tends to concern itself with discovery and classification of phenomena, to offer holistic explanations emphasizing pattern and form, and to use existing theory to explain the widest possible range of phenomena. The paired or antidiscipline, on the other hand, can be characterized by a more focused approach, concentrating on the units of construction, and by a belief that the discipline can be reformulated in terms of the issues and explanations of the antidiscipline. The relationship of cytology (cell biology) to biochemistry in the late nineteenth century, when both disciplines were growing at a rapid pace, exemplifies such a pattern. Researchers in cell biology found mounting evidence of an intricate cell architecture. They also deduced the mysterious choreography of the chromosomes during cell division. Many biochemists, on the other hand, remained skeptical of the idea that so much structure existed, arguing that the chemical reactions that occur in cytological preparations might create the appearance of such structures. Also, they stood apart from the debate then raging over whether protoplasm, the complex of living material within a cell, is homogeneous, network-like, granular, or foamlike. Their interest lay in the more "fundamental" issues of the chemical nature of protoplasm, especially the newly formulated enzyme theory of life. In general, biochemists judged cytologists to be too ignorant of chemistry to grasp the basic processes, whereas cytologists considered the methods of biochemists inadequate to characterize the structures of the living cell. The renewal of Mendelian genetics and, later, progress in chromosome mapping did little at first to effect a synthesis. Both sides were essentially correct. Biochemistry has more than justified its extravagant early claims by explaining so much of the cellular machinery. But in achieving this feat (mostly since 1950) it has been partially transformed into the new discipline of molecular biology—biochemistry that deals with spatial arrangements and movements of large molecules. At the same time cytology has metamorphosed into modern cellular biology. Aided by electron microscopy, it has become more similar in language and outlook to molecular biology. The interaction of a discipline and its antidiscipline has moved both sciences toward a synthesis, namely molecular genetics. This interaction between paired disciplines can have important results. In the case of late nineteenth-century cell research, progress was fueled by competition among the various attitudes and issues derived from cell biology and biochemistry. Joseph Fruton, a biochemist, has suggested that such competition and the resulting tensions among researchers are a principal source of vitality and "are likely to lead to unexpected and exciting novelties in the future, as they have in the past."
199106_1-RC_2_13
[ "restate the author's own conclusions", "provide new evidence about the relationship of cytology to biochemistry", "summarize the position of the biochemists described in the passage", "illustrate the difficulties encountered in the synthesis of disciplines", "emphasize the ascendancy of the theories of biochemists over those of cytologists" ]
0
The author quotes Fruton (lines 62–64) primarily in order to
One scientific discipline, during its early stages of development, is often related to another as an antithesis to its thesis. The thesis discipline tends to concern itself with discovery and classification of phenomena, to offer holistic explanations emphasizing pattern and form, and to use existing theory to explain the widest possible range of phenomena. The paired or antidiscipline, on the other hand, can be characterized by a more focused approach, concentrating on the units of construction, and by a belief that the discipline can be reformulated in terms of the issues and explanations of the antidiscipline. The relationship of cytology (cell biology) to biochemistry in the late nineteenth century, when both disciplines were growing at a rapid pace, exemplifies such a pattern. Researchers in cell biology found mounting evidence of an intricate cell architecture. They also deduced the mysterious choreography of the chromosomes during cell division. Many biochemists, on the other hand, remained skeptical of the idea that so much structure existed, arguing that the chemical reactions that occur in cytological preparations might create the appearance of such structures. Also, they stood apart from the debate then raging over whether protoplasm, the complex of living material within a cell, is homogeneous, network-like, granular, or foamlike. Their interest lay in the more "fundamental" issues of the chemical nature of protoplasm, especially the newly formulated enzyme theory of life. In general, biochemists judged cytologists to be too ignorant of chemistry to grasp the basic processes, whereas cytologists considered the methods of biochemists inadequate to characterize the structures of the living cell. The renewal of Mendelian genetics and, later, progress in chromosome mapping did little at first to effect a synthesis. Both sides were essentially correct. Biochemistry has more than justified its extravagant early claims by explaining so much of the cellular machinery. But in achieving this feat (mostly since 1950) it has been partially transformed into the new discipline of molecular biology—biochemistry that deals with spatial arrangements and movements of large molecules. At the same time cytology has metamorphosed into modern cellular biology. Aided by electron microscopy, it has become more similar in language and outlook to molecular biology. The interaction of a discipline and its antidiscipline has moved both sciences toward a synthesis, namely molecular genetics. This interaction between paired disciplines can have important results. In the case of late nineteenth-century cell research, progress was fueled by competition among the various attitudes and issues derived from cell biology and biochemistry. Joseph Fruton, a biochemist, has suggested that such competition and the resulting tensions among researchers are a principal source of vitality and "are likely to lead to unexpected and exciting novelties in the future, as they have in the past."
199106_1-RC_2_14
[ "The theory was formulated before the appearance of molecular biology.", "The theory was formulated before the initial discovery of cell architecture.", "The theory was formulated after the completion of chromosome mapping.", "The theory was formulated after a synthesis of the ideas of cytologists and biochemists had occurred.", "The theory was formulated at the same time as the beginning of the debate over the nature of protoplasm." ]
0
Which one of the following inferences about when the enzyme theory of life was formulated can be drawn from the passage?
One scientific discipline, during its early stages of development, is often related to another as an antithesis to its thesis. The thesis discipline tends to concern itself with discovery and classification of phenomena, to offer holistic explanations emphasizing pattern and form, and to use existing theory to explain the widest possible range of phenomena. The paired or antidiscipline, on the other hand, can be characterized by a more focused approach, concentrating on the units of construction, and by a belief that the discipline can be reformulated in terms of the issues and explanations of the antidiscipline. The relationship of cytology (cell biology) to biochemistry in the late nineteenth century, when both disciplines were growing at a rapid pace, exemplifies such a pattern. Researchers in cell biology found mounting evidence of an intricate cell architecture. They also deduced the mysterious choreography of the chromosomes during cell division. Many biochemists, on the other hand, remained skeptical of the idea that so much structure existed, arguing that the chemical reactions that occur in cytological preparations might create the appearance of such structures. Also, they stood apart from the debate then raging over whether protoplasm, the complex of living material within a cell, is homogeneous, network-like, granular, or foamlike. Their interest lay in the more "fundamental" issues of the chemical nature of protoplasm, especially the newly formulated enzyme theory of life. In general, biochemists judged cytologists to be too ignorant of chemistry to grasp the basic processes, whereas cytologists considered the methods of biochemists inadequate to characterize the structures of the living cell. The renewal of Mendelian genetics and, later, progress in chromosome mapping did little at first to effect a synthesis. Both sides were essentially correct. Biochemistry has more than justified its extravagant early claims by explaining so much of the cellular machinery. But in achieving this feat (mostly since 1950) it has been partially transformed into the new discipline of molecular biology—biochemistry that deals with spatial arrangements and movements of large molecules. At the same time cytology has metamorphosed into modern cellular biology. Aided by electron microscopy, it has become more similar in language and outlook to molecular biology. The interaction of a discipline and its antidiscipline has moved both sciences toward a synthesis, namely molecular genetics. This interaction between paired disciplines can have important results. In the case of late nineteenth-century cell research, progress was fueled by competition among the various attitudes and issues derived from cell biology and biochemistry. Joseph Fruton, a biochemist, has suggested that such competition and the resulting tensions among researchers are a principal source of vitality and "are likely to lead to unexpected and exciting novelties in the future, as they have in the past."
199106_1-RC_2_15
[ "The secret of cell function resides in the structure of the cell.", "Only by discovering the chemical composition of protoplasm can the processes of the cell be understood.", "Scientific knowledge about the chemical composition of the cell can help to explain behavioral patterns in organisms.", "The most important issue to be resolved with regard to the cell is determining the physical characteristics of protoplasm.", "The methods of chemistry must be supplemented before a full account of the cell's structures can be made." ]
1
Which one of the following statements about cells is most compatible with the views of late nineteenth-century biochemists as those views are described in the passage?
One scientific discipline, during its early stages of development, is often related to another as an antithesis to its thesis. The thesis discipline tends to concern itself with discovery and classification of phenomena, to offer holistic explanations emphasizing pattern and form, and to use existing theory to explain the widest possible range of phenomena. The paired or antidiscipline, on the other hand, can be characterized by a more focused approach, concentrating on the units of construction, and by a belief that the discipline can be reformulated in terms of the issues and explanations of the antidiscipline. The relationship of cytology (cell biology) to biochemistry in the late nineteenth century, when both disciplines were growing at a rapid pace, exemplifies such a pattern. Researchers in cell biology found mounting evidence of an intricate cell architecture. They also deduced the mysterious choreography of the chromosomes during cell division. Many biochemists, on the other hand, remained skeptical of the idea that so much structure existed, arguing that the chemical reactions that occur in cytological preparations might create the appearance of such structures. Also, they stood apart from the debate then raging over whether protoplasm, the complex of living material within a cell, is homogeneous, network-like, granular, or foamlike. Their interest lay in the more "fundamental" issues of the chemical nature of protoplasm, especially the newly formulated enzyme theory of life. In general, biochemists judged cytologists to be too ignorant of chemistry to grasp the basic processes, whereas cytologists considered the methods of biochemists inadequate to characterize the structures of the living cell. The renewal of Mendelian genetics and, later, progress in chromosome mapping did little at first to effect a synthesis. Both sides were essentially correct. Biochemistry has more than justified its extravagant early claims by explaining so much of the cellular machinery. But in achieving this feat (mostly since 1950) it has been partially transformed into the new discipline of molecular biology—biochemistry that deals with spatial arrangements and movements of large molecules. At the same time cytology has metamorphosed into modern cellular biology. Aided by electron microscopy, it has become more similar in language and outlook to molecular biology. The interaction of a discipline and its antidiscipline has moved both sciences toward a synthesis, namely molecular genetics. This interaction between paired disciplines can have important results. In the case of late nineteenth-century cell research, progress was fueled by competition among the various attitudes and issues derived from cell biology and biochemistry. Joseph Fruton, a biochemist, has suggested that such competition and the resulting tensions among researchers are a principal source of vitality and "are likely to lead to unexpected and exciting novelties in the future, as they have in the past."
199106_1-RC_2_16
[ "An account of a process is given, and then the reason for its occurrence is stated.", "A set of examples is provided, and then a conclusion is drawn from them.", "A general proposition is stated, and then an example is given.", "A statement of principles is made, and then a rationale for them is debated.", "A problem is analyzed, and then a possible solution is discussed." ]
2
Which one of the following best describes the organization of the material presented in the passage?
There are two major systems of criminal procedure in the modern world—the adversarial and the inquisitorial. Both systems were historically preceded by the system of private vengeance in which the victim of a crime fashioned a remedy and administered it privately, either personally or through an agent. The modern adversarial system is only one historical step removed from the private vengeance system and still retains some of its characteristic features. For example, even though the right to initiate legal action against a criminal has now been extended to all members of society (as represented by the office of the public prosecutor), and even though the police department has effectively assumed the pretrial investigative functions on behalf of the prosecution, the adversarial system still leaves the defendant to conduct his or her own pretrial investigation. The trial is viewed as a forensic duel between two adversaries, presided over by a judge who, at the start, has no knowledge of the investigative background of the case. In the final analysis the adversarial system of criminal procedure symbolizes and regularizes punitive combat. By contrast, the inquisitorial system begins historically where the adversarial system stopped its development. It is two historical steps removed from the system of private vengeance. From the standpoint of legal anthropology, then, it is historically superior to the adversarial system. Under the inquisitorial system, the public prosecutor has the duty to investigate not just on behalf of society but also on behalf of the defendant. Additionally, the public prosecutor has the duty to present to the court not only evidence that would convict the defendant, but also evidence that could prove the defendant's innocence. The system mandates that both parties permit full pretrial discovery of the evidence in their possession. Finally, an aspect of the system that makes the trial less like a duel between two adversarial parties is that the inquisitorial system mandates that the judge take an active part in the conduct of the trial, with a role that is both directive and protective. Fact-finding is at the heart of the inquisitorial system. This system operates on the philosophical premise that in a criminal action the crucial factor is the body of facts, not the legal rule (in contrast to the adversarial system), and the goal of the entire procedure is to attempt to recreate, in the mind of the court, the commission of the alleged crime. Because of the inquisitorial system's thoroughness in conducting its pretrial investigation, it can be concluded that, if given the choice, a defendant who is innocent would prefer to be tried under the inquisitorial system, whereas a defendant who is guilty would prefer to be tried under the adversarial system.
199106_1-RC_3_17
[ "rules of legality", "dramatic reenactment of the crime", "the search for relevant facts", "the victim's personal pursuit of revenge", "police testimony about the crime" ]
0
It can be inferred from the passage that the crucial factor in a trial under the adversarial system is
There are two major systems of criminal procedure in the modern world—the adversarial and the inquisitorial. Both systems were historically preceded by the system of private vengeance in which the victim of a crime fashioned a remedy and administered it privately, either personally or through an agent. The modern adversarial system is only one historical step removed from the private vengeance system and still retains some of its characteristic features. For example, even though the right to initiate legal action against a criminal has now been extended to all members of society (as represented by the office of the public prosecutor), and even though the police department has effectively assumed the pretrial investigative functions on behalf of the prosecution, the adversarial system still leaves the defendant to conduct his or her own pretrial investigation. The trial is viewed as a forensic duel between two adversaries, presided over by a judge who, at the start, has no knowledge of the investigative background of the case. In the final analysis the adversarial system of criminal procedure symbolizes and regularizes punitive combat. By contrast, the inquisitorial system begins historically where the adversarial system stopped its development. It is two historical steps removed from the system of private vengeance. From the standpoint of legal anthropology, then, it is historically superior to the adversarial system. Under the inquisitorial system, the public prosecutor has the duty to investigate not just on behalf of society but also on behalf of the defendant. Additionally, the public prosecutor has the duty to present the court not only evidence that would convict the defendant, but also evidence that could prove the defendant's innocence. The system mandates that both parties permit full pretrial discovery of the evidence in their possession. Finally, an aspect of the system that makes the trial less like a duel between two adversarial parties is that the inquisitorial system mandates that the judge take an active part in the conduct of the trial, with a role that is both directive and protective. Fact-finding is at the heart of the inquisitorial system. This system operates on the philosophical premise that in a criminal action the crucial factor is the body of facts, not the legal rule (in contrast to the adversarial system), and the goal of the entire procedure is to attempt to recreate, in the mind of the court, the commission of the alleged crime. Because of the inquisitorial system's thoroughness in conducting its pretrial investigation, it can be concluded that, if given the choice, a defendant who is innocent would prefer to be tried under the inquisitorial system, whereas a defendant who is guilty would prefer to be tried under the adversarial system.
199106_1-RC_3_18
[ "passive observer", "biased referee", "uninvolved administrator", "aggressive investigator", "involved manager" ]
4
The author sees the judge's primary role in a trial under the inquisitorial system as that of
There are two major systems of criminal procedure in the modern world—the adversarial and the inquisitorial. Both systems were historically preceded by the system of private vengeance in which the victim of a crime fashioned a remedy and administered it privately, either personally or through an agent. The modern adversarial system is only one historical step removed from the private vengeance system and still retains some of its characteristic features. For example, even though the right to initiate legal action against a criminal has now been extended to all members of society (as represented by the office of the public prosecutor), and even though the police department has effectively assumed the pretrial investigative functions on behalf of the prosecution, the adversarial system still leaves the defendant to conduct his or her own pretrial investigation. The trial is viewed as a forensic duel between two adversaries, presided over by a judge who, at the start, has no knowledge of the investigative background of the case. In the final analysis the adversarial system of criminal procedure symbolizes and regularizes punitive combat. By contrast, the inquisitorial system begins historically where the adversarial system stopped its development. It is two historical steps removed from the system of private vengeance. From the standpoint of legal anthropology, then, it is historically superior to the adversarial system. Under the inquisitorial system, the public prosecutor has the duty to investigate not just on behalf of society but also on behalf of the defendant. Additionally, the public prosecutor has the duty to present the court not only evidence that would convict the defendant, but also evidence that could prove the defendant's innocence. The system mandates that both parties permit full pretrial discovery of the evidence in their possession. Finally, an aspect of the system that makes the trial less like a duel between two adversarial parties is that the inquisitorial system mandates that the judge take an active part in the conduct of the trial, with a role that is both directive and protective. Fact-finding is at the heart of the inquisitorial system. This system operates on the philosophical premise that in a criminal action the crucial factor is the body of facts, not the legal rule (in contrast to the adversarial system), and the goal of the entire procedure is to attempt to recreate, in the mind of the court, the commission of the alleged crime. Because of the inquisitorial system's thoroughness in conducting its pretrial investigation, it can be concluded that, if given the choice, a defendant who is innocent would prefer to be tried under the inquisitorial system, whereas a defendant who is guilty would prefer to be tried under the adversarial system.
199106_1-RC_3_19
[ "defendant to the courts", "victim to society", "defendant to the prosecutor", "courts to a law enforcement agency", "victim to the judge" ]
1
According to the passage, a central distinction between the system of private vengeance and the two modern criminal procedure systems was the shift in responsibility for initiating legal action against a criminal from the
There are two major systems of criminal procedure in the modern world—the adversarial and the inquisitorial. Both systems were historically preceded by the system of private vengeance in which the victim of a crime fashioned a remedy and administered it privately, either personally or through an agent. The modern adversarial system is only one historical step removed from the private vengeance system and still retains some of its characteristic features. For example, even though the right to initiate legal action against a criminal has now been extended to all members of society (as represented by the office of the public prosecutor), and even though the police department has effectively assumed the pretrial investigative functions on behalf of the prosecution, the adversarial system still leaves the defendant to conduct his or her own pretrial investigation. The trial is viewed as a forensic duel between two adversaries, presided over by a judge who, at the start, has no knowledge of the investigative background of the case. In the final analysis the adversarial system of criminal procedure symbolizes and regularizes punitive combat. By contrast, the inquisitorial system begins historically where the adversarial system stopped its development. It is two historical steps removed from the system of private vengeance. From the standpoint of legal anthropology, then, it is historically superior to the adversarial system. Under the inquisitorial system, the public prosecutor has the duty to investigate not just on behalf of society but also on behalf of the defendant. Additionally, the public prosecutor has the duty to present the court not only evidence that would convict the defendant, but also evidence that could prove the defendant's innocence. The system mandates that both parties permit full pretrial discovery of the evidence in their possession. Finally, an aspect of the system that makes the trial less like a duel between two adversarial parties is that the inquisitorial system mandates that the judge take an active part in the conduct of the trial, with a role that is both directive and protective. Fact-finding is at the heart of the inquisitorial system. This system operates on the philosophical premise that in a criminal action the crucial factor is the body of facts, not the legal rule (in contrast to the adversarial system), and the goal of the entire procedure is to attempt to recreate, in the mind of the court, the commission of the alleged crime. Because of the inquisitorial system's thoroughness in conducting its pretrial investigation, it can be concluded that, if given the choice, a defendant who is innocent would prefer to be tried under the inquisitorial system, whereas a defendant who is guilty would prefer to be tried under the adversarial system.
199106_1-RC_3_20
[ "It is based on cooperation rather than conflict.", "It encourages full disclosure of evidence.", "It requires that the judge play an active role in the conduct of the trial.", "It places the defendant in charge of his or her defense.", "It favors the innocent." ]
3
All of the following are characteristics of the inquisitorial system that the author cites EXCEPT:
There are two major systems of criminal procedure in the modern world—the adversarial and the inquisitorial. Both systems were historically preceded by the system of private vengeance in which the victim of a crime fashioned a remedy and administered it privately, either personally or through an agent. The modern adversarial system is only one historical step removed from the private vengeance system and still retains some of its characteristic features. For example, even though the right to initiate legal action against a criminal has now been extended to all members of society (as represented by the office of the public prosecutor), and even though the police department has effectively assumed the pretrial investigative functions on behalf of the prosecution, the adversarial system still leaves the defendant to conduct his or her own pretrial investigation. The trial is viewed as a forensic duel between two adversaries, presided over by a judge who, at the start, has no knowledge of the investigative background of the case. In the final analysis the adversarial system of criminal procedure symbolizes and regularizes punitive combat. By contrast, the inquisitorial system begins historically where the adversarial system stopped its development. It is two historical steps removed from the system of private vengeance. From the standpoint of legal anthropology, then, it is historically superior to the adversarial system. Under the inquisitorial system, the public prosecutor has the duty to investigate not just on behalf of society but also on behalf of the defendant. Additionally, the public prosecutor has the duty to present the court not only evidence that would convict the defendant, but also evidence that could prove the defendant's innocence. The system mandates that both parties permit full pretrial discovery of the evidence in their possession. Finally, an aspect of the system that makes the trial less like a duel between two adversarial parties is that the inquisitorial system mandates that the judge take an active part in the conduct of the trial, with a role that is both directive and protective. Fact-finding is at the heart of the inquisitorial system. This system operates on the philosophical premise that in a criminal action the crucial factor is the body of facts, not the legal rule (in contrast to the adversarial system), and the goal of the entire procedure is to attempt to recreate, in the mind of the court, the commission of the alleged crime. Because of the inquisitorial system's thoroughness in conducting its pretrial investigation, it can be concluded that, if given the choice, a defendant who is innocent would prefer to be tried under the inquisitorial system, whereas a defendant who is guilty would prefer to be tried under the adversarial system.
199106_1-RC_3_21
[ "doubtful that its judges can be both directive and protective", "satisfied that it has potential for uncovering the relevant facts in a case", "optimistic that it will replace the adversarial system", "wary about its downplaying of legal rules", "critical of its close relationship with the private vengeance system" ]
1
The author's attitude toward the inquisitorial system can best be described as
Outside the medical profession, there are various efforts to cut medicine down to size: not only widespread malpractice litigation and massive governmental regulation, but also attempts by consumer groups and others to redefine medicine as a trade rather than as a profession, and the physician as merely a technician for hire under contract. Why should physicians (or indeed all sensible people) resist such efforts to give the practice of medicine a new meaning? We can gain some illumination from etymology. "Trade," from Germanic and Anglo-Saxon roots meaning "a course or pathway," has come to mean derivatively a habitual occupation and has been related to certain skills and crafts. On the other hand, while "profession" today also entails a habit of work, the word "profession" itself traces to an act of self-conscious and public—even confessional—speech. "To profess" preserves the meaning of its Latin source, "to declare publicly; to announce, affirm, avow." A profession is an activity or occupation to which its practitioner publicly professes, that is, confesses, devotion. But public announcement seems insufficient; publicly declaring devotion to plumbing or auto repair would not turn these trades into professions. Some believe that learning and knowledge are the diagnostic signs of a profession. For reasons probably linked to the medieval university, the term "profession" has been applied to the so-called learned professions—medicine, law, and theology—the practices of which are founded upon inquiry and knowledge rather than mere "know-how." Yet it is not only the pursuit and acquisition of knowledge that makes one a professional. The knowledge involved makes the profession one of the learned variety, but its professional quality is rooted in something else. Some mistakenly seek to locate that something else in the prestige and honor accorded professionals by society, evidenced in their special titles and the special deference and privileges they receive. But externalities do not constitute medicine a profession. Physicians are not professionals because they are honored; rather, they are honored because of their profession. Their titles and the respect they are shown superficially signify and acknowledge something deeper, that physicians are persons of the professional sort, knowingly and freely devoting themselves to a way of life worthy of such devotion. Just as lawyers devote themselves to rectifying injustices, looking up to what is lawful and right; just as teachers devote themselves to the education of the young, looking up to truth and wisdom; so physicians heal the sick, looking up to health and wholesomeness. Being a professional is thus rooted in our moral nature and in that which warrants and impels making a public confession to a way of life. Professing oneself a professional is an ethical act because it is not a silent and private act, but an articulated and public one; because it promises continuing devotion to a way of life, not merely announces a present preference or a way to a livelihood; because it is an activity in service to some high good that insists on devotion; because it is difficult and demanding. A profession engages one's character and heart, not merely one's mind and hands.
199106_1-RC_4_22
[ "significant prestige and a title", "\"know-how\" in a particular field", "a long and difficult educational endeavor", "a commitment to political justice", "a public confession of devotion to a way of life" ]
4
According to the author, which one of the following is required in order that one be a professional?
Outside the medical profession, there are various efforts to cut medicine down to size: not only widespread malpractice litigation and massive governmental regulation, but also attempts by consumer groups and others to redefine medicine as a trade rather than as a profession, and the physician as merely a technician for hire under contract. Why should physicians (or indeed all sensible people) resist such efforts to give the practice of medicine a new meaning? We can gain some illumination from etymology. "Trade," from Germanic and Anglo-Saxon roots meaning "a course or pathway," has come to mean derivatively a habitual occupation and has been related to certain skills and crafts. On the other hand, while "profession" today also entails a habit of work, the word "profession" itself traces to an act of self-conscious and public—even confessional—speech. "To profess" preserves the meaning of its Latin source, "to declare publicly; to announce, affirm, avow." A profession is an activity or occupation to which its practitioner publicly professes, that is, confesses, devotion. But public announcement seems insufficient; publicly declaring devotion to plumbing or auto repair would not turn these trades into professions. Some believe that learning and knowledge are the diagnostic signs of a profession. For reasons probably linked to the medieval university, the term "profession" has been applied to the so-called learned professions—medicine, law, and theology—the practices of which are founded upon inquiry and knowledge rather than mere "know-how." Yet it is not only the pursuit and acquisition of knowledge that makes one a professional. The knowledge involved makes the profession one of the learned variety, but its professional quality is rooted in something else. Some mistakenly seek to locate that something else in the prestige and honor accorded professionals by society, evidenced in their special titles and the special deference and privileges they receive. But externalities do not constitute medicine a profession. Physicians are not professionals because they are honored; rather, they are honored because of their profession. Their titles and the respect they are shown superficially signify and acknowledge something deeper, that physicians are persons of the professional sort, knowingly and freely devoting themselves to a way of life worthy of such devotion. Just as lawyers devote themselves to rectifying injustices, looking up to what is lawful and right; just as teachers devote themselves to the education of the young, looking up to truth and wisdom; so physicians heal the sick, looking up to health and wholesomeness. Being a professional is thus rooted in our moral nature and in that which warrants and impels making a public confession to a way of life. Professing oneself a professional is an ethical act because it is not a silent and private act, but an articulated and public one; because it promises continuing devotion to a way of life, not merely announces a present preference or a way to a livelihood; because it is an activity in service to some high good that insists on devotion; because it is difficult and demanding. A profession engages one's character and heart, not merely one's mind and hands.
199106_1-RC_4_23
[ "Medicine is defined as a profession because of the etymology of the word \"profession.\"", "It is a mistake to pay special honor to the knowledge and skills of physicians.", "The work of physicians is under attack only because it is widely misunderstood.", "The correct reason that physicians are professionals is that their work involves public commitment to a high good.", "Physicians have been encouraged to think of themselves as technicians and need to reorient themselves toward ethical concerns." ]
3
Which one of the following best expresses the main point made by the author in the passage?
Outside the medical profession, there are various efforts to cut medicine down to size: not only widespread malpractice litigation and massive governmental regulation, but also attempts by consumer groups and others to redefine medicine as a trade rather than as a profession, and the physician as merely a technician for hire under contract. Why should physicians (or indeed all sensible people) resist such efforts to give the practice of medicine a new meaning? We can gain some illumination from etymology. "Trade," from Germanic and Anglo-Saxon roots meaning "a course or pathway," has come to mean derivatively a habitual occupation and has been related to certain skills and crafts. On the other hand, while "profession" today also entails a habit of work, the word "profession" itself traces to an act of self-conscious and public—even confessional—speech. "To profess" preserves the meaning of its Latin source, "to declare publicly; to announce, affirm, avow." A profession is an activity or occupation to which its practitioner publicly professes, that is, confesses, devotion. But public announcement seems insufficient; publicly declaring devotion to plumbing or auto repair would not turn these trades into professions. Some believe that learning and knowledge are the diagnostic signs of a profession. For reasons probably linked to the medieval university, the term "profession" has been applied to the so-called learned professions—medicine, law, and theology—the practices of which are founded upon inquiry and knowledge rather than mere "know-how." Yet it is not only the pursuit and acquisition of knowledge that makes one a professional. The knowledge involved makes the profession one of the learned variety, but its professional quality is rooted in something else. Some mistakenly seek to locate that something else in the prestige and honor accorded professionals by society, evidenced in their special titles and the special deference and privileges they receive. But externalities do not constitute medicine a profession. Physicians are not professionals because they are honored; rather, they are honored because of their profession. Their titles and the respect they are shown superficially signify and acknowledge something deeper, that physicians are persons of the professional sort, knowingly and freely devoting themselves to a way of life worthy of such devotion. Just as lawyers devote themselves to rectifying injustices, looking up to what is lawful and right; just as teachers devote themselves to the education of the young, looking up to truth and wisdom; so physicians heal the sick, looking up to health and wholesomeness. Being a professional is thus rooted in our moral nature and in that which warrants and impels making a public confession to a way of life. Professing oneself a professional is an ethical act because it is not a silent and private act, but an articulated and public one; because it promises continuing devotion to a way of life, not merely announces a present preference or a way to a livelihood; because it is an activity in service to some high good that insists on devotion; because it is difficult and demanding. A profession engages one's character and heart, not merely one's mind and hands.
199106_1-RC_4_24
[ "the author's belief that it is futile to resist the trend toward defining the physician's work as a trade", "the author's dislike of governmental regulation and consumer advocacy", "the author's inquiry into the nature of the practice of medicine", "the author's suggestions for rallying sensible people to a concentrated defense of physicians", "the author's fascination with the origins of words" ]
2
The question posed by the author in lines 7–10 of the passage introduces which one of the following?
Outside the medical profession, there are various efforts to cut medicine down to size: not only widespread malpractice litigation and massive governmental regulation, but also attempts by consumer groups and others to redefine medicine as a trade rather than as a profession, and the physician as merely a technician for hire under contract. Why should physicians (or indeed all sensible people) resist such efforts to give the practice of medicine a new meaning? We can gain some illumination from etymology. "Trade," from Germanic and Anglo-Saxon roots meaning "a course or pathway," has come to mean derivatively a habitual occupation and has been related to certain skills and crafts. On the other hand, while "profession" today also entails a habit of work, the word "profession" itself traces to an act of self-conscious and public—even confessional—speech. "To profess" preserves the meaning of its Latin source, "to declare publicly; to announce, affirm, avow." A profession is an activity or occupation to which its practitioner publicly professes, that is, confesses, devotion. But public announcement seems insufficient; publicly declaring devotion to plumbing or auto repair would not turn these trades into professions. Some believe that learning and knowledge are the diagnostic signs of a profession. For reasons probably linked to the medieval university, the term "profession" has been applied to the so-called learned professions—medicine, law, and theology—the practices of which are founded upon inquiry and knowledge rather than mere "know-how." Yet it is not only the pursuit and acquisition of knowledge that makes one a professional. The knowledge involved makes the profession one of the learned variety, but its professional quality is rooted in something else. Some mistakenly seek to locate that something else in the prestige and honor accorded professionals by society, evidenced in their special titles and the special deference and privileges they receive. But externalities do not constitute medicine a profession. Physicians are not professionals because they are honored; rather, they are honored because of their profession. Their titles and the respect they are shown superficially signify and acknowledge something deeper, that physicians are persons of the professional sort, knowingly and freely devoting themselves to a way of life worthy of such devotion. Just as lawyers devote themselves to rectifying injustices, looking up to what is lawful and right; just as teachers devote themselves to the education of the young, looking up to truth and wisdom; so physicians heal the sick, looking up to health and wholesomeness. Being a professional is thus rooted in our moral nature and in that which warrants and impels making a public confession to a way of life. Professing oneself a professional is an ethical act because it is not a silent and private act, but an articulated and public one; because it promises continuing devotion to a way of life, not merely announces a present preference or a way to a livelihood; because it is an activity in service to some high good that insists on devotion; because it is difficult and demanding. A profession engages one's character and heart, not merely one's mind and hands.
199106_1-RC_4_25
[ "how society generally treats physicians", "that the practice of medicine is analogous to teaching", "that being a professional is in part a public act", "the specific knowledge on which trades are based", "how a livelihood is different from a profession" ]
3
In the passage, the author mentions or suggests all of the following EXCEPT
Outside the medical profession, there are various efforts to cut medicine down to size: not only widespread malpractice litigation and massive governmental regulation, but also attempts by consumer groups and others to redefine medicine as a trade rather than as a profession, and the physician as merely a technician for hire under contract. Why should physicians (or indeed all sensible people) resist such efforts to give the practice of medicine a new meaning? We can gain some illumination from etymology. "Trade," from Germanic and Anglo-Saxon roots meaning "a course or pathway," has come to mean derivatively a habitual occupation and has been related to certain skills and crafts. On the other hand, while "profession" today also entails a habit of work, the word "profession" itself traces to an act of self-conscious and public—even confessional—speech. "To profess" preserves the meaning of its Latin source, "to declare publicly; to announce, affirm, avow." A profession is an activity or occupation to which its practitioner publicly professes, that is, confesses, devotion. But public announcement seems insufficient; publicly declaring devotion to plumbing or auto repair would not turn these trades into professions. Some believe that learning and knowledge are the diagnostic signs of a profession. For reasons probably linked to the medieval university, the term "profession" has been applied to the so-called learned professions—medicine, law, and theology—the practices of which are founded upon inquiry and knowledge rather than mere "know-how." Yet it is not only the pursuit and acquisition of knowledge that makes one a professional. The knowledge involved makes the profession one of the learned variety, but its professional quality is rooted in something else. Some mistakenly seek to locate that something else in the prestige and honor accorded professionals by society, evidenced in their special titles and the special deference and privileges they receive. But externalities do not constitute medicine a profession. Physicians are not professionals because they are honored; rather, they are honored because of their profession. Their titles and the respect they are shown superficially signify and acknowledge something deeper, that physicians are persons of the professional sort, knowingly and freely devoting themselves to a way of life worthy of such devotion. Just as lawyers devote themselves to rectifying injustices, looking up to what is lawful and right; just as teachers devote themselves to the education of the young, looking up to truth and wisdom; so physicians heal the sick, looking up to health and wholesomeness. Being a professional is thus rooted in our moral nature and in that which warrants and impels making a public confession to a way of life. Professing oneself a professional is an ethical act because it is not a silent and private act, but an articulated and public one; because it promises continuing devotion to a way of life, not merely announces a present preference or a way to a livelihood; because it is an activity in service to some high good that insists on devotion; because it is difficult and demanding. A profession engages one's character and heart, not merely one's mind and hands.
199106_1-RC_4_26
[ "eager that the work of one group of professionals, physicians, be viewed from a new perspective", "sympathetic toward professionals who have become demoralized by public opinion", "surprised that professionals have been balked by governmental regulations and threats of litigation", "dismayed that most professionals have come to be considered technicians", "certain that professionals confess a commitment to ethical ideals" ]
4
The author's attitude towards professionals is best described as
Outside the medical profession, there are various efforts to cut medicine down to size: not only widespread malpractice litigation and massive governmental regulation, but also attempts by consumer groups and others to redefine medicine as a trade rather than as a profession, and the physician as merely a technician for hire under contract. Why should physicians (or indeed all sensible people) resist such efforts to give the practice of medicine a new meaning? We can gain some illumination from etymology. "Trade," from Germanic and Anglo-Saxon roots meaning "a course or pathway," has come to mean derivatively a habitual occupation and has been related to certain skills and crafts. On the other hand, while "profession" today also entails a habit of work, the word "profession" itself traces to an act of self-conscious and public—even confessional—speech. "To profess" preserves the meaning of its Latin source, "to declare publicly; to announce, affirm, avow." A profession is an activity or occupation to which its practitioner publicly professes, that is, confesses, devotion. But public announcement seems insufficient; publicly declaring devotion to plumbing or auto repair would not turn these trades into professions. Some believe that learning and knowledge are the diagnostic signs of a profession. For reasons probably linked to the medieval university, the term "profession" has been applied to the so-called learned professions—medicine, law, and theology—the practices of which are founded upon inquiry and knowledge rather than mere "know-how." Yet it is not only the pursuit and acquisition of knowledge that makes one a professional. The knowledge involved makes the profession one of the learned variety, but its professional quality is rooted in something else. Some mistakenly seek to locate that something else in the prestige and honor accorded professionals by society, evidenced in their special titles and the special deference and privileges they receive. But externalities do not constitute medicine a profession. Physicians are not professionals because they are honored; rather, they are honored because of their profession. Their titles and the respect they are shown superficially signify and acknowledge something deeper, that physicians are persons of the professional sort, knowingly and freely devoting themselves to a way of life worthy of such devotion. Just as lawyers devote themselves to rectifying injustices, looking up to what is lawful and right; just as teachers devote themselves to the education of the young, looking up to truth and wisdom; so physicians heal the sick, looking up to health and wholesomeness. Being a professional is thus rooted in our moral nature and in that which warrants and impels making a public confession to a way of life. Professing oneself a professional is an ethical act because it is not a silent and private act, but an articulated and public one; because it promises continuing devotion to a way of life, not merely announces a present preference or a way to a livelihood; because it is an activity in service to some high good that insists on devotion; because it is difficult and demanding. A profession engages one's character and heart, not merely one's mind and hands.
199106_1-RC_4_27
[ "A skilled handicraft is a manual art acquired by habituation that enables tradespeople to tread regularly and reliably along the same path.", "Critics might argue that being a doctor, for example, requires no ethical or public act; thus medicine, as such, is morally neutral, does not bind character, and can be used for good or ill.", "Sometimes the pursuit of personal health competes with the pursuit of other goods, and it has always been the task of the community to order and define the competing ends.", "Not least among the myriad confusions and uncertainties of our time are those attending efforts to discern and articulate the essential characteristics of the medical profession.", "When, in contrast, we come to physicians of the whole body, we come tacitly acknowledging the meaning of illness and its potential threat to all that we hold dear." ]
1
Based on the information in the passage, it can be inferred that which one of the following would most logically begin a paragraph immediately following the passage?
Outside the medical profession, there are various efforts to cut medicine down to size: not only widespread malpractice litigation and massive governmental regulation, but also attempts by consumer groups and others to redefine medicine as a trade rather than as a profession, and the physician as merely a technician for hire under contract. Why should physicians (or indeed all sensible people) resist such efforts to give the practice of medicine a new meaning? We can gain some illumination from etymology. "Trade," from Germanic and Anglo-Saxon roots meaning "a course or pathway," has come to mean derivatively a habitual occupation and has been related to certain skills and crafts. On the other hand, while "profession" today also entails a habit of work, the word "profession" itself traces to an act of self-conscious and public—even confessional—speech. "To profess" preserves the meaning of its Latin source, "to declare publicly; to announce, affirm, avow." A profession is an activity or occupation to which its practitioner publicly professes, that is, confesses, devotion. But public announcement seems insufficient; publicly declaring devotion to plumbing or auto repair would not turn these trades into professions. Some believe that learning and knowledge are the diagnostic signs of a profession. For reasons probably linked to the medieval university, the term "profession" has been applied to the so-called learned professions—medicine, law, and theology—the practices of which are founded upon inquiry and knowledge rather than mere "know-how." Yet it is not only the pursuit and acquisition of knowledge that makes one a professional. The knowledge involved makes the profession one of the learned variety, but its professional quality is rooted in something else. Some mistakenly seek to locate that something else in the prestige and honor accorded professionals by society, evidenced in their special titles and the special deference and privileges they receive. But externalities do not constitute medicine a profession. Physicians are not professionals because they are honored; rather, they are honored because of their profession. Their titles and the respect they are shown superficially signify and acknowledge something deeper, that physicians are persons of the professional sort, knowingly and freely devoting themselves to a way of life worthy of such devotion. Just as lawyers devote themselves to rectifying injustices, looking up to what is lawful and right; just as teachers devote themselves to the education of the young, looking up to truth and wisdom; so physicians heal the sick, looking up to health and wholesomeness. Being a professional is thus rooted in our moral nature and in that which warrants and impels making a public confession to a way of life. Professing oneself a professional is an ethical act because it is not a silent and private act, but an articulated and public one; because it promises continuing devotion to a way of life, not merely announces a present preference or a way to a livelihood; because it is an activity in service to some high good that insists on devotion; because it is difficult and demanding. A profession engages one's character and heart, not merely one's mind and hands.
199106_1-RC_4_28
[ "The author locates the \"something else\" that truly constitutes a profession.", "The author dismisses efforts to redefine the meaning of the term \"profession.\"", "The author considers, and largely criticizes, several definitions of what constitutes a profession.", "The author clarifies the meaning of the term \"profession\" by advocating a return to its linguistic and historical roots.", "The author distinguishes trades such as plumbing and auto repair from professions such as medicine, law, and theology." ]
2
Which one of the following best describes the author's purpose in lines 18–42 of the passage?
There is substantial evidence that by 1926, with the publication of The Weary Blues, Langston Hughes had broken with two well-established traditions in African American literature. In The Weary Blues, Hughes chose to modify the traditions that decreed that African American literature must promote racial acceptance and integration, and that, in order to do so, it must reflect an understanding and mastery of Western European literary techniques and styles. Necessarily excluded by this decree, linguistically and thematically, was the vast amount of secular folk material in the oral tradition that had been created by Black people in the years of slavery and after. It might be pointed out that even the spirituals or "sorrow songs" of the slaves—as distinct from their secular songs and stories—had been Europeanized to make them acceptable within these African American traditions after the Civil War. In 1862 northern White writers had commented favorably on the unique and provocative melodies of these "sorrow songs" when they first heard them sung by slaves in the Carolina sea islands. But by 1916, ten years before the publication of The Weary Blues, Harry T. Burleigh, the Black baritone soloist at New York's ultrafashionable Saint George's Episcopal Church, had published Jubilee Songs of the United States, with every spiritual arranged so that a concert singer could sing it "in the manner of an art song." Clearly, the artistic work of Black people could be used to promote racial acceptance and integration only on the condition that it became Europeanized. Even more than his rebellion against this restrictive tradition in African American art, Hughes's expression of the vibrant folk culture of Black people established his writing as a landmark in the history of African American literature. Most of his folk poems have the distinctive marks of this folk culture's oral tradition: they contain many instances of naming and enumeration, considerable hyperbole and understatement, and a strong infusion of street-talk rhyming. There is a deceptive veil of artlessness in these poems. Hughes prided himself on being an impromptu and impressionistic writer of poetry. His, he insisted, was not an artfully constructed poetry. Yet an analysis of his dramatic monologues and other poems reveals that his poetry was carefully and artfully crafted. In his folk poetry we find features common to all folk literature, such as dramatic ellipsis, narrative compression, rhythmic repetition, and monosyllabic emphasis. The peculiar mixture of irony and humor we find in his writing is a distinguishing feature of his folk poetry. Together, these aspects of Hughes's writing helped to modify the previous restrictions on the techniques and subject matter of Black writers and consequently to broaden the linguistic and thematic range of African American literature.
199110_1-RC_1_1
[ "his exploitation of ambiguous and deceptive meanings", "his care and craft in composing poems", "his use of naming and enumeration", "his use of first-person narrative", "his strong religious beliefs" ]
2
The author mentions which one of the following as an example of the influence of Black folk culture on Hughes's poetry?
There is substantial evidence that by 1926, with the publication of The Weary Blues, Langston Hughes had broken with two well-established traditions in African American literature. In The Weary Blues, Hughes chose to modify the traditions that decreed that African American literature must promote racial acceptance and integration, and that, in order to do so, it must reflect an understanding and mastery of Western European literary techniques and styles. Necessarily excluded by this decree, linguistically and thematically, was the vast amount of secular folk material in the oral tradition that had been created by Black people in the years of slavery and after. It might be pointed out that even the spirituals or "sorrow songs" of the slaves—as distinct from their secular songs and stories—had been Europeanized to make them acceptable within these African American traditions after the Civil War. In 1862 northern White writers had commented favorably on the unique and provocative melodies of these "sorrow songs" when they first heard them sung by slaves in the Carolina sea islands. But by 1916, ten years before the publication of The Weary Blues, Harry T. Burleigh, the Black baritone soloist at New York's ultrafashionable Saint George's Episcopal Church, had published Jubilee Songs of the United States, with every spiritual arranged so that a concert singer could sing it "in the manner of an art song." Clearly, the artistic work of Black people could be used to promote racial acceptance and integration only on the condition that it became Europeanized. Even more than his rebellion against this restrictive tradition in African American art, Hughes's expression of the vibrant folk culture of Black people established his writing as a landmark in the history of African American literature. Most of his folk poems have the distinctive marks of this folk culture's oral tradition: they contain many instances of naming and enumeration, considerable hyperbole and understatement, and a strong infusion of street-talk rhyming. There is a deceptive veil of artlessness in these poems. Hughes prided himself on being an impromptu and impressionistic writer of poetry. His, he insisted, was not an artfully constructed poetry. Yet an analysis of his dramatic monologues and other poems reveals that his poetry was carefully and artfully crafted. In his folk poetry we find features common to all folk literature, such as dramatic ellipsis, narrative compression, rhythmic repetition, and monosyllabic emphasis. The peculiar mixture of irony and humor we find in his writing is a distinguishing feature of his folk poetry. Together, these aspects of Hughes's writing helped to modify the previous restrictions on the techniques and subject matter of Black writers and consequently to broaden the linguistic and thematic range of African American literature.
199110_1-RC_1_2
[ "evidence of his use of oral techniques in his poetry", "evidence of his thoughtful deliberation in composing his poems", "his scrupulous concern for representative details in his poetry", "his incorporation of Western European literary techniques in his poetry", "his engagement with social and political issues rather than aesthetic ones" ]
1
The author suggests that the "deceptive veil" (line 42) in Hughes's poetry obscures
There is substantial evidence that by 1926, with the publication of The Weary Blues, Langston Hughes had broken with two well-established traditions in African American literature. In The Weary Blues, Hughes chose to modify the traditions that decreed that African American literature must promote racial acceptance and integration, and that, in order to do so, it must reflect an understanding and mastery of Western European literary techniques and styles. Necessarily excluded by this decree, linguistically and thematically, was the vast amount of secular folk material in the oral tradition that had been created by Black people in the years of slavery and after. It might be pointed out that even the spirituals or "sorrow songs" of the slaves—as distinct from their secular songs and stories—had been Europeanized to make them acceptable within these African American traditions after the Civil War. In 1862 northern White writers had commented favorably on the unique and provocative melodies of these "sorrow songs" when they first heard them sung by slaves in the Carolina sea islands. But by 1916, ten years before the publication of The Weary Blues, Harry T. Burleigh, the Black baritone soloist at New York's ultrafashionable Saint George's Episcopal Church, had published Jubilee Songs of the United States, with every spiritual arranged so that a concert singer could sing it "in the manner of an art song." Clearly, the artistic work of Black people could be used to promote racial acceptance and integration only on the condition that it became Europeanized. Even more than his rebellion against this restrictive tradition in African American art, Hughes's expression of the vibrant folk culture of Black people established his writing as a landmark in the history of African American literature. Most of his folk poems have the distinctive marks of this folk culture's oral tradition: they contain many instances of naming and enumeration, considerable hyperbole and understatement, and a strong infusion of street-talk rhyming. There is a deceptive veil of artlessness in these poems. Hughes prided himself on being an impromptu and impressionistic writer of poetry. His, he insisted, was not an artfully constructed poetry. Yet an analysis of his dramatic monologues and other poems reveals that his poetry was carefully and artfully crafted. In his folk poetry we find features common to all folk literature, such as dramatic ellipsis, narrative compression, rhythmic repetition, and monosyllabic emphasis. The peculiar mixture of irony and humor we find in his writing is a distinguishing feature of his folk poetry. Together, these aspects of Hughes's writing helped to modify the previous restrictions on the techniques and subject matter of Black writers and consequently to broaden the linguistic and thematic range of African American literature.
199110_1-RC_1_3
[ "Its publication marked an advance in the intrinsic quality of African American art.", "It paved the way for publication of Hughes's The Weary Blues by making African American art fashionable.", "It was an authentic replication of African American spirituals and \"sorrow songs.\"", "It demonstrated the extent to which spirituals were adapted in order to make them more broadly accepted.", "It was to the spiritual what Hughes's The Weary Blues was to secular songs and stories." ]
3
With which one of the following statements regarding Jubilee Songs of the United States would the author be most likely to agree?
There is substantial evidence that by 1926, with the publication of The Weary Blues, Langston Hughes had broken with two well-established traditions in African American literature. In The Weary Blues, Hughes chose to modify the traditions that decreed that African American literature must promote racial acceptance and integration, and that, in order to do so, it must reflect an understanding and mastery of Western European literary techniques and styles. Necessarily excluded by this decree, linguistically and thematically, was the vast amount of secular folk material in the oral tradition that had been created by Black people in the years of slavery and after. It might be pointed out that even the spirituals or "sorrow songs" of the slaves—as distinct from their secular songs and stories—had been Europeanized to make them acceptable within these African American traditions after the Civil War. In 1862 northern White writers had commented favorably on the unique and provocative melodies of these "sorrow songs" when they first heard them sung by slaves in the Carolina sea islands. But by 1916, ten years before the publication of The Weary Blues, Harry T. Burleigh, the Black baritone soloist at New York's ultrafashionable Saint George's Episcopal Church, had published Jubilee Songs of the United States, with every spiritual arranged so that a concert singer could sing it "in the manner of an art song." Clearly, the artistic work of Black people could be used to promote racial acceptance and integration only on the condition that it became Europeanized. Even more than his rebellion against this restrictive tradition in African American art, Hughes's expression of the vibrant folk culture of Black people established his writing as a landmark in the history of African American literature. Most of his folk poems have the distinctive marks of this folk culture's oral tradition: they contain many instances of naming and enumeration, considerable hyperbole and understatement, and a strong infusion of street-talk rhyming. There is a deceptive veil of artlessness in these poems. Hughes prided himself on being an impromptu and impressionistic writer of poetry. His, he insisted, was not an artfully constructed poetry. Yet an analysis of his dramatic monologues and other poems reveals that his poetry was carefully and artfully crafted. In his folk poetry we find features common to all folk literature, such as dramatic ellipsis, narrative compression, rhythmic repetition, and monosyllabic emphasis. The peculiar mixture of irony and humor we find in his writing is a distinguishing feature of his folk poetry. Together, these aspects of Hughes's writing helped to modify the previous restrictions on the techniques and subject matter of Black writers and consequently to broaden the linguistic and thematic range of African American literature.
199110_1-RC_1_4
[ "indicate that modes of expression acceptable in the context of slavery in the South were acceptable only to a small number of White writers in the North after the Civil War", "contrast White writers' earlier appreciation of these songs with the growing tendency after the Civil War to regard Europeanized versions of the songs as more acceptable", "show that the requirement that such songs be Europeanized was internal to the African American tradition and was unrelated to the literary standards or attitudes of White writers", "demonstrate that such songs in their non-Europeanized form were more imaginative than Europeanized versions of the same songs", "suggest that White writers benefited more from exposure to African American art forms than Black writers did from exposure to European art forms" ]
1
The author most probably mentions the reactions of northern White writers to non-Europeanized "sorrow songs" in order to
There is substantial evidence that by 1926, with the publication of The Weary Blues, Langston Hughes had broken with two well-established traditions in African American literature. In The Weary Blues, Hughes chose to modify the traditions that decreed that African American literature must promote racial acceptance and integration, and that, in order to do so, it must reflect an understanding and mastery of Western European literary techniques and styles. Necessarily excluded by this decree, linguistically and thematically, was the vast amount of secular folk material in the oral tradition that had been created by Black people in the years of slavery and after. It might be pointed out that even the spirituals or "sorrow songs" of the slaves—as distinct from their secular songs and stories—had been Europeanized to make them acceptable within these African American traditions after the Civil War. In 1862 northern White writers had commented favorably on the unique and provocative melodies of these "sorrow songs" when they first heard them sung by slaves in the Carolina sea islands. But by 1916, ten years before the publication of The Weary Blues, Harry T. Burleigh, the Black baritone soloist at New York's ultrafashionable Saint George's Episcopal Church, had published Jubilee Songs of the United States, with every spiritual arranged so that a concert singer could sing it "in the manner of an art song." Clearly, the artistic work of Black people could be used to promote racial acceptance and integration only on the condition that it became Europeanized. Even more than his rebellion against this restrictive tradition in African American art, Hughes's expression of the vibrant folk culture of Black people established his writing as a landmark in the history of African American literature. Most of his folk poems have the distinctive marks of this folk culture's oral tradition: they contain many instances of naming and enumeration, considerable hyperbole and understatement, and a strong infusion of street-talk rhyming. There is a deceptive veil of artlessness in these poems. Hughes prided himself on being an impromptu and impressionistic writer of poetry. His, he insisted, was not an artfully constructed poetry. Yet an analysis of his dramatic monologues and other poems reveals that his poetry was carefully and artfully crafted. In his folk poetry we find features common to all folk literature, such as dramatic ellipsis, narrative compression, rhythmic repetition, and monosyllabic emphasis. The peculiar mixture of irony and humor we find in his writing is a distinguishing feature of his folk poetry. Together, these aspects of Hughes's writing helped to modify the previous restrictions on the techniques and subject matter of Black writers and consequently to broaden the linguistic and thematic range of African American literature.
199110_1-RC_1_5
[ "The requirement was imposed more for social than for aesthetic reasons.", "The requirement was a relatively unimportant aspect of the African American tradition.", "The requirement was the chief reason for Hughes's success as a writer.", "The requirement was appropriate for some forms of expression but not for others.", "The requirement was never as strong as it may have appeared to be." ]
0
The passage suggests that the author would be most likely to agree with which one of the following statements about the requirement that Black writers employ Western European literary techniques?
199110_1-RC_1_6
[ "its novelty compared to other works of African American literature", "its subtle understatement compared to that of other kinds of folk literature", "its virtuosity in adapting musical forms to language", "its expression of the folk culture of Black people", "its universality of appeal achieved through the adoption of colloquial expressions" ]
3
Which one of the following aspects of Hughes's poetry does the author appear to value most highly?
Historians generally agree that, of the great modern innovations, the railroad had the most far-reaching impact on major events in the United States in the nineteenth and early twentieth centuries, particularly on the Industrial Revolution. There is, however, considerable disagreement among cultural historians regarding public attitudes toward the railroad, both at its inception in the 1830s and during the half century between 1880 and 1930, when the national rail system was completed and reached the zenith of its popularity in the United States. In a recent book, John Stilgoe has addressed this issue by arguing that the "romantic-era distrust" of the railroad that he claims was present during the 1830s vanished in the decades after 1880. But the argument he provides in support of this position is unconvincing. What Stilgoe calls "romantic-era distrust" was in fact the reaction of a minority of writers, artists, and intellectuals who distrusted the railroad not so much for what it was as for what it signified. Thoreau and Hawthorne appreciated, even admired, an improved means of moving things and people from one place to another. What these writers and others were concerned about was not the new machinery as such, but the new kind of economy, social order, and culture that it prefigured. In addition, Stilgoe is wrong to imply that the critical attitude of these writers was typical of the period; their distrust was largely a reaction against the prevailing attitude in the 1830s that the railroad was an unqualified improvement. Stilgoe's assertion that the ambivalence toward the railroad exhibited by writers like Hawthorne and Thoreau disappeared after the 1880s is also misleading. In support of this thesis, Stilgoe has unearthed an impressive volume of material, the work of hitherto unknown illustrators, journalists, and novelists, all devotees of the railroad; but it is not clear what this new material proves except perhaps that the works of popular culture greatly expanded at the time. The volume of the material proves nothing if Stilgoe's point is that the earlier distrust of a minority of intellectuals did not endure beyond the 1880s, and, oddly, much of Stilgoe's other evidence indicates that it did. When he glances at the treatment of railroads by writers like Henry James, Sinclair Lewis, or F. Scott Fitzgerald, what comes through in spite of Stilgoe's analysis is remarkably like Thoreau's feeling of contrariety and ambivalence. (Had he looked at the work of Frank Norris, Eugene O'Neill, or Henry Adams, Stilgoe's case would have been much stronger.) The point is that the sharp contrast between the enthusiastic supporters of the railroad in the 1830s and the minority of intellectual dissenters during that period extended into the 1880s and beyond.
199110_1-RC_2_7
[ "During what period did the railroad reach the zenith of its popularity in the United States?", "How extensive was the impact of the railroad on the Industrial Revolution in the United States, relative to that of other modern innovations?", "Who are some of the writers of the 1830s who expressed ambivalence toward the railroad?", "In what way could Stilgoe have strengthened his argument regarding intellectuals' attitudes toward the railroad in the years after the 1880s?", "What arguments did the writers after the 1880s, as cited by Stilgoe, offer to justify their support for the railroad?" ]
4
The passage provides information to answer all of the following questions EXCEPT:
199110_1-RC_2_8
[ "the attitude of a minority of intellectuals toward technological innovation that began after 1830", "a commonly held attitude toward the railroad during the 1830s", "an ambivalent view of the railroad expressed by many poets and novelists between 1880 and 1930", "a critique of social and economic developments during the 1830s by a minority of intellectuals", "an attitude toward the railroad that was disseminated by works of popular culture after 1880" ]
1
According to the author of the passage, Stilgoe uses the phrase "romantic-era distrust" (line 13) to imply that the view he is referring to was
199110_1-RC_2_9
[ "influenced by the writings of Frank Norris, Eugene O'Neill, and Henry Adams", "similar to that of the minority of writers who had expressed ambivalence toward the railroad prior to the 1880s", "consistent with the public attitudes toward the railroad that were reflected in works of popular culture after the 1880s", "largely a reaction to the works of writers who had been severely critical of the railroad in the 1830s", "consistent with the prevailing attitude toward the railroad during the 1830s" ]
1
According to the author, the attitude toward the railroad that was reflected in writings of Henry James, Sinclair Lewis, and F. Scott Fitzgerald was
199110_1-RC_2_10
[ "work of a large group of writers that was published between 1880 and 1930 and that in Stilgoe's view was highly critical of the railroad", "work of writers who were heavily influenced by Hawthorne and Thoreau", "large volume of writing produced by Henry Adams, Sinclair Lewis, and Eugene O'Neill", "work of journalists, novelists, and illustrators who were responsible for creating enthusiasm for the railroad during the 1830s", "work of journalists, novelists, and illustrators that was published after 1880 and that has received little attention from scholars other than Stilgoe" ]
4
It can be inferred from the passage that the author uses the phrase "works of popular culture" (line 41) primarily to refer to the
199110_1-RC_2_11
[ "Their work never achieved broad popular appeal.", "Their ideas were disseminated to a large audience by the popular culture of the early 1800s.", "Their work expressed a more positive attitude toward the railroad than did that of Henry James, Sinclair Lewis, and F. Scott Fitzgerald.", "Although they were primarily novelists, some of their work could be classified as journalism.", "Although they were influenced by Thoreau, their attitude toward the railroad was significantly different from his." ]
2
Which one of the following can be inferred from the passage regarding the work of Frank Norris, Eugene O'Neill, and Henry Adams?
199110_1-RC_2_12
[ "It is impossible to know exactly what period historians are referring to when they use the term \"romantic era.\"", "The writing of intellectuals often anticipates ideas and movements that are later embraced by popular culture.", "Writers who were not popular in their own time tell us little about the age in which they lived.", "The works of popular culture can serve as a reliable indicator of public attitudes toward modern innovations like the railroad.", "The best source of information concerning the impact of an event as large as the Industrial Revolution is the private letters and journals of individuals." ]
3
It can be inferred from the passage that Stilgoe would be most likely to agree with which one of the following statements regarding the study of cultural history?
199110_1-RC_2_13
[ "evaluate one scholar's view of public attitudes toward the railroad in the United States from the early nineteenth to the early twentieth century", "review the treatment of the railroad in American literature of the nineteenth and twentieth centuries", "survey the views of cultural historians regarding the railroad's impact on major events in United States history", "explore the origins of the public support for the railroad that existed after the completion of a national rail system in the United States", "define what historians mean when they refer to the \"romantic-era distrust\" of the railroad" ]
0
The primary purpose of the passage is to
Three basic adaptive responses—regulatory, acclimatory, and developmental—may occur in organisms as they react to changing environmental conditions. In all three, adjustment of biological features (morphological adjustment) or of their use (functional adjustment) may occur. Regulatory responses involve rapid changes in the organism's use of its physiological apparatus—increasing or decreasing the rates of various processes, for example. Acclimation involves morphological change—thickening of fur or red blood cell proliferation—which alters physiology itself. Such structural changes require more time than regulatory response changes. Regulatory and acclimatory responses are both reversible. Developmental responses, however, are usually permanent and irreversible; they become fixed in the course of the individual's development in response to environmental conditions at the time the response occurs. One such response occurs in many kinds of water bugs. Most water-bug species inhabiting small lakes and ponds have two generations per year. The first hatches during the spring, reproduces during the summer, then dies. The eggs laid in the summer hatch and develop into adults in late summer. They live over the winter before breeding in early spring. Individuals in the second (overwintering) generation have fully developed wings and leave the water in autumn to overwinter in forests, returning in spring to small bodies of water to lay eggs. Their wings are absolutely necessary for this seasonal dispersal. The summer (early) generation, in contrast, is usually dimorphic—some individuals have normal functional (macropterous) wings; others have much-reduced (micropterous) wings of no use for flight. The summer generation's dimorphism is a compromise strategy, for these individuals usually do not leave the ponds and thus generally have no use for fully developed wings. But small ponds occasionally dry up during the summer, forcing the water bugs to search for new habitats, an eventuality that macropterous individuals are well adapted to meet. The dimorphism of micropterous and macropterous individuals in the summer generation expresses developmental flexibility; it is not genetically determined. The individual's wing form is environmentally determined by the temperature to which developing eggs are exposed prior to their being laid. Eggs maintained in a warm environment always produce bugs with normal wings, but exposure to cold produces micropterous individuals. Eggs producing the overwintering brood are all formed during the late summer's warm temperatures. Hence, all individuals in the overwintering brood have normal wings. Eggs laid by the overwintering adults in the spring, which develop into the summer generation of adults, are formed in early autumn and early spring. Those eggs formed in autumn are exposed to cold winter temperatures, and thus produce micropterous adults in the summer generation. Those formed during the spring are never exposed to cold temperatures, and thus yield individuals with normal wings. Adult water bugs of the overwintering generation, brought into the laboratory during the cold months and kept warm, produce only macropterous offspring.
199110_1-RC_3_14
[ "illustrate an organism's functional adaptive response to changing environmental conditions", "prove that organisms can exhibit three basic adaptive responses to changing environmental conditions", "explain the differences in form and function between micropterous and macropterous water bugs and analyze the effect of environmental changes on each", "discuss three different types of adaptive responses and provide an example that explains how one of those types of responses works", "contrast acclimatory responses with developmental responses and suggest an explanation for the evolutionary purposes of these two responses to changing environmental conditions" ]
3
The primary purpose of the passage is to
199110_1-RC_3_15
[ "The number of developmental responses among the water-bug population would decrease.", "Both micropterous and macropterous water bugs would show an acclimatory response.", "The generation of water bugs to be hatched during the subsequent spring would contain an unusually large number of macropterous individuals.", "The dimorphism of the summer generation would enable some individuals to survive.", "The dimorphism of the summer generation would be genetically transferred to the next spring generation." ]
3
The passage supplies information to suggest that which one of the following would happen if a pond inhabited by water bugs were to dry up in June?
199110_1-RC_3_16
[ "eggs formed by water bugs in the autumn would probably produce a higher than usual proportion of macropterous individuals", "eggs formed by water bugs in the autumn would probably produce an entire summer generation of water bugs with smaller than normal wings", "eggs of the overwintering generation formed in the autumn would not be affected by this temperature change", "overwintering generation would not leave the ponds for the forest during the winter", "overwintering generation of water bugs would most likely form fewer eggs in the autumn and more in the spring" ]
0
It can be inferred from the passage that if the winter months of a particular year were unusually warm, the
199110_1-RC_3_17
[ "the overwintering generation forms two sets of eggs, one exposed to the colder temperatures of winter and one exposed only to the warmer temperatures of spring", "the eggs that produce micropterous and macropterous adults are morphologically different", "water bugs respond to seasonal changes by making an acclimatory functional adjustment in the wings", "water bugs hatching in the spring live out their life spans in ponds and never need to fly", "the overwintering generation, which produces eggs developing into the dimorphic generation, spends the winter in the forest and the spring in small ponds" ]
0
According to the passage, the dimorphic wing structure of the summer generation of water bugs occurs because
199110_1-RC_3_18
[ "thickening of the plumage of some birds in the autumn", "increase in pulse rate during vigorous exercise", "gradual darkening of the skin after exposure to sunlight", "gradual enlargement of muscles as a result of weight lifting", "development of a heavy fat layer in bears before hibernation" ]
1
It can be inferred from the passage that which one of the following is an example of a regulatory response?
199110_1-RC_3_19
[ "be made up of equal numbers of macropterous and micropterous individuals", "lay its eggs during the winter in order to expose them to cold", "show a marked inability to fly from one pond to another", "exhibit genetically determined differences in wing form from the early spring-hatched generation", "contain a much greater proportion of macropterous water bugs than the early spring-hatched generation" ]
4
According to the passage, the generation of water bugs hatching during the summer is likely to
199110_1-RC_3_20
[ "the function of the summer generation's dimorphism", "the irreversibility of most developmental adaptive responses in water bugs", "the effect of temperature on developing water-bug eggs", "the morphological difference between the summer generation and the overwintering generation of water bugs", "the functional adjustment of water bugs in response to seasonal temperature variation" ]
2
The author mentions laboratory experiments with adult water bugs (lines 63–66) in order to illustrate which one of the following?
199110_1-RC_3_21
[ "Biological phenomena are presented, examples of their occurrence are compared and contrasted, and one particular example is illustrated in detail.", "A description of related biological phenomena is stated, and two of those phenomena are explained in detail with illustrated examples.", "Three related biological phenomena are described, a hypothesis explaining their relationship is presented, and supporting evidence is produced.", "Three complementary biological phenomena are explained, their causes are examined, and one of them is described by contrasting its causes with the other two.", "A new way of describing biological phenomena is suggested, its applications are presented, and one specific example is examined in detail." ]
0
Which one of the following best describes the organization of the passage?
The Constitution of the United States does not explicitly define the extent of the President's authority to involve United States troops in conflicts with other nations in the absence of a declaration of war. Instead, the question of the President's authority in this matter falls in the hazy area of concurrent power, where authority is not expressly allocated to either the President or the Congress. The Constitution gives Congress the basic power to declare war, as well as the authority to raise and support armies and a navy, enact regulations for the control of the military, and provide for the common defense. The President, on the other hand, in addition to being obligated to execute the laws of the land, including commitments negotiated by defense treaties, is named commander in chief of the armed forces and is empowered to appoint envoys and make treaties with the consent of the Senate. Although this allocation of powers does not expressly address the use of armed forces short of a declared war, the spirit of the Constitution at least requires that Congress should be involved in the decision to deploy troops, and in passing the War Powers Resolution of 1973, Congress has at last reclaimed a role in such decisions. Historically, United States Presidents have not waited for the approval of Congress before involving United States troops in conflicts in which a state of war was not declared. One scholar has identified 199 military engagements that occurred without the consent of Congress, ranging from Jefferson's conflict with the Barbary pirates to Nixon's invasion of Cambodia during the Vietnam conflict, which President Nixon argued was justified because his role as commander in chief allowed him almost unlimited discretion over the deployment of troops. However, the Vietnam conflict, never a declared war, represented a turning point in Congress's tolerance of presidential discretion in the deployment of troops in undeclared wars. Galvanized by the human and monetary cost of those hostilities and showing a new determination to fulfill its proper role, Congress enacted the War Powers Resolution of 1973, a statute designed to ensure that the collective judgment of both Congress and the President would be applied to the involvement of United States troops in foreign conflicts. The resolution required the President, in the absence of a declaration of war, to consult with Congress "in every possible instance" before introducing forces and to report to Congress within 48 hours after the forces have actually been deployed. Most important, the resolution allows Congress to veto the involvement once it begins, and requires the President, in most cases, to end the involvement within 60 days unless Congress specifically authorizes the military operation to continue. In its final section, by declaring that the resolution is not intended to alter the constitutional authority of either Congress or the President, the resolution asserts that congressional involvement in decisions to use armed force is in accord with the intent and spirit of the Constitution.
199110_1-RC_4_22
[ "showing how the Vietnam conflict led to a new interpretation of the Constitution's provisions for use of the military", "arguing that the War Powers Resolution of 1973 is an attempt to reclaim a share of constitutionally concurrent power that had been usurped by the President", "outlining the history of the struggle between the President and Congress for control of the military", "providing examples of conflicts inherent in the Constitution's approach to a balance of powers", "explaining how the War Powers Resolution of 1973 alters the Constitution to eliminate an overlap of authority" ]
1
In the passage, the author is primarily concerned with
The Constitution of the United States does not explicitly define the extent of the President's authority to involve United States troops in conflicts with other nations in the absence of a declaration of war. Instead, the question of the President's authority in this matter falls in the hazy area of concurrent power, where authority is not expressly allocated to either the President or the Congress. The Constitution gives Congress the basic power to declare war, as well as the authority to raise and support armies and a navy, enact regulations for the control of the military, and provide for the common defense. The President, on the other hand, in addition to being obligated to execute the laws of the land, including commitments negotiated by defense treaties, is named commander in chief of the armed forces and is empowered to appoint envoys and make treaties with the consent of the Senate. Although this allocation of powers does not expressly address the use of armed forces short of a declared war, the spirit of the Constitution at least requires that Congress should be involved in the decision to deploy troops, and in passing the War Powers Resolution of 1973, Congress has at last reclaimed a role in such decisions. Historically, United States Presidents have not waited for the approval of Congress before involving United States troops in conflicts in which a state of war was not declared. One scholar has identified 199 military engagements that occurred without the consent of Congress, ranging from Jefferson's conflict with the Barbary pirates to Nixon's invasion of Cambodia during the Vietnam conflict, which President Nixon argued was justified because his role as commander in chief allowed him almost unlimited discretion over the deployment of troops. However, the Vietnam conflict, never a declared war, represented a turning point in Congress's tolerance of presidential discretion in the deployment of troops in undeclared wars. Galvanized by the human and monetary cost of those hostilities and showing a new determination to fulfill its proper role, Congress enacted the War Powers Resolution of 1973, a statute designed to ensure that the collective judgment of both Congress and the President would be applied to the involvement of United States troops in foreign conflicts. The resolution required the President, in the absence of a declaration of war, to consult with Congress "in every possible instance" before introducing forces and to report to Congress within 48 hours after the forces have actually been deployed. Most important, the resolution allows Congress to veto the involvement once it begins, and requires the President, in most cases, to end the involvement within 60 days unless Congress specifically authorizes the military operation to continue. In its final section, by declaring that the resolution is not intended to alter the constitutional authority of either Congress or the President, the resolution asserts that congressional involvement in decisions to use armed force is in accord with the intent and spirit of the Constitution.
199110_1-RC_4_23
[ "assumes that the President and Congress will agree on whether troops should be used", "provides a clear-cut division of authority between the President and Congress in the decision to use troops", "assigns a greater role to the Congress than to the President in deciding whether troops should be used", "grants final authority to the President to decide whether to use troops", "intends that both the President and Congress should be involved in the decision to use troops" ]
4
With regard to the use of United States troops in a foreign conflict without a formal declaration of war by the United States, the author believes that the United States Constitution does which one of the following?
The Constitution of the United States does not explicitly define the extent of the President's authority to involve United States troops in conflicts with other nations in the absence of a declaration of war. Instead, the question of the President's authority in this matter falls in the hazy area of concurrent power, where authority is not expressly allocated to either the President or the Congress. The Constitution gives Congress the basic power to declare war, as well as the authority to raise and support armies and a navy, enact regulations for the control of the military, and provide for the common defense. The President, on the other hand, in addition to being obligated to execute the laws of the land, including commitments negotiated by defense treaties, is named commander in chief of the armed forces and is empowered to appoint envoys and make treaties with the consent of the Senate. Although this allocation of powers does not expressly address the use of armed forces short of a declared war, the spirit of the Constitution at least requires that Congress should be involved in the decision to deploy troops, and in passing the War Powers Resolution of 1973, Congress has at last reclaimed a role in such decisions. Historically, United States Presidents have not waited for the approval of Congress before involving United States troops in conflicts in which a state of war was not declared. One scholar has identified 199 military engagements that occurred without the consent of Congress, ranging from Jefferson's conflict with the Barbary pirates to Nixon's invasion of Cambodia during the Vietnam conflict, which President Nixon argued was justified because his role as commander in chief allowed him almost unlimited discretion over the deployment of troops. However, the Vietnam conflict, never a declared war, represented a turning point in Congress's tolerance of presidential discretion in the deployment of troops in undeclared wars. Galvanized by the human and monetary cost of those hostilities and showing a new determination to fulfill its proper role, Congress enacted the War Powers Resolution of 1973, a statute designed to ensure that the collective judgment of both Congress and the President would be applied to the involvement of United States troops in foreign conflicts. The resolution required the President, in the absence of a declaration of war, to consult with Congress "in every possible instance" before introducing forces and to report to Congress within 48 hours after the forces have actually been deployed. Most important, the resolution allows Congress to veto the involvement once it begins, and requires the President, in most cases, to end the involvement within 60 days unless Congress specifically authorizes the military operation to continue. In its final section, by declaring that the resolution is not intended to alter the constitutional authority of either Congress or the President, the resolution asserts that congressional involvement in decisions to use armed force is in accord with the intent and spirit of the Constitution.
199110_1-RC_4_24
[ "a change in the attitude in Congress toward exercising its role in the use of armed forces", "the failure of Presidents to uphold commitments specified in defense treaties", "Congress's desire to be consulted concerning United States military actions instigated by the President", "the amount of money spent on recent conflicts waged without a declaration of war", "the number of lives lost in Vietnam" ]
1
The passage suggests that each of the following contributed to Congress's enacting the War Powers Resolution of 1973 EXCEPT
The Constitution of the United States does not explicitly define the extent of the President's authority to involve United States troops in conflicts with other nations in the absence of a declaration of war. Instead, the question of the President's authority in this matter falls in the hazy area of concurrent power, where authority is not expressly allocated to either the President or the Congress. The Constitution gives Congress the basic power to declare war, as well as the authority to raise and support armies and a navy, enact regulations for the control of the military, and provide for the common defense. The President, on the other hand, in addition to being obligated to execute the laws of the land, including commitments negotiated by defense treaties, is named commander in chief of the armed forces and is empowered to appoint envoys and make treaties with the consent of the Senate. Although this allocation of powers does not expressly address the use of armed forces short of a declared war, the spirit of the Constitution at least requires that Congress should be involved in the decision to deploy troops, and in passing the War Powers Resolution of 1973, Congress has at last reclaimed a role in such decisions. Historically, United States Presidents have not waited for the approval of Congress before involving United States troops in conflicts in which a state of war was not declared. One scholar has identified 199 military engagements that occurred without the consent of Congress, ranging from Jefferson's conflict with the Barbary pirates to Nixon's invasion of Cambodia during the Vietnam conflict, which President Nixon argued was justified because his role as commander in chief allowed him almost unlimited discretion over the deployment of troops. However, the Vietnam conflict, never a declared war, represented a turning point in Congress's tolerance of presidential discretion in the deployment of troops in undeclared wars. Galvanized by the human and monetary cost of those hostilities and showing a new determination to fulfill its proper role, Congress enacted the War Powers Resolution of 1973, a statute designed to ensure that the collective judgment of both Congress and the President would be applied to the involvement of United States troops in foreign conflicts. The resolution required the President, in the absence of a declaration of war, to consult with Congress "in every possible instance" before introducing forces and to report to Congress within 48 hours after the forces have actually been deployed. Most important, the resolution allows Congress to veto the involvement once it begins, and requires the President, in most cases, to end the involvement within 60 days unless Congress specifically authorizes the military operation to continue. In its final section, by declaring that the resolution is not intended to alter the constitutional authority of either Congress or the President, the resolution asserts that congressional involvement in decisions to use armed force is in accord with the intent and spirit of the Constitution.
199110_1-RC_4_25
[ "Congress has enacted other laws that already set out presidential requirements for situations in which war has been declared", "by virtue of declaring war, Congress already implicitly participates in the decision to deploy troops", "the President generally receives broad public support during wars that have been formally declared by Congress", "Congress felt that the President should be allowed unlimited discretion in cases in which war has been declared", "the United States Constitution already explicitly defines the reporting and consulting requirements of the President in cases in which war has been declared" ]
1
It can be inferred from the passage that the War Powers Resolution of 1973 is applicable only in "the absence of a declaration of war" (lines 48–49) because
The Constitution of the United States does not explicitly define the extent of the President's authority to involve United States troops in conflicts with other nations in the absence of a declaration of war. Instead, the question of the President's authority in this matter falls in the hazy area of concurrent power, where authority is not expressly allocated to either the President or the Congress. The Constitution gives Congress the basic power to declare war, as well as the authority to raise and support armies and a navy, enact regulations for the control of the military, and provide for the common defense. The President, on the other hand, in addition to being obligated to execute the laws of the land, including commitments negotiated by defense treaties, is named commander in chief of the armed forces and is empowered to appoint envoys and make treaties with the consent of the Senate. Although this allocation of powers does not expressly address the use of armed forces short of a declared war, the spirit of the Constitution at least requires that Congress should be involved in the decision to deploy troops, and in passing the War Powers Resolution of 1973, Congress has at last reclaimed a role in such decisions. Historically, United States Presidents have not waited for the approval of Congress before involving United States troops in conflicts in which a state of war was not declared. One scholar has identified 199 military engagements that occurred without the consent of Congress, ranging from Jefferson's conflict with the Barbary pirates to Nixon's invasion of Cambodia during the Vietnam conflict, which President Nixon argued was justified because his role as commander in chief allowed him almost unlimited discretion over the deployment of troops. However, the Vietnam conflict, never a declared war, represented a turning point in Congress's tolerance of presidential discretion in the deployment of troops in undeclared wars. Galvanized by the human and monetary cost of those hostilities and showing a new determination to fulfill its proper role, Congress enacted the War Powers Resolution of 1973, a statute designed to ensure that the collective judgment of both Congress and the President would be applied to the involvement of United States troops in foreign conflicts. The resolution required the President, in the absence of a declaration of war, to consult with Congress "in every possible instance" before introducing forces and to report to Congress within 48 hours after the forces have actually been deployed. Most important, the resolution allows Congress to veto the involvement once it begins, and requires the President, in most cases, to end the involvement within 60 days unless Congress specifically authorizes the military operation to continue. In its final section, by declaring that the resolution is not intended to alter the constitutional authority of either Congress or the President, the resolution asserts that congressional involvement in decisions to use armed force is in accord with the intent and spirit of the Constitution.
199110_1-RC_4_26
[ "is not in accord with the explicit roles of the President and Congress as defined in the Constitution", "interferes with the role of the President as commander in chief of the armed forces", "signals Congress's commitment to fulfill a role intended for it by the Constitution", "fails explicitly to address the use of armed forces in the absence of a declaration of war", "confirms the role historically assumed by Presidents" ]
2
It can be inferred from the passage that the author believes that the War Powers Resolution of 1973
The Constitution of the United States does not explicitly define the extent of the President's authority to involve United States troops in conflicts with other nations in the absence of a declaration of war. Instead, the question of the President's authority in this matter falls in the hazy area of concurrent power, where authority is not expressly allocated to either the President or the Congress. The Constitution gives Congress the basic power to declare war, as well as the authority to raise and support armies and a navy, enact regulations for the control of the military, and provide for the common defense. The President, on the other hand, in addition to being obligated to execute the laws of the land, including commitments negotiated by defense treaties, is named commander in chief of the armed forces and is empowered to appoint envoys and make treaties with the consent of the Senate. Although this allocation of powers does not expressly address the use of armed forces short of a declared war, the spirit of the Constitution at least requires that Congress should be involved in the decision to deploy troops, and in passing the War Powers Resolution of 1973, Congress has at last reclaimed a role in such decisions. Historically, United States Presidents have not waited for the approval of Congress before involving United States troops in conflicts in which a state of war was not declared. One scholar has identified 199 military engagements that occurred without the consent of Congress, ranging from Jefferson's conflict with the Barbary pirates to Nixon's invasion of Cambodia during the Vietnam conflict, which President Nixon argued was justified because his role as commander in chief allowed him almost unlimited discretion over the deployment of troops. However, the Vietnam conflict, never a declared war, represented a turning point in Congress's tolerance of presidential discretion in the deployment of troops in undeclared wars. Galvanized by the human and monetary cost of those hostilities and showing a new determination to fulfill its proper role, Congress enacted the War Powers Resolution of 1973, a statute designed to ensure that the collective judgment of both Congress and the President would be applied to the involvement of United States troops in foreign conflicts. The resolution required the President, in the absence of a declaration of war, to consult with Congress "in every possible instance" before introducing forces and to report to Congress within 48 hours after the forces have actually been deployed. Most important, the resolution allows Congress to veto the involvement once it begins, and requires the President, in most cases, to end the involvement within 60 days unless Congress specifically authorizes the military operation to continue. In its final section, by declaring that the resolution is not intended to alter the constitutional authority of either Congress or the President, the resolution asserts that congressional involvement in decisions to use armed force is in accord with the intent and spirit of the Constitution.
199110_1-RC_4_27
[ "Because it was undertaken without the consent of Congress, it violated the intent and spirit of the Constitution.", "Because it galvanized support for the War Powers Resolution, it contributed indirectly to the expansion of presidential authority.", "Because it was necessitated by a defense treaty, it required the consent of Congress.", "It served as a precedent for a new interpretation of the constitutional limits on the President's authority to deploy troops.", "It differed from the actions of past Presidents in deploying United States troops in conflicts without a declaration of war by Congress." ]
0
It can be inferred from the passage that the author would be most likely to agree with which one of the following statements regarding the invasion of Cambodia?
The Constitution of the United States does not explicitly define the extent of the President's authority to involve United States troops in conflicts with other nations in the absence of a declaration of war. Instead, the question of the President's authority in this matter falls in the hazy area of concurrent power, where authority is not expressly allocated to either the President or the Congress. The Constitution gives Congress the basic power to declare war, as well as the authority to raise and support armies and a navy, enact regulations for the control of the military, and provide for the common defense. The President, on the other hand, in addition to being obligated to execute the laws of the land, including commitments negotiated by defense treaties, is named commander in chief of the armed forces and is empowered to appoint envoys and make treaties with the consent of the Senate. Although this allocation of powers does not expressly address the use of armed forces short of a declared war, the spirit of the Constitution at least requires that Congress should be involved in the decision to deploy troops, and in passing the War Powers Resolution of 1973, Congress has at last reclaimed a role in such decisions. Historically, United States Presidents have not waited for the approval of Congress before involving United States troops in conflicts in which a state of war was not declared. One scholar has identified 199 military engagements that occurred without the consent of Congress, ranging from Jefferson's conflict with the Barbary pirates to Nixon's invasion of Cambodia during the Vietnam conflict, which President Nixon argued was justified because his role as commander in chief allowed him almost unlimited discretion over the deployment of troops. However, the Vietnam conflict, never a declared war, represented a turning point in Congress's tolerance of presidential discretion in the deployment of troops in undeclared wars. Galvanized by the human and monetary cost of those hostilities and showing a new determination to fulfill its proper role, Congress enacted the War Powers Resolution of 1973, a statute designed to ensure that the collective judgment of both Congress and the President would be applied to the involvement of United States troops in foreign conflicts. The resolution required the President, in the absence of a declaration of war, to consult with Congress "in every possible instance" before introducing forces and to report to Congress within 48 hours after the forces have actually been deployed. Most important, the resolution allows Congress to veto the involvement once it begins, and requires the President, in most cases, to end the involvement within 60 days unless Congress specifically authorizes the military operation to continue. In its final section, by declaring that the resolution is not intended to alter the constitutional authority of either Congress or the President, the resolution asserts that congressional involvement in decisions to use armed force is in accord with the intent and spirit of the Constitution.
199110_1-RC_4_28
[ "request that Congress consider a formal declaration of war", "consult with the leaders of both houses of Congress before deploying armed forces", "desist from deploying any troops unless expressly approved by Congress", "report to Congress within 48 hours of the deployment of armed forces", "withdraw any armed forces deployed in such a conflict within 60 days unless war is declared" ]
3
According to the provisions of the War Powers Resolution of 1973 as described in the passage, if the President perceives that an international conflict warrants the immediate involvement of United States armed forces, the President is compelled in every instance to
Until recently many astronomers believed that asteroids travel about the solar system unaccompanied by satellites. These astronomers assumed this because they considered asteroid-satellite systems inherently unstable. Theoreticians could have told them otherwise: even minuscule bodies in the solar system can theoretically have satellites, as long as everything is in proper scale. If a bowling ball were orbiting about the Sun in the asteroid belt, it could have a pebble orbiting it as far away as a few hundred radii (or about 50 meters) without losing the pebble to the Sun's gravitational pull. Observations now suggest that asteroid satellites may exist not only in theory but also in reality. Several astronomers have noticed, while watching asteroids pass briefly in front of stars, that something besides the known asteroid sometimes blocks out the star as well. Is that something a satellite? The most convincing such report concerns the asteroid Herculina, which was due to pass in front of a star in 1978. Astronomers waiting for the predicted event found not just one occultation, or eclipse, of the star, but two distinct drops in brightness. One was the predicted occultation, exactly on time. The other, lasting about five seconds, preceded the predicted event by about two minutes. The presence of a secondary body near Herculina thus seemed strongly indicated. To cause the secondary occultation, an unseen satellite would have to be about 45 kilometers in diameter, a quarter of the size of Herculina, and at a distance of 990 kilometers from the asteroid at the time. These values are within theoretical bounds, and such an asteroid-satellite pair could be stable. With the Herculina event, apparent secondary occultations became "respectable"—and more commonly reported. In fact, so common did reports of secondary events become that they are now simply too numerous for all of them to be accurate. Even if every asteroid has as many satellites as can be fitted around it without an undue number of collisions, only one in every hundred primary occultations would be accompanied by a secondary event (one in every thousand if asteroidal satellite systems resembled those of the planets). Yet even astronomers who find the case for asteroid satellites unconvincing at present say they would change their minds if a photoelectric record were made of a well-behaved secondary event. By "well-behaved" they mean that during occultation the observed brightness must drop sharply as the star winks out and must rise sharply as it reappears from behind the obstructing object, but the brightness during the secondary occultation must drop to that of the asteroid, no higher and no lower. This would make it extremely unlikely that an airplane or a glitch in the instruments was masquerading as an occulting body.
199112_3-RC_1_1
[ "The observation of Herculina represented the crucial event that astronomical observers and theoreticians had been waiting for to establish a convincing case for the stability of asteroid satellite systems.", "Although astronomers long believed that observation supports the existence of stable asteroid-satellite systems, numerous recent reports have increased skepticism on this issue in astronomy.", "Theoreticians' views on the stability of asteroid satellite systems may be revised in the light of reports like those about Herculina.", "Astronomers continue to consider it respectable to doubt the stability of asteroid-satellite systems, but new theoretical developments may change their views.", "The Herculina event suggests that theoreticians' views about asteroid-satellite systems may be correct, and astronomers agree about the kind of evidence needed to clearly resolve the issue." ]
4
Which one of the following best expresses the main idea of the passage?
Until recently many astronomers believed that asteroids travel about the solar system unaccompanied by satellites. These astronomers assumed this because they considered asteroid-satellite systems inherently unstable. Theoreticians could have told them otherwise: even minuscule bodies in the solar system can theoretically have satellites, as long as everything is in proper scale. If a bowling ball were orbiting about the Sun in the asteroid belt, it could have a pebble orbiting it as far away as a few hundred radii (or about 50 meters) without losing the pebble to the Sun's gravitational pull. Observations now suggest that asteroid satellites may exist not only in theory but also in reality. Several astronomers have noticed, while watching asteroids pass briefly in front of stars, that something besides the known asteroid sometimes blocks out the star as well. Is that something a satellite? The most convincing such report concerns the asteroid Herculina, which was due to pass in front of a star in 1978. Astronomers waiting for the predicted event found not just one occultation, or eclipse, of the star, but two distinct drops in brightness. One was the predicted occultation, exactly on time. The other, lasting about five seconds, preceded the predicted event by about two minutes. The presence of a secondary body near Herculina thus seemed strongly indicated. To cause the secondary occultation, an unseen satellite would have to be about 45 kilometers in diameter, a quarter of the size of Herculina, and at a distance of 990 kilometers from the asteroid at the time. These values are within theoretical bounds, and such an asteroid-satellite pair could be stable. With the Herculina event, apparent secondary occultations became "respectable"—and more commonly reported. In fact, so common did reports of secondary events become that they are now simply too numerous for all of them to be accurate. Even if every asteroid has as many satellites as can be fitted around it without an undue number of collisions, only one in every hundred primary occultations would be accompanied by a secondary event (one in every thousand if asteroidal satellite systems resembled those of the planets). Yet even astronomers who find the case for asteroid satellites unconvincing at present say they would change their minds if a photoelectric record were made of a well-behaved secondary event. By "well-behaved" they mean that during occultation the observed brightness must drop sharply as the star winks out and must rise sharply as it reappears from behind the obstructing object, but the brightness during the secondary occultation must drop to that of the asteroid, no higher and no lower. This would make it extremely unlikely that an airplane or a glitch in the instruments was masquerading as an occulting body.
199112_3-RC_1_2
[ "the diameter of a body directly observed near Herculina", "the distance between Herculina and the planet nearest to it", "the shortest possible time in which satellites of Herculina, if any, could complete a single orbit", "the occultation that occurred shortly before the predicted occultation by Herculina", "the precise extent to which observed brightness dropped during the occultation by Herculina" ]
3
Which one of the following is mentioned in the passage as providing evidence that Herculina has a satellite?
Until recently many astronomers believed that asteroids travel about the solar system unaccompanied by satellites. These astronomers assumed this because they considered asteroid-satellite systems inherently unstable. Theoreticians could have told them otherwise: even minuscule bodies in the solar system can theoretically have satellites, as long as everything is in proper scale. If a bowling ball were orbiting about the Sun in the asteroid belt, it could have a pebble orbiting it as far away as a few hundred radii (or about 50 meters) without losing the pebble to the Sun's gravitational pull. Observations now suggest that asteroid satellites may exist not only in theory but also in reality. Several astronomers have noticed, while watching asteroids pass briefly in front of stars, that something besides the known asteroid sometimes blocks out the star as well. Is that something a satellite? The most convincing such report concerns the asteroid Herculina, which was due to pass in front of a star in 1978. Astronomers waiting for the predicted event found not just one occultation, or eclipse, of the star, but two distinct drops in brightness. One was the predicted occultation, exactly on time. The other, lasting about five seconds, preceded the predicted event by about two minutes. The presence of a secondary body near Herculina thus seemed strongly indicated. To cause the secondary occultation, an unseen satellite would have to be about 45 kilometers in diameter, a quarter of the size of Herculina, and at a distance of 990 kilometers from the asteroid at the time. These values are within theoretical bounds, and such an asteroid-satellite pair could be stable. With the Herculina event, apparent secondary occultations became "respectable"—and more commonly reported. In fact, so common did reports of secondary events become that they are now simply too numerous for all of them to be accurate. Even if every asteroid has as many satellites as can be fitted around it without an undue number of collisions, only one in every hundred primary occultations would be accompanied by a secondary event (one in every thousand if asteroidal satellite systems resembled those of the planets). Yet even astronomers who find the case for asteroid satellites unconvincing at present say they would change their minds if a photoelectric record were made of a well-behaved secondary event. By "well-behaved" they mean that during occultation the observed brightness must drop sharply as the star winks out and must rise sharply as it reappears from behind the obstructing object, but the brightness during the secondary occultation must drop to that of the asteroid, no higher and no lower. This would make it extremely unlikely that an airplane or a glitch in the instruments was masquerading as an occulting body.
199112_3-RC_1_3
[ "open-mindedness combined with a concern for rigorous standards of proof", "contempt for and impatience with the position held by theoreticians", "bemusement at a chaotic mix of theory, inadequate or spurious data, and calls for scientific rigor", "hardheaded skepticism, implying rejection of all data not recorded automatically by state-of-the-art instruments", "admiration for the methodical process by which science progresses from initial hypothesis to incontrovertible proof" ]
0
According to the passage, the attitude of astronomers toward asteroid satellites since the Herculina event can best be described as
Until recently many astronomers believed that asteroids travel about the solar system unaccompanied by satellites. These astronomers assumed this because they considered asteroid-satellite systems inherently unstable. Theoreticians could have told them otherwise: even minuscule bodies in the solar system can theoretically have satellites, as long as everything is in proper scale. If a bowling ball were orbiting about the Sun in the asteroid belt, it could have a pebble orbiting it as far away as a few hundred radii (or about 50 meters) without losing the pebble to the Sun's gravitational pull. Observations now suggest that asteroid satellites may exist not only in theory but also in reality. Several astronomers have noticed, while watching asteroids pass briefly in front of stars, that something besides the known asteroid sometimes blocks out the star as well. Is that something a satellite? The most convincing such report concerns the asteroid Herculina, which was due to pass in front of a star in 1978. Astronomers waiting for the predicted event found not just one occultation, or eclipse, of the star, but two distinct drops in brightness. One was the predicted occultation, exactly on time. The other, lasting about five seconds, preceded the predicted event by about two minutes. The presence of a secondary body near Herculina thus seemed strongly indicated. To cause the secondary occultation, an unseen satellite would have to be about 45 kilometers in diameter, a quarter of the size of Herculina, and at a distance of 990 kilometers from the asteroid at the time. These values are within theoretical bounds, and such an asteroid-satellite pair could be stable. With the Herculina event, apparent secondary occultations became "respectable"—and more commonly reported. In fact, so common did reports of secondary events become that they are now simply too numerous for all of them to be accurate. Even if every asteroid has as many satellites as can be fitted around it without an undue number of collisions, only one in every hundred primary occultations would be accompanied by a secondary event (one in every thousand if asteroidal satellite systems resembled those of the planets). Yet even astronomers who find the case for asteroid satellites unconvincing at present say they would change their minds if a photoelectric record were made of a well-behaved secondary event. By "well-behaved" they mean that during occultation the observed brightness must drop sharply as the star winks out and must rise sharply as it reappears from behind the obstructing object, but the brightness during the secondary occultation must drop to that of the asteroid, no higher and no lower. This would make it extremely unlikely that an airplane or a glitch in the instruments was masquerading as an occulting body.
199112_3-RC_1_4
[ "Since no good theoretical model existed, all claims that reports of secondary occultations were common were disputed.", "Some of the reported observations of secondary occultations were actually observations of collisions of satellites with one another.", "If there were observations of phenomena exactly like the phenomena now labeled secondary occultations, astronomers were less likely then to have reported such observations.", "The prevailing standards concerning what to classify as a well-behaved secondary event were less stringent than they are now.", "Astronomers were eager to publish their observations of occultations of stars by satellites of asteroids." ]
2
The author implies that which one of the following was true prior to reports of the Herculina event?
Until recently many astronomers believed that asteroids travel about the solar system unaccompanied by satellites. These astronomers assumed this because they considered asteroid-satellite systems inherently unstable. Theoreticians could have told them otherwise: even minuscule bodies in the solar system can theoretically have satellites, as long as everything is in proper scale. If a bowling ball were orbiting about the Sun in the asteroid belt, it could have a pebble orbiting it as far away as a few hundred radii (or about 50 meters) without losing the pebble to the Sun's gravitational pull. Observations now suggest that asteroid satellites may exist not only in theory but also in reality. Several astronomers have noticed, while watching asteroids pass briefly in front of stars, that something besides the known asteroid sometimes blocks out the star as well. Is that something a satellite? The most convincing such report concerns the asteroid Herculina, which was due to pass in front of a star in 1978. Astronomers waiting for the predicted event found not just one occultation, or eclipse, of the star, but two distinct drops in brightness. One was the predicted occultation, exactly on time. The other, lasting about five seconds, preceded the predicted event by about two minutes. The presence of a secondary body near Herculina thus seemed strongly indicated. To cause the secondary occultation, an unseen satellite would have to be about 45 kilometers in diameter, a quarter of the size of Herculina, and at a distance of 990 kilometers from the asteroid at the time. These values are within theoretical bounds, and such an asteroid-satellite pair could be stable. With the Herculina event, apparent secondary occultations became "respectable"—and more commonly reported. In fact, so common did reports of secondary events become that they are now simply too numerous for all of them to be accurate. Even if every asteroid has as many satellites as can be fitted around it without an undue number of collisions, only one in every hundred primary occultations would be accompanied by a secondary event (one in every thousand if asteroidal satellite systems resembled those of the planets). Yet even astronomers who find the case for asteroid satellites unconvincing at present say they would change their minds if a photoelectric record were made of a well-behaved secondary event. By "well-behaved" they mean that during occultation the observed brightness must drop sharply as the star winks out and must rise sharply as it reappears from behind the obstructing object, but the brightness during the secondary occultation must drop to that of the asteroid, no higher and no lower. This would make it extremely unlikely that an airplane or a glitch in the instruments was masquerading as an occulting body.
199112_3-RC_1_5
[ "The percentage of reports of primary occultations that also included reports of secondary occultations increased tenfold compared to the time before the Herculina event.", "Primary occultations by asteroids were reported to have been accompanied by secondary occultations in about one out of every thousand cases.", "The absolute number of reports of secondary occultations increased tenfold compared to the time before the Herculina event.", "Primary occultations by asteroids were reported to have been accompanied by secondary occultations in more than one out of every hundred cases.", "In more than one out of every hundred cases, primary occultations were reported to have been accompanied by more than one secondary occultation." ]
3
The information presented in the passage implies which one of the following about the frequency of reports of secondary occultations after the Herculina event?
Until recently many astronomers believed that asteroids travel about the solar system unaccompanied by satellites. These astronomers assumed this because they considered asteroid-satellite systems inherently unstable. Theoreticians could have told them otherwise: even minuscule bodies in the solar system can theoretically have satellites, as long as everything is in proper scale. If a bowling ball were orbiting about the Sun in the asteroid belt, it could have a pebble orbiting it as far away as a few hundred radii (or about 50 meters) without losing the pebble to the Sun's gravitational pull. Observations now suggest that asteroid satellites may exist not only in theory but also in reality. Several astronomers have noticed, while watching asteroids pass briefly in front of stars, that something besides the known asteroid sometimes blocks out the star as well. Is that something a satellite? The most convincing such report concerns the asteroid Herculina, which was due to pass in front of a star in 1978. Astronomers waiting for the predicted event found not just one occultation, or eclipse, of the star, but two distinct drops in brightness. One was the predicted occultation, exactly on time. The other, lasting about five seconds, preceded the predicted event by about two minutes. The presence of a secondary body near Herculina thus seemed strongly indicated. To cause the secondary occultation, an unseen satellite would have to be about 45 kilometers in diameter, a quarter of the size of Herculina, and at a distance of 990 kilometers from the asteroid at the time. These values are within theoretical bounds, and such an asteroid-satellite pair could be stable. With the Herculina event, apparent secondary occultations became "respectable"—and more commonly reported. In fact, so common did reports of secondary events become that they are now simply too numerous for all of them to be accurate. Even if every asteroid has as many satellites as can be fitted around it without an undue number of collisions, only one in every hundred primary occultations would be accompanied by a secondary event (one in every thousand if asteroidal satellite systems resembled those of the planets). Yet even astronomers who find the case for asteroid satellites unconvincing at present say they would change their minds if a photoelectric record were made of a well-behaved secondary event. By "well-behaved" they mean that during occultation the observed brightness must drop sharply as the star winks out and must rise sharply as it reappears from behind the obstructing object, but the brightness during the secondary occultation must drop to that of the asteroid, no higher and no lower. This would make it extremely unlikely that an airplane or a glitch in the instruments was masquerading as an occulting body.
199112_3-RC_1_6
[ "cast doubt on existing reports of secondary occultations of stars", "describe experimental efforts by astronomers to separate theoretically believable observations of satellites of asteroids from spurious ones", "review the development of ideas among astronomers about whether or not satellites of asteroids exist", "bring a theoretician's perspective to bear on an incomplete discussion of satellites of asteroids", "illustrate the limits of reasonable speculation concerning the occultation of stars" ]
2
The primary purpose of the passage is to
Until recently many astronomers believed that asteroids travel about the solar system unaccompanied by satellites. These astronomers assumed this because they considered asteroid-satellite systems inherently unstable. Theoreticians could have told them otherwise: even minuscule bodies in the solar system can theoretically have satellites, as long as everything is in proper scale. If a bowling ball were orbiting about the Sun in the asteroid belt, it could have a pebble orbiting it as far away as a few hundred radii (or about 50 meters) without losing the pebble to the Sun's gravitational pull. Observations now suggest that asteroid satellites may exist not only in theory but also in reality. Several astronomers have noticed, while watching asteroids pass briefly in front of stars, that something besides the known asteroid sometimes blocks out the star as well. Is that something a satellite? The most convincing such report concerns the asteroid Herculina, which was due to pass in front of a star in 1978. Astronomers waiting for the predicted event found not just one occultation, or eclipse, of the star, but two distinct drops in brightness. One was the predicted occultation, exactly on time. The other, lasting about five seconds, preceded the predicted event by about two minutes. The presence of a secondary body near Herculina thus seemed strongly indicated. To cause the secondary occultation, an unseen satellite would have to be about 45 kilometers in diameter, a quarter of the size of Herculina, and at a distance of 990 kilometers from the asteroid at the time. These values are within theoretical bounds, and such an asteroid-satellite pair could be stable. With the Herculina event, apparent secondary occultations became "respectable"—and more commonly reported. In fact, so common did reports of secondary events become that they are now simply too numerous for all of them to be accurate. Even if every asteroid has as many satellites as can be fitted around it without an undue number of collisions, only one in every hundred primary occultations would be accompanied by a secondary event (one in every thousand if asteroidal satellite systems resembled those of the planets). Yet even astronomers who find the case for asteroid satellites unconvincing at present say they would change their minds if a photoelectric record were made of a well-behaved secondary event. By "well-behaved" they mean that during occultation the observed brightness must drop sharply as the star winks out and must rise sharply as it reappears from behind the obstructing object, but the brightness during the secondary occultation must drop to that of the asteroid, no higher and no lower. This would make it extremely unlikely that an airplane or a glitch in the instruments was masquerading as an occulting body.
199112_3-RC_1_7
[ "a review of pre-1978 reports of secondary occultations", "an improved theoretical model of stable satellite systems", "a photoelectric record of a well-behaved secondary occultation", "a more stringent definition of what constitutes a well-behaved secondary occultation", "a powerful telescope that would permit a comparison of ground-based observations with those made from airplanes" ]
2
The passage suggests that which one of the following would most help to resolve the question of whether asteroids have satellites?
Historians attempting to explain how scientific work was done in the laboratory of the seventeenth-century chemist and natural philosopher Robert Boyle must address a fundamental discrepancy between how such experimentation was actually performed and the seventeenth-century rhetoric describing it. Leaders of the new Royal Society of London in the 1660s insisted that authentic science depended upon actual experiments performed, observed, and recorded by the scientists themselves. Rejecting the traditional contempt for manual operations, these scientists, all members of the English upper class, were not to think themselves demeaned by the mucking about with chemicals, furnaces, and pumps; rather, the willingness of each of them to become, as Boyle himself said, a mere "drudge" and "under-builder" in the search for God's truth in nature was taken as a sign of their nobility and Christian piety. This rhetoric has been so effective that one modern historian assures us that Boyle himself actually performed all of the thousand or more experiments he reported. In fact, due to poor eyesight, fragile health, and frequent absences from his laboratory, Boyle turned over much of the labor of obtaining and recording experimental results to paid technicians, although published accounts of the experiments rarely, if ever, acknowledged the technicians' contributions. Nor was Boyle unique in relying on technicians without publicly crediting their work. Why were the contributions of these technicians not recognized by their employers? One reason is the historical tendency, which has persisted into the twentieth century, to view scientific discovery as resulting from momentary flashes of individual insight rather than from extended periods of cooperative work by individuals with varying levels of knowledge and skill. Moreover, despite the clamor of seventeenth-century scientific rhetoric commending a hands-on approach, science was still overwhelmingly an activity of the English upper class, and the traditional contempt that genteel society maintained for manual labor was pervasive and deeply rooted. Finally, all of Boyle's technicians were "servants," which in seventeenth-century usage meant anyone who worked for pay. To seventeenth-century sensibilities, the wage relationship was charged with political significance. Servants, meaning wage earners, were excluded from the franchise because they were perceived as ultimately dependent on their wages and thus controlled by the will of their employers. Technicians remained invisible in the political economy of science for the same reasons that underlay servants' general political exclusion. The technicians' contributions, their observations and judgment, if acknowledged, would not have been perceived in the larger scientific community as objective because the technicians were dependent on the wages paid to them by their employers. Servants might have made the apparatus work, but their contributions to the making of scientific knowledge were largely—and conveniently—ignored by their employers.
199112_3-RC_2_8
[ "Seventeenth-century scientific experimentation would have been impossible without the work of paid laboratory technicians.", "Seventeenth-century social conventions prohibited upper-class laboratory workers from taking public credit for their work.", "Seventeenth-century views of scientific discovery combined with social class distinctions to ensure that laboratory technicians' scientific work was never publicly acknowledged.", "Seventeenth-century scientists were far more dependent on their laboratory technicians than are scientists today, yet far less willing to acknowledge technicians' scientific contributions.", "Seventeenth-century scientists liberated themselves from the stigma attached to manual labor by relying heavily on the work of laboratory technicians." ]
2
Which one of the following best summarizes the main idea of the passage?
Historians attempting to explain how scientific work was done in the laboratory of the seventeenth-century chemist and natural philosopher Robert Boyle must address a fundamental discrepancy between how such experimentation was actually performed and the seventeenth-century rhetoric describing it. Leaders of the new Royal Society of London in the 1660s insisted that authentic science depended upon actual experiments performed, observed, and recorded by the scientists themselves. Rejecting the traditional contempt for manual operations, these scientists, all members of the English upper class, were not to think themselves demeaned by the mucking about with chemicals, furnaces, and pumps; rather, the willingness of each of them to become, as Boyle himself said, a mere "drudge" and "under-builder" in the search for God's truth in nature was taken as a sign of their nobility and Christian piety. This rhetoric has been so effective that one modern historian assures us that Boyle himself actually performed all of the thousand or more experiments he reported. In fact, due to poor eyesight, fragile health, and frequent absences from his laboratory, Boyle turned over much of the labor of obtaining and recording experimental results to paid technicians, although published accounts of the experiments rarely, if ever, acknowledged the technicians' contributions. Nor was Boyle unique in relying on technicians without publicly crediting their work. Why were the contributions of these technicians not recognized by their employers? One reason is the historical tendency, which has persisted into the twentieth century, to view scientific discovery as resulting from momentary flashes of individual insight rather than from extended periods of cooperative work by individuals with varying levels of knowledge and skill. Moreover, despite the clamor of seventeenth-century scientific rhetoric commending a hands-on approach, science was still overwhelmingly an activity of the English upper class, and the traditional contempt that genteel society maintained for manual labor was pervasive and deeply rooted. Finally, all of Boyle's technicians were "servants," which in seventeenth-century usage meant anyone who worked for pay. To seventeenth-century sensibilities, the wage relationship was charged with political significance. Servants, meaning wage earners, were excluded from the franchise because they were perceived as ultimately dependent on their wages and thus controlled by the will of their employers. Technicians remained invisible in the political economy of science for the same reasons that underlay servants' general political exclusion. The technicians' contributions, their observations and judgment, if acknowledged, would not have been perceived in the larger scientific community as objective because the technicians were dependent on the wages paid to them by their employers. Servants might have made the apparatus work, but their contributions to the making of scientific knowledge were largely—and conveniently—ignored by their employers.
199112_3-RC_2_9
[ "Unlike many seventeenth-century scientists, Boyle recognized that most scientific discoveries resulted from the cooperative efforts of many individuals.", "Unlike many seventeenth-century scientists, Boyle maintained a deeply rooted and pervasive contempt for manual labor.", "Unlike many seventeenth-century scientists, Boyle was a member of the Royal Society of London.", "Boyle generously acknowledged the contribution of the technicians who worked in his laboratory.", "Boyle himself performed the actual labor of obtaining and recording experimental results." ]
4
It can be inferred from the passage that the "seventeenth-century rhetoric" mentioned in line 6 would have more accurately described the experimentation performed in Boyle's laboratory if which one of the following were true?
199112_3-RC_2_10
[ "their interests were adequately represented by their employers", "their education was inadequate to make informed political decisions", "the independence of their political judgment would be compromised by their economic dependence on their employers", "their participation in the elections would be a polarizing influence on the political process", "the manual labor that they performed did not constitute a contribution to the society that was sufficient to justify their participation in elections" ]
2
According to the author, servants in seventeenth-century England were excluded from the franchise because of the belief that
199112_3-RC_2_11
[ "belief that the primary purpose of scientific discovery was to reveal the divine truth that could be found in nature", "view that scientific knowledge results largely from the insights of a few brilliant individuals rather than from the cooperative efforts of many workers", "seventeenth-century belief that servants should be denied the right to vote because they were dependent on wages paid to them by their employers", "traditional disdain for manual labor that was maintained by most members of the English upper class during the seventeenth century", "idea that the search for scientific truth was a sign of piety" ]
3
According to the author, the Royal Society of London insisted that scientists abandon the
199112_3-RC_2_12
[ "Individual insights rather than cooperative endeavors produce most scientific discoveries.", "How science is practiced is significantly influenced by the political beliefs and assumptions of scientists.", "Scientific research undertaken for pay cannot be considered objective.", "Scientific discovery can reveal divine truth in nature.", "Scientific discovery often relies on the unacknowledged contributions of laboratory technicians." ]
0
The author implies that which one of the following beliefs was held in both the seventeenth and the twentieth centuries?
199112_3-RC_2_13
[ "Several alternative answers are presented to a question posed in the previous paragraph, and the last is adopted as the most plausible.", "A question regarding the cause of the phenomenon described in the previous paragraph is posed, two possible explanations are rejected, and evidence is provided in support of a third.", "A question regarding the phenomenon described in the previous paragraph is posed, and several incompatible views are presented.", "A question regarding the cause of the phenomenon described in the previous paragraph is posed, and several contributing factors are then discussed.", "Several possible answers to a question are evaluated in light of recent discoveries cited earlier in the passage." ]
3
Which one of the following best describes the organization of the last paragraph?
199112_3-RC_2_14
[ "place the failure of seventeenth-century scientists to acknowledge the contributions of their technicians in the larger context of relations between workers and their employers in seventeenth-century England", "provide evidence in support of the author's more general thesis regarding the relationship of scientific discovery to the economic conditions of societies in which it takes place", "provide evidence in support of the author's explanation of why scientists in seventeenth-century England were reluctant to rely on their technicians for the performance of anything but the most menial tasks", "illustrate political and economic changes in the society of seventeenth-century England that had a profound impact on how scientific research was conducted", "undermine the view that scientific discovery results from individual enterprise rather than from the collective endeavor of many workers" ]
0
The author's discussion of the political significance of the "wage relationship" (line 48) serves to
199112_3-RC_2_15
[ "the claim that scientific discovery results largely from the insights of brilliant individuals working alone", "ridicule of scientists who were members of the English upper class and who were thought to demean themselves by engaging in the manual labor required by their experiments", "criticism of scientists who publicly acknowledged the contributions of their technicians", "assertions by members of the Royal Society of London that scientists themselves should be responsible for obtaining and recording experimental results", "the claim by Boyle and his colleagues that the primary reason for scientific research is to discover evidence of divine truth in the natural world" ]
3
It can be inferred from the passage that "the clamor of seventeenth-century scientific rhetoric" (lines 39–40) refers to
One type of violation of the antitrust laws is the abuse of monopoly power. Monopoly power is the ability of a firm to raise its prices above the competitive level—that is, above the level that would exist naturally if several firms had to compete—without driving away so many customers as to make the price increase unprofitable. In order to show that a firm has abused monopoly power, and thereby violated the antitrust laws, two essential facts must be established. First, a firm must be shown to possess monopoly power, and second, that power must have been used to exclude competition in the monopolized market or related markets. The price a firm may charge for its product is constrained by the availability of close substitutes for the product. If a firm attempts to charge a higher price—a supracompetitive price—customers will turn to other firms able to supply substitute products at competitive prices. If a firm provides a large percentage of the products actually or potentially available, however, customers may find it difficult to buy from alternative suppliers. Consequently, a firm with a large share of the relevant market of substitutable products may be able to raise its price without losing many customers. For this reason courts often use market share as a rough indicator of monopoly power. Supracompetitive prices are associated with a loss of consumers' welfare because such prices force some consumers to buy a less attractive mix of products than they would ordinarily buy. Supracompetitive prices, however, do not themselves constitute an abuse of monopoly power. Antitrust laws do not attempt to counter the mere existence of monopoly power, or even the use of monopoly power to extract extraordinarily high profits. For example, a firm enjoying economies of scale—that is, low unit production costs due to high volume—does not violate the antitrust laws when it obtains a large market share by charging prices that are profitable but so low that its smaller rivals cannot survive. If the antitrust laws posed disincentives to the existence and growth of such firms, the laws could impair consumers' welfare. Even if the firm, upon acquiring monopoly power, chose to raise prices in order to increase profits, it would not be in violation of the antitrust laws. The antitrust prohibitions focus instead on abuses of monopoly power that exclude competition in the monopolized market or involve leverage—the use of power in one market to reduce competition in another. One such forbidden practice is a tying arrangement, in which a monopolist conditions the sale of a product in one market on the buyer's purchase of another product in a different market. For example, a firm enjoying a monopoly in the communications systems market might not sell its products to a customer unless that customer also buys its computer systems, which are competing with other firms' computer systems. The focus on the abuse of monopoly power, rather than on monopoly itself, follows from the primary purpose of the antitrust laws: to promote consumers' welfare through assurance of the quality and quantity of products available to consumers.
199112_3-RC_3_16
[ "Monopoly power is assessed in terms of market share, whereas abuse of monopoly power is assessed in terms of market control.", "Monopoly power is easy to demonstrate, whereas abuse of monopoly power is difficult to demonstrate.", "Monopoly power involves only one market, whereas abuse of monopoly power involves at least two or more related markets.", "Monopoly power is the ability to charge supracompetitive prices, whereas abuse of monopoly power is the use of that ability.", "Monopoly power does not necessarily hurt consumer welfare, whereas abuse of monopoly power does." ]
4
Which one of the following distinctions between monopoly power and the abuse of monopoly power would the author say underlies the antitrust laws discussed in the passage?
199112_3-RC_3_17
[ "No, because leverage involves a nonmonopolized market.", "No, unless the leverage involves a tying arrangement.", "Yes, because leverage is a characteristic of monopoly power.", "Yes, unless the firm using leverage is charging competitive prices.", "Yes, because leverage is used to eliminate competition in a related market." ]
4
Would the use of leverage meet the criteria for abuse of monopoly power outlined in the first paragraph?
199112_3-RC_3_18
[ "to distinguish between supracompetitive prices and supracompetitive profits", "to describe the positive uses of monopoly power", "to introduce the concept of economies of scale", "to distinguish what is not covered by the antitrust laws under discussion from what is covered", "to remind the reader of the issue of consumers' welfare" ]
3
What is the main purpose of the third paragraph (lines 28–47)?
199112_3-RC_3_19
[ "Competition is essential to consumers' welfare.", "There are acceptable and unacceptable ways for firms to reduce their competition.", "The preservation of competition is the principal aim of the antitrust laws.", "Supracompetitive prices lead to reductions in competition.", "Competition is necessary to ensure high-quality products at low prices." ]
1
Given only the information in the passage, with which one of the following statements about competition would those responsible for the antitrust laws most likely agree?
199112_3-RC_3_20
[ "By limiting consumers' choices, abuse of monopoly power reduces consumers' welfare, but monopoly alone can sometimes actually operate in the consumers' best interests.", "What is needed now is a set of related laws to deal with the negative impacts that monopoly itself has on consumers' ability to purchase products at reasonable cost.", "Over time, the antitrust laws have been very effective in ensuring competition and, consequently, consumers' welfare in the volatile communications and computer systems industries.", "By controlling supracompetitive prices and corresponding supracompetitive profits, the antitrust laws have, indeed, gone a long way toward meeting that objective.", "As noted above, the necessary restraints on monopoly itself have been left to the market, where competitive prices and economies of scale are rewarded through increased market share." ]
0
Which one of the following sentences would best complete the last paragraph of the passage?
Amsden has divided Navajo weaving into four distinct styles. He argues that three of them can be identified by the type of design used to form horizontal bands: colored stripes, zigzags, or diamonds. The fourth, or bordered, style he identifies by a distinct border surrounding centrally placed, dominating figures. Amsden believes that the diamond style appeared after 1869 when, under Anglo influence and encouragement, the blanket became a rug with larger designs and bolder lines. The bordered style appeared about 1890, and, Amsden argues, it reflects the greatest number of Anglo influences on the newly emerging rug business. The Anglo desire that anything with graphic designs have a top, bottom, and border is a cultural preference that the Navajo abhorred, as evidenced, he suggests, by the fact that in early bordered specimens strips of color unexpectedly break through the enclosing pattern. Amsden argues that the bordered rug represents a radical break with previous styles. He asserts that the border changed the artistic problem facing weavers: a blank area suggests the use of isolated figures, while traditional, banded Navajo designs were continuous and did not use isolated figures. The old patterns alternated horizontal decorative zones in a regular order. Amsden's view raises several questions. First, what is involved in altering artistic styles? Some studies suggest that artisans' motor habits and thought processes must be revised when a style changes precipitously. In the evolution of Navajo weaving, however, no radical revisions in the way articles are produced need be assumed. After all, all weaving subordinates design to the physical limitations created by the process of weaving, which includes creating an edge or border. The habits required to make decorative borders are, therefore, latent and easily brought to the surface. Second, is the relationship between the banded and bordered styles as simple as Amsden suggests? He assumes that a break in style is a break in psychology. But if style results from constant quests for invention, such stylistic breaks are inevitable. When a style has exhausted the possibilities inherent in its principles, artists cast about for new, but not necessarily alien, principles. Navajo weaving may have reached this turning point prior to 1890. Third, is there really a significant stylistic gap? Two other styles lie between the banded styles and the bordered style. They suggest that disintegration of the bands may have altered visual and motor habits and prepared the way for a border filled with separate units. In the Chief White Antelope blanket, dated prior to 1865, ten years before the first Anglo trading post on the Navajo reservation, whole and partial diamonds interrupt the flowing design and become separate forms. Parts of diamonds arranged vertically at each side may be seen to anticipate the border.
199112_3-RC_4_21
[ "the Navajo rejected the stylistic influences of Anglo culture", "Navajo weaving cannot be classified by Amsden's categories", "the Navajo changed their style of weaving because they sought the challenge of new artistic problems", "original motor habits and thought processes limit the extent to which a style can be revised", "the causal factors leading to the emergence of the bordered style are not as clear-cut as Amsden suggests" ]
4
The author's central thesis is that
199112_3-RC_4_22
[ "a sign of resistance to a change in style", "an echo of the diamond style", "a feature derived from Anglo culture", "an attempt to disintegrate the rigid form of the banded style", "a means of differentiating the top of the weaving from the bottom" ]
0
It can be inferred from the passage that Amsden views the use of "strips of color" (line 18) in the early bordered style as
199112_3-RC_4_23
[ "The appearance of the first trading post on the Navajo reservation coincided with the appearance of the diamond style.", "Traces of thought processes and motor habits of one culture can generally be found in the art of another culture occupying the same period and region.", "The bordered style may have developed gradually from the banded style as a result of Navajo experiments with design.", "The influence of Anglo culture was not the only non-Native American influence on Navajo weaving.", "Horizontal and vertical rows of diamond forms were transformed by the Navajos into solid lines to create the bordered style." ]
2
The author's view of Navajo weaving suggests which one of the following?
199112_3-RC_4_24
[ "repetition of forms", "overall patterns", "horizontal bands", "isolated figures", "use of color" ]
3
According to the passage, Navajo weavings made prior to 1890 typically were characterized by all of the following EXCEPT
199112_3-RC_4_25
[ "The styles of Navajo weaving changed in response to changes in Navajo motor habits and thought processes.", "The zigzag style was the result of stylistic influences from Anglo culture.", "Navajo weaving used isolated figures in the beginning, but combined naturalistic and abstract designs in later styles.", "Navajo weaving changed gradually from a style in which the entire surface was covered by horizontal bands to one in which central figures dominated the surface.", "The styles of Navajo weaving always contained some type of isolated figure." ]
3
The author would most probably agree with which one of the following conclusions about the stylistic development of Navajo weaving?
Amsden has divided Navajo weaving into four distinct styles. He argues that three of them can be identified by the type of design used to form horizontal bands: colored stripes, zigzags, or diamonds. The fourth, or bordered, style he identifies by a distinct border surrounding centrally placed, dominating figures. Amsden believes that the diamond style appeared after 1869 when, under Anglo influence and encouragement, the blanket became a rug with larger designs and bolder lines. The bordered style appeared about 1890, and, Amsden argues, it reflects the greatest number of Anglo influences on the newly emerging rug business. The Anglo desire that anything with graphic designs have a top, bottom, and border is a cultural preference that the Navajo abhorred, as evidenced, he suggests, by the fact that in early bordered specimens strips of color unexpectedly break through the enclosing pattern. Amsden argues that the bordered rug represents a radical break with previous styles. He asserts that the border changed the artistic problem facing weavers: a blank area suggests the use of isolated figures, while traditional, banded Navajo designs were continuous and did not use isolated figures. The old patterns alternated horizontal decorative zones in a regular order. Amsden's view raises several questions. First, what is involved in altering artistic styles? Some studies suggest that artisans' motor habits and thought processes must be revised when a style changes precipitously. In the evolution of Navajo weaving, however, no radical revisions in the way articles are produced need be assumed. After all, all weaving subordinates design to the physical limitations created by the process of weaving, which includes creating an edge or border. The habits required to make decorative borders are, therefore, latent and easily brought to the surface. Second, is the relationship between the banded and bordered styles as simple as Amsden suggests? He assumes that a break in style is a break in psychology. But if style results from constant quests for invention, such stylistic breaks are inevitable. When a style has exhausted the possibilities inherent in its principles, artists cast about for new, but not necessarily alien, principles. Navajo weaving may have reached this turning point prior to 1890. Third, is there really a significant stylistic gap? Two other styles lie between the banded styles and the bordered style. They suggest that disintegration of the bands may have altered visual and motor habits and prepared the way for a border filled with separate units. In the Chief White Antelope blanket, dated prior to 1865, ten years before the first Anglo trading post on the Navajo reservation, whole and partial diamonds interrupt the flowing design and become separate forms. Parts of diamonds arranged vertically at each side may be seen to anticipate the border.
199112_3-RC_4_26
[ "conceived as a response to imagined correspondences between Anglo and Navajo art", "biased by Amsden's feelings about Anglo culture", "a result of Amsden's failing to take into account certain aspects of Navajo weaving", "based on a limited number of specimens of the styles of Navajo weaving", "based on a confusion between the stylistic features of the zigzag and diamond styles" ]
2
The author suggests that Amsden's claim that borders in Navajo weaving were inspired by Anglo culture could be
Amsden has divided Navajo weaving into four distinct styles. He argues that three of them can be identified by the type of design used to form horizontal bands: colored stripes, zigzags, or diamonds. The fourth, or bordered, style he identifies by a distinct border surrounding centrally placed, dominating figures. Amsden believes that the diamond style appeared after 1869 when, under Anglo influence and encouragement, the blanket became a rug with larger designs and bolder lines. The bordered style appeared about 1890, and, Amsden argues, it reflects the greatest number of Anglo influences on the newly emerging rug business. The Anglo desire that anything with graphic designs have a top, bottom, and border is a cultural preference that the Navajo abhorred, as evidenced, he suggests, by the fact that in early bordered specimens strips of color unexpectedly break through the enclosing pattern. Amsden argues that the bordered rug represents a radical break with previous styles. He asserts that the border changed the artistic problem facing weavers: a blank area suggests the use of isolated figures, while traditional, banded Navajo designs were continuous and did not use isolated figures. The old patterns alternated horizontal decorative zones in a regular order. Amsden's view raises several questions. First, what is involved in altering artistic styles? Some studies suggest that artisans' motor habits and thought processes must be revised when a style changes precipitously. In the evolution of Navajo weaving, however, no radical revisions in the way articles are produced need be assumed. After all, all weaving subordinates design to the physical limitations created by the process of weaving, which includes creating an edge or border. The habits required to make decorative borders are, therefore, latent and easily brought to the surface. Second, is the relationship between the banded and bordered styles as simple as Amsden suggests? He assumes that a break in style is a break in psychology. But if style results from constant quests for invention, such stylistic breaks are inevitable. When a style has exhausted the possibilities inherent in its principles, artists cast about for new, but not necessarily alien, principles. Navajo weaving may have reached this turning point prior to 1890. Third, is there really a significant stylistic gap? Two other styles lie between the banded styles and the bordered style. They suggest that disintegration of the bands may have altered visual and motor habits and prepared the way for a border filled with separate units. In the Chief White Antelope blanket, dated prior to 1865, ten years before the first Anglo trading post on the Navajo reservation, whole and partial diamonds interrupt the flowing design and become separate forms. Parts of diamonds arranged vertically at each side may be seen to anticipate the border.
199112_3-RC_4_27
[ "establish the direct influence of Anglo culture on the bordered style", "cast doubts on the claim that the bordered style arose primarily from Anglo influence", "cite an example of a blanket with a central design and no border", "suggest that the Anglo influence produced significant changes in the two earliest styles of Navajo weaving", "illustrate how the Navajo had exhausted the stylistic possibilities of the diamond style" ]
1
The author most probably mentions the Chief White Antelope blanket in order to
Amsden has divided Navajo weaving into four distinct styles. He argues that three of them can be identified by the type of design used to form horizontal bands: colored stripes, zigzags, or diamonds. The fourth, or bordered, style he identifies by a distinct border surrounding centrally placed, dominating figures. Amsden believes that the diamond style appeared after 1869 when, under Anglo influence and encouragement, the blanket became a rug with larger designs and bolder lines. The bordered style appeared about 1890, and, Amsden argues, it reflects the greatest number of Anglo influences on the newly emerging rug business. The Anglo desire that anything with graphic designs have a top, bottom, and border is a cultural preference that the Navajo abhorred, as evidenced, he suggests, by the fact that in early bordered specimens strips of color unexpectedly break through the enclosing pattern. Amsden argues that the bordered rug represents a radical break with previous styles. He asserts that the border changed the artistic problem facing weavers: a blank area suggests the use of isolated figures, while traditional, banded Navajo designs were continuous and did not use isolated figures. The old patterns alternated horizontal decorative zones in a regular order. Amsden's view raises several questions. First, what is involved in altering artistic styles? Some studies suggest that artisans' motor habits and thought processes must be revised when a style changes precipitously. In the evolution of Navajo weaving, however, no radical revisions in the way articles are produced need be assumed. After all, all weaving subordinates design to the physical limitations created by the process of weaving, which includes creating an edge or border. The habits required to make decorative borders are, therefore, latent and easily brought to the surface. Second, is the relationship between the banded and bordered styles as simple as Amsden suggests? He assumes that a break in style is a break in psychology. But if style results from constant quests for invention, such stylistic breaks are inevitable. When a style has exhausted the possibilities inherent in its principles, artists cast about for new, but not necessarily alien, principles. Navajo weaving may have reached this turning point prior to 1890. Third, is there really a significant stylistic gap? Two other styles lie between the banded styles and the bordered style. They suggest that disintegration of the bands may have altered visual and motor habits and prepared the way for a border filled with separate units. In the Chief White Antelope blanket, dated prior to 1865, ten years before the first Anglo trading post on the Navajo reservation, whole and partial diamonds interrupt the flowing design and become separate forms. Parts of diamonds arranged vertically at each side may be seen to anticipate the border.
199112_3-RC_4_28
[ "comparing and contrasting different styles", "questioning a view of how a style came into being", "proposing alternate methods of investigating the evolution of styles", "discussing the influence of one culture on another", "analyzing the effect of the interaction between two different cultures" ]
1
The passage is primarily concerned with
The extent of a nation's power over its coastal ecosystems and the natural resources in its coastal waters has been defined by two international law doctrines: freedom of the seas and adjacent state sovereignty. Until the mid-twentieth century, most nations favored application of broad open-seas freedoms and limited sovereign rights over coastal waters. A nation had the right to include within its territorial dominion only a very narrow band of coastal waters (generally extending three miles from the shoreline), within which it had the authority, but not the responsibility, to regulate all activities. But, because this area of territorial dominion was so limited, most nations did not establish rules for management or protection of their territorial waters. Regardless of whether or not nations enforced regulations in their territorial waters, large ocean areas remained free of controls or restrictions. The citizens of all nations had the right to use these unrestricted ocean areas for any innocent purpose, including navigation and fishing. Except for controls over its own citizens, no nation had the responsibility, let alone the unilateral authority, to control such activities in international waters. And, since there were few standards of conduct that applied on the "open seas," there were few jurisdictional conflicts between nations. The lack of standards is traceable to popular perceptions held before the middle of this century. By and large, marine pollution was not perceived as a significant problem, in part because the adverse effect of coastal activities on ocean ecosystems was not widely recognized, and pollution caused by human activities was generally believed to be limited to that caused by navigation. Moreover, the freedom to fish, or overfish, was an essential element of the traditional legal doctrine of freedom of the seas that no maritime country wished to see limited. And finally, the technology that later allowed exploitation of other ocean resources, such as oil, did not yet exist. To date, controlling pollution and regulating ocean resources have still not been comprehensively addressed by law, but international law—established through the customs and practices of nations—does not preclude such efforts. And two recent developments may actually lead to future international rules providing for ecosystem management. First, the establishment of extensive fishery zones, extending territorial authority as far as 200 miles out from a country's coast, has provided the opportunity for nations individually to manage larger ecosystems. This opportunity, combined with national self-interest in maintaining fish populations, could lead nations to reevaluate policies for management of their fisheries and to address the problem of pollution in territorial waters. Second, the international community is beginning to understand the importance of preserving the resources and ecology of international waters and to show signs of accepting responsibility for doing so. As an international consensus regarding the need for comprehensive management of ocean resources develops, it will become more likely that international standards and policies for broader regulation of human activities that affect ocean ecosystems will be adopted and implemented.
199202_2-RC_1_1
[ "the nearest coastal nation regulated activities", "few controls or restrictions applied to ocean areas", "the ocean areas were used for only innocent purposes", "the freedom of the seas doctrine settled all claims concerning navigation and fishing", "broad authority over international waters was shared equally among all nations" ]
1
According to the passage, until the mid-twentieth century there were few jurisdictional disputes over international waters because
The extent of a nation's power over its coastal ecosystems and the natural resources in its coastal waters has been defined by two international law doctrines: freedom of the seas and adjacent state sovereignty. Until the mid-twentieth century, most nations favored application of broad open-seas freedoms and limited sovereign rights over coastal waters. A nation had the right to include within its territorial dominion only a very narrow band of coastal waters (generally extending three miles from the shoreline), within which it had the authority, but not the responsibility, to regulate all activities. But, because this area of territorial dominion was so limited, most nations did not establish rules for management or protection of their territorial waters. Regardless of whether or not nations enforced regulations in their territorial waters, large ocean areas remained free of controls or restrictions. The citizens of all nations had the right to use these unrestricted ocean areas for any innocent purpose, including navigation and fishing. Except for controls over its own citizens, no nation had the responsibility, let alone the unilateral authority, to control such activities in international waters. And, since there were few standards of conduct that applied on the "open seas," there were few jurisdictional conflicts between nations. The lack of standards is traceable to popular perceptions held before the middle of this century. By and large, marine pollution was not perceived as a significant problem, in part because the adverse effect of coastal activities on ocean ecosystems was not widely recognized, and pollution caused by human activities was generally believed to be limited to that caused by navigation. Moreover, the freedom to fish, or overfish, was an essential element of the traditional legal doctrine of freedom of the seas that no maritime country wished to see limited. And finally, the technology that later allowed exploitation of other ocean resources, such as oil, did not yet exist. To date, controlling pollution and regulating ocean resources have still not been comprehensively addressed by law, but international law—established through the customs and practices of nations—does not preclude such efforts. And two recent developments may actually lead to future international rules providing for ecosystem management. First, the establishment of extensive fishery zones, extending territorial authority as far as 200 miles out from a country's coast, has provided the opportunity for nations individually to manage larger ecosystems. This opportunity, combined with national self-interest in maintaining fish populations, could lead nations to reevaluate policies for management of their fisheries and to address the problem of pollution in territorial waters. Second, the international community is beginning to understand the importance of preserving the resources and ecology of international waters and to show signs of accepting responsibility for doing so. As an international consensus regarding the need for comprehensive management of ocean resources develops, it will become more likely that international standards and policies for broader regulation of human activities that affect ocean ecosystems will be adopted and implemented.
199202_2-RC_1_2
[ "formally censured by an international organization for not properly regulating marine activities", "called upon by other nations to establish rules to protect its territorial waters", "able but not required to place legal limits on such commercial activities", "allowed to resolve the problem at its own discretion providing it could contain the threat to its own territorial waters", "permitted to hold the commercial offenders liable only if they were citizens of that particular nation" ]
2
According to the international law doctrines applicable before the mid-twentieth century, if commercial activity within a particular nation's territorial waters threatened all marine life in those waters, the nation would have been
The extent of a nation's power over its coastal ecosystems and the natural resources in its coastal waters has been defined by two international law doctrines: freedom of the seas and adjacent state sovereignty. Until the mid-twentieth century, most nations favored application of broad open-seas freedoms and limited sovereign rights over coastal waters. A nation had the right to include within its territorial dominion only a very narrow band of coastal waters (generally extending three miles from the shoreline), within which it had the authority, but not the responsibility, to regulate all activities. But, because this area of territorial dominion was so limited, most nations did not establish rules for management or protection of their territorial waters. Regardless of whether or not nations enforced regulations in their territorial waters, large ocean areas remained free of controls or restrictions. The citizens of all nations had the right to use these unrestricted ocean areas for any innocent purpose, including navigation and fishing. Except for controls over its own citizens, no nation had the responsibility, let alone the unilateral authority, to control such activities in international waters. And, since there were few standards of conduct that applied on the "open seas," there were few jurisdictional conflicts between nations. The lack of standards is traceable to popular perceptions held before the middle of this century. By and large, marine pollution was not perceived as a significant problem, in part because the adverse effect of coastal activities on ocean ecosystems was not widely recognized, and pollution caused by human activities was generally believed to be limited to that caused by navigation. Moreover, the freedom to fish, or overfish, was an essential element of the traditional legal doctrine of freedom of the seas that no maritime country wished to see limited. And finally, the technology that later allowed exploitation of other ocean resources, such as oil, did not yet exist. To date, controlling pollution and regulating ocean resources have still not been comprehensively addressed by law, but international law—established through the customs and practices of nations—does not preclude such efforts. And two recent developments may actually lead to future international rules providing for ecosystem management. First, the establishment of extensive fishery zones, extending territorial authority as far as 200 miles out from a country's coast, has provided the opportunity for nations individually to manage larger ecosystems. This opportunity, combined with national self-interest in maintaining fish populations, could lead nations to reevaluate policies for management of their fisheries and to address the problem of pollution in territorial waters. Second, the international community is beginning to understand the importance of preserving the resources and ecology of international waters and to show signs of accepting responsibility for doing so. As an international consensus regarding the need for comprehensive management of ocean resources develops, it will become more likely that international standards and policies for broader regulation of human activities that affect ocean ecosystems will be adopted and implemented.
199202_2-RC_1_3
[ "managing ecosystems in either territorial or international waters was given low priority", "unlimited resources in international waters resulted in little interest in territorial waters", "nations considered it their responsibility to protect territorial but not international waters", "a nation's authority over its citizenry ended at territorial lines", "although nations could extend their territorial dominion beyond three miles from their shoreline, most chose not to do so" ]
0
The author suggests that, before the mid-twentieth century, most nations' actions with respect to territorial and international waters indicated that
The extent of a nation's power over its coastal ecosystems and the natural resources in its coastal waters has been defined by two international law doctrines: freedom of the seas and adjacent state sovereignty. Until the mid-twentieth century, most nations favored application of broad open-seas freedoms and limited sovereign rights over coastal waters. A nation had the right to include within its territorial dominion only a very narrow band of coastal waters (generally extending three miles from the shoreline), within which it had the authority, but not the responsibility, to regulate all activities. But, because this area of territorial dominion was so limited, most nations did not establish rules for management or protection of their territorial waters. Regardless of whether or not nations enforced regulations in their territorial waters, large ocean areas remained free of controls or restrictions. The citizens of all nations had the right to use these unrestricted ocean areas for any innocent purpose, including navigation and fishing. Except for controls over its own citizens, no nation had the responsibility, let alone the unilateral authority, to control such activities in international waters. And, since there were few standards of conduct that applied on the "open seas," there were few jurisdictional conflicts between nations. The lack of standards is traceable to popular perceptions held before the middle of this century. By and large, marine pollution was not perceived as a significant problem, in part because the adverse effect of coastal activities on ocean ecosystems was not widely recognized, and pollution caused by human activities was generally believed to be limited to that caused by navigation. Moreover, the freedom to fish, or overfish, was an essential element of the traditional legal doctrine of freedom of the seas that no maritime country wished to see limited. And finally, the technology that later allowed exploitation of other ocean resources, such as oil, did not yet exist. To date, controlling pollution and regulating ocean resources have still not been comprehensively addressed by law, but international law—established through the customs and practices of nations—does not preclude such efforts. And two recent developments may actually lead to future international rules providing for ecosystem management. First, the establishment of extensive fishery zones, extending territorial authority as far as 200 miles out from a country's coast, has provided the opportunity for nations individually to manage larger ecosystems. This opportunity, combined with national self-interest in maintaining fish populations, could lead nations to reevaluate policies for management of their fisheries and to address the problem of pollution in territorial waters. Second, the international community is beginning to understand the importance of preserving the resources and ecology of international waters and to show signs of accepting responsibility for doing so. As an international consensus regarding the need for comprehensive management of ocean resources develops, it will become more likely that international standards and policies for broader regulation of human activities that affect ocean ecosystems will be adopted and implemented.
199202_2-RC_1_4
[ "increased political pressure on individual nations to establish comprehensive laws regulating ocean resources", "a greater number of jurisdictional disputes among nations over the regulation of fishing on the open seas", "the opportunity for some nations to manage large ocean ecosystems", "a new awareness of the need to minimize pollution caused by navigation", "a political incentive for smaller nations to solve the problems of pollution in their coastal waters" ]
2
The author cites which one of the following as an effect of the extension of territorial waters beyond the three-mile limit?
The extent of a nation's power over its coastal ecosystems and the natural resources in its coastal waters has been defined by two international law doctrines: freedom of the seas and adjacent state sovereignty. Until the mid-twentieth century, most nations favored application of broad open-seas freedoms and limited sovereign rights over coastal waters. A nation had the right to include within its territorial dominion only a very narrow band of coastal waters (generally extending three miles from the shoreline), within which it had the authority, but not the responsibility, to regulate all activities. But, because this area of territorial dominion was so limited, most nations did not establish rules for management or protection of their territorial waters. Regardless of whether or not nations enforced regulations in their territorial waters, large ocean areas remained free of controls or restrictions. The citizens of all nations had the right to use these unrestricted ocean areas for any innocent purpose, including navigation and fishing. Except for controls over its own citizens, no nation had the responsibility, let alone the unilateral authority, to control such activities in international waters. And, since there were few standards of conduct that applied on the "open seas," there were few jurisdictional conflicts between nations. The lack of standards is traceable to popular perceptions held before the middle of this century. By and large, marine pollution was not perceived as a significant problem, in part because the adverse effect of coastal activities on ocean ecosystems was not widely recognized, and pollution caused by human activities was generally believed to be limited to that caused by navigation. Moreover, the freedom to fish, or overfish, was an essential element of the traditional legal doctrine of freedom of the seas that no maritime country wished to see limited. And finally, the technology that later allowed exploitation of other ocean resources, such as oil, did not yet exist. To date, controlling pollution and regulating ocean resources have still not been comprehensively addressed by law, but international law—established through the customs and practices of nations—does not preclude such efforts. And two recent developments may actually lead to future international rules providing for ecosystem management. First, the establishment of extensive fishery zones, extending territorial authority as far as 200 miles out from a country's coast, has provided the opportunity for nations individually to manage larger ecosystems. This opportunity, combined with national self-interest in maintaining fish populations, could lead nations to reevaluate policies for management of their fisheries and to address the problem of pollution in territorial waters. Second, the international community is beginning to understand the importance of preserving the resources and ecology of international waters and to show signs of accepting responsibility for doing so. As an international consensus regarding the need for comprehensive management of ocean resources develops, it will become more likely that international standards and policies for broader regulation of human activities that affect ocean ecosystems will be adopted and implemented.
199202_2-RC_1_5
[ "the waters appeared to be unpolluted and to contain unlimited resources", "the fishing industry would be adversely affected by such rules", "the size of the area that would be subject to such rules was insignificant", "the technology needed for pollution control and resource management did not exist", "there were few jurisdictional conflicts over nations' territorial waters" ]
2
According to the passage, before the middle of the twentieth century, nations failed to establish rules protecting their territorial waters because
The extent of a nation's power over its coastal ecosystems and the natural resources in its coastal waters has been defined by two international law doctrines: freedom of the seas and adjacent state sovereignty. Until the mid-twentieth century, most nations favored application of broad open-seas freedoms and limited sovereign rights over coastal waters. A nation had the right to include within its territorial dominion only a very narrow band of coastal waters (generally extending three miles from the shoreline), within which it had the authority, but not the responsibility, to regulate all activities. But, because this area of territorial dominion was so limited, most nations did not establish rules for management or protection of their territorial waters. Regardless of whether or not nations enforced regulations in their territorial waters, large ocean areas remained free of controls or restrictions. The citizens of all nations had the right to use these unrestricted ocean areas for any innocent purpose, including navigation and fishing. Except for controls over its own citizens, no nation had the responsibility, let alone the unilateral authority, to control such activities in international waters. And, since there were few standards of conduct that applied on the "open seas," there were few jurisdictional conflicts between nations. The lack of standards is traceable to popular perceptions held before the middle of this century. By and large, marine pollution was not perceived as a significant problem, in part because the adverse effect of coastal activities on ocean ecosystems was not widely recognized, and pollution caused by human activities was generally believed to be limited to that caused by navigation. Moreover, the freedom to fish, or overfish, was an essential element of the traditional legal doctrine of freedom of the seas that no maritime country wished to see limited. And finally, the technology that later allowed exploitation of other ocean resources, such as oil, did not yet exist. To date, controlling pollution and regulating ocean resources have still not been comprehensively addressed by law, but international law—established through the customs and practices of nations—does not preclude such efforts. And two recent developments may actually lead to future international rules providing for ecosystem management. First, the establishment of extensive fishery zones, extending territorial authority as far as 200 miles out from a country's coast, has provided the opportunity for nations individually to manage larger ecosystems. This opportunity, combined with national self-interest in maintaining fish populations, could lead nations to reevaluate policies for management of their fisheries and to address the problem of pollution in territorial waters. Second, the international community is beginning to understand the importance of preserving the resources and ecology of international waters and to show signs of accepting responsibility for doing so. As an international consensus regarding the need for comprehensive management of ocean resources develops, it will become more likely that international standards and policies for broader regulation of human activities that affect ocean ecosystems will be adopted and implemented.
199202_2-RC_1_6
[ "a chronology of the events that have led up to a present-day crisis", "a legal inquiry into the abuse of existing laws and the likelihood of reform", "a political analysis of the problems inherent in directing national attention to an international issue", "a historical analysis of a problem that requires international attention", "a proposal for adopting and implementing international standards to solve an ecological problem" ]
3
The passage as a whole can best be described as
The human species came into being at the time of the greatest biological diversity in the history of the Earth. Today, as human populations expand and alter the natural environment, they are reducing biological diversity to its lowest level since the end of the Mesozoic era, 65 million years ago. The ultimate consequences of this biological collision are beyond calculation, but they are certain to be harmful. That, in essence, is the biodiversity crisis. The history of global diversity can be summarized as follows: after the initial flowering of multicellular animals, there was a swift rise in the number of species in early Paleozoic times (between 600 and 430 million years ago), then plateaulike stagnation for the remaining 200 million years of the Paleozoic era, and finally a slow but steady climb through the Mesozoic and Cenozoic eras to diversity's all-time high. This history suggests that biological diversity was hard won and a long time in coming. Furthermore, this pattern of increase was set back by five massive extinction episodes. The most recent of these, during the Cretaceous period, is by far the most famous, because it ended the age of the dinosaurs, conferred hegemony on the mammals, and ultimately made possible the ascendancy of the human species. But the Cretaceous crisis was minor compared with the Permian extinctions 240 million years ago, during which between 77 and 96 percent of marine animal species perished. It took 5 million years, well into Mesozoic times, for species diversity to begin a significant recovery. Within the past 10,000 years biological diversity has entered a wholly new era. Human activity has had a devastating effect on species diversity, and the rate of human-induced extinctions is accelerating. Half of the bird species of Polynesia have been eliminated through hunting and the destruction of native forests. Hundreds of fish species endemic to Lake Victoria are now threatened with extinction following the careless introduction of one species of fish, the Nile perch. The list of such biogeographic disasters is extensive. Because every species is unique and irreplaceable, the loss of biodiversity is the most profound process of environmental change. Its consequences are also the least predictable because the value of the Earth's biota (the fauna and flora collectively) remains largely unstudied and unappreciated; unlike material and cultural wealth, which we understand because they are the substance of our everyday lives, biological wealth is usually taken for granted. This is a serious strategic error, one that will be increasingly regretted as time passes. The biota is not only part of a country's heritage, the product of millions of years of evolution centered on that place; it is also a potential source for immense untapped material wealth in the form of food, medicine, and other commercially important substances.
199202_2-RC_2_7
[ "The reduction in biodiversity is an irreversible process that represents a setback both for science and for society as a whole.", "The material and cultural wealth of a nation are insignificant when compared with the country's biological wealth.", "The enormous diversity of life on Earth could not have come about without periodic extinctions that have conferred preeminence on one species at the expense of another.", "The human species is in the process of initiating a massive extinction episode that may make past episodes look minor by comparison.", "The current decline in species diversity is a human-induced tragedy of incalculable proportions that has potentially grave consequences for the human species." ]
4
Which one of the following best expresses the main idea of the passage?
The human species came into being at the time of the greatest biological diversity in the history of the Earth. Today, as human populations expand and alter the natural environment, they are reducing biological diversity to its lowest level since the end of the Mesozoic era, 65 million years ago. The ultimate consequences of this biological collision are beyond calculation, but they are certain to be harmful. That, in essence, is the biodiversity crisis. The history of global diversity can be summarized as follows: after the initial flowering of multicellular animals, there was a swift rise in the number of species in early Paleozoic times (between 600 and 430 million years ago), then plateaulike stagnation for the remaining 200 million years of the Paleozoic era, and finally a slow but steady climb through the Mesozoic and Cenozoic eras to diversity's all-time high. This history suggests that biological diversity was hard won and a long time in coming. Furthermore, this pattern of increase was set back by five massive extinction episodes. The most recent of these, during the Cretaceous period, is by far the most famous, because it ended the age of the dinosaurs, conferred hegemony on the mammals, and ultimately made possible the ascendancy of the human species. But the Cretaceous crisis was minor compared with the Permian extinctions 240 million years ago, during which between 77 and 96 percent of marine animal species perished. It took 5 million years, well into Mesozoic times, for species diversity to begin a significant recovery. Within the past 10,000 years biological diversity has entered a wholly new era. Human activity has had a devastating effect on species diversity, and the rate of human-induced extinctions is accelerating. Half of the bird species of Polynesia have been eliminated through hunting and the destruction of native forests. Hundreds of fish species endemic to Lake Victoria are now threatened with extinction following the careless introduction of one species of fish, the Nile perch. The list of such biogeographic disasters is extensive. Because every species is unique and irreplaceable, the loss of biodiversity is the most profound process of environmental change. Its consequences are also the least predictable because the value of the Earth's biota (the fauna and flora collectively) remains largely unstudied and unappreciated; unlike material and cultural wealth, which we understand because they are the substance of our everyday lives, biological wealth is usually taken for granted. This is a serious strategic error, one that will be increasingly regretted as time passes. The biota is not only part of a country's heritage, the product of millions of years of evolution centered on that place; it is also a potential source for immense untapped material wealth in the form of food, medicine, and other commercially important substances.
199202_2-RC_2_8
[ "The number of fish in a lake declines abruptly as a result of water pollution, then makes a slow comeback after cleanup efforts and the passage of ordinances against dumping.", "The concentration of chlorine in the water supply of a large city fluctuates widely before stabilizing at a constant and safe level.", "An old-fashioned article of clothing goes in and out of style periodically as a result of features in fashion magazines and the popularity of certain period films.", "After valuable mineral deposits are discovered, the population of a geographic region booms, then levels off and begins to decrease at a slow and steady rate.", "The variety of styles stocked by a shoe store increases rapidly after the store opens, holds constant for many months, and then gradually creeps upward." ]
4
Which one of the following situations is most analogous to the history of global diversity summarized in lines 10-18 of the passage?
The human species came into being at the time of the greatest biological diversity in the history of the Earth. Today, as human populations expand and alter the natural environment, they are reducing biological diversity to its lowest level since the end of the Mesozoic era, 65 million years ago. The ultimate consequences of this biological collision are beyond calculation, but they are certain to be harmful. That, in essence, is the biodiversity crisis. The history of global diversity can be summarized as follows: after the initial flowering of multicellular animals, there was a swift rise in the number of species in early Paleozoic times (between 600 and 430 million years ago), then plateaulike stagnation for the remaining 200 million years of the Paleozoic era, and finally a slow but steady climb through the Mesozoic and Cenozoic eras to diversity's all-time high. This history suggests that biological diversity was hard won and a long time in coming. Furthermore, this pattern of increase was set back by five massive extinction episodes. The most recent of these, during the Cretaceous period, is by far the most famous, because it ended the age of the dinosaurs, conferred hegemony on the mammals, and ultimately made possible the ascendancy of the human species. But the Cretaceous crisis was minor compared with the Permian extinctions 240 million years ago, during which between 77 and 96 percent of marine animal species perished. It took 5 million years, well into Mesozoic times, for species diversity to begin a significant recovery. Within the past 10,000 years biological diversity has entered a wholly new era. Human activity has had a devastating effect on species diversity, and the rate of human-induced extinctions is accelerating. Half of the bird species of Polynesia have been eliminated through hunting and the destruction of native forests. Hundreds of fish species endemic to Lake Victoria are now threatened with extinction following the careless introduction of one species of fish, the Nile perch. The list of such biogeographic disasters is extensive. Because every species is unique and irreplaceable, the loss of biodiversity is the most profound process of environmental change. Its consequences are also the least predictable because the value of the Earth's biota (the fauna and flora collectively) remains largely unstudied and unappreciated; unlike material and cultural wealth, which we understand because they are the substance of our everyday lives, biological wealth is usually taken for granted. This is a serious strategic error, one that will be increasingly regretted as time passes. The biota is not only part of a country's heritage, the product of millions of years of evolution centered on that place; it is also a potential source for immense untapped material wealth in the form of food, medicine, and other commercially important substances.
199202_2-RC_2_9
[ "It was the second most devastating extinction episode in history.", "It was the most devastating extinction episode up until that time.", "It was less devastating to species diversity than is the current biodiversity crisis.", "The rate of extinction among marine animal species as a result of the crisis did not approach 77 percent.", "The dinosaurs comprised the great majority of species that perished during the crisis." ]
3
The author suggests which one of the following about the Cretaceous crisis?
The human species came into being at the time of the greatest biological diversity in the history of the Earth. Today, as human populations expand and alter the natural environment, they are reducing biological diversity to its lowest level since the end of the Mesozoic era, 65 million years ago. The ultimate consequences of this biological collision are beyond calculation, but they are certain to be harmful. That, in essence, is the biodiversity crisis. The history of global diversity can be summarized as follows: after the initial flowering of multicellular animals, there was a swift rise in the number of species in early Paleozoic times (between 600 and 430 million years ago), then plateaulike stagnation for the remaining 200 million years of the Paleozoic era, and finally a slow but steady climb through the Mesozoic and Cenozoic eras to diversity's all-time high. This history suggests that biological diversity was hard won and a long time in coming. Furthermore, this pattern of increase was set back by five massive extinction episodes. The most recent of these, during the Cretaceous period, is by far the most famous, because it ended the age of the dinosaurs, conferred hegemony on the mammals, and ultimately made possible the ascendancy of the human species. But the Cretaceous crisis was minor compared with the Permian extinctions 240 million years ago, during which between 77 and 96 percent of marine animal species perished. It took 5 million years, well into Mesozoic times, for species diversity to begin a significant recovery. Within the past 10,000 years biological diversity has entered a wholly new era. Human activity has had a devastating effect on species diversity, and the rate of human-induced extinctions is accelerating. Half of the bird species of Polynesia have been eliminated through hunting and the destruction of native forests. Hundreds of fish species endemic to Lake Victoria are now threatened with extinction following the careless introduction of one species of fish, the Nile perch. The list of such biogeographic disasters is extensive. Because every species is unique and irreplaceable, the loss of biodiversity is the most profound process of environmental change. Its consequences are also the least predictable because the value of the Earth's biota (the fauna and flora collectively) remains largely unstudied and unappreciated; unlike material and cultural wealth, which we understand because they are the substance of our everyday lives, biological wealth is usually taken for granted. This is a serious strategic error, one that will be increasingly regretted as time passes. The biota is not only part of a country's heritage, the product of millions of years of evolution centered on that place; it is also a potential source for immense untapped material wealth in the form of food, medicine, and other commercially important substances.
199202_2-RC_2_10
[ "a species that has become extinct through human activity", "the typical lack of foresight that has led to biogeographic disaster", "a marine animal species that survived the Permian extinctions", "a species that is a potential source of material wealth", "the kind of action that is necessary to reverse the decline in species diversity" ]
1
The author mentions the Nile perch in order to provide an example of
The human species came into being at the time of the greatest biological diversity in the history of the Earth. Today, as human populations expand and alter the natural environment, they are reducing biological diversity to its lowest level since the end of the Mesozoic era, 65 million years ago. The ultimate consequences of this biological collision are beyond calculation, but they are certain to be harmful. That, in essence, is the biodiversity crisis. The history of global diversity can be summarized as follows: after the initial flowering of multicellular animals, there was a swift rise in the number of species in early Paleozoic times (between 600 and 430 million years ago), then plateaulike stagnation for the remaining 200 million years of the Paleozoic era, and finally a slow but steady climb through the Mesozoic and Cenozoic eras to diversity's all-time high. This history suggests that biological diversity was hard won and a long time in coming. Furthermore, this pattern of increase was set back by five massive extinction episodes. The most recent of these, during the Cretaceous period, is by far the most famous, because it ended the age of the dinosaurs, conferred hegemony on the mammals, and ultimately made possible the ascendancy of the human species. But the Cretaceous crisis was minor compared with the Permian extinctions 240 million years ago, during which between 77 and 96 percent of marine animal species perished. It took 5 million years, well into Mesozoic times, for species diversity to begin a significant recovery. Within the past 10,000 years biological diversity has entered a wholly new era. Human activity has had a devastating effect on species diversity, and the rate of human-induced extinctions is accelerating. Half of the bird species of Polynesia have been eliminated through hunting and the destruction of native forests. Hundreds of fish species endemic to Lake Victoria are now threatened with extinction following the careless introduction of one species of fish, the Nile perch. The list of such biogeographic disasters is extensive. Because every species is unique and irreplaceable, the loss of biodiversity is the most profound process of environmental change. Its consequences are also the least predictable because the value of the Earth's biota (the fauna and flora collectively) remains largely unstudied and unappreciated; unlike material and cultural wealth, which we understand because they are the substance of our everyday lives, biological wealth is usually taken for granted. This is a serious strategic error, one that will be increasingly regretted as time passes. The biota is not only part of a country's heritage, the product of millions of years of evolution centered on that place; it is also a potential source for immense untapped material wealth in the form of food, medicine, and other commercially important substances.
199202_2-RC_2_11
[ "hunting", "pollution", "deforestation", "the growth of human populations", "human-engineered changes in the environment" ]
1
All of the following are explicitly mentioned in the passage as contributing to the extinction of species EXCEPT
The human species came into being at the time of the greatest biological diversity in the history of the Earth. Today, as human populations expand and alter the natural environment, they are reducing biological diversity to its lowest level since the end of the Mesozoic era, 65 million years ago. The ultimate consequences of this biological collision are beyond calculation, but they are certain to be harmful. That, in essence, is the biodiversity crisis. The history of global diversity can be summarized as follows: after the initial flowering of multicellular animals, there was a swift rise in the number of species in early Paleozoic times (between 600 and 430 million years ago), then plateaulike stagnation for the remaining 200 million years of the Paleozoic era, and finally a slow but steady climb through the Mesozoic and Cenozoic eras to diversity's all-time high. This history suggests that biological diversity was hard won and a long time in coming. Furthermore, this pattern of increase was set back by five massive extinction episodes. The most recent of these, during the Cretaceous period, is by far the most famous, because it ended the age of the dinosaurs, conferred hegemony on the mammals, and ultimately made possible the ascendancy of the human species. But the Cretaceous crisis was minor compared with the Permian extinctions 240 million years ago, during which between 77 and 96 percent of marine animal species perished. It took 5 million years, well into Mesozoic times, for species diversity to begin a significant recovery. Within the past 10,000 years biological diversity has entered a wholly new era. Human activity has had a devastating effect on species diversity, and the rate of human-induced extinctions is accelerating. Half of the bird species of Polynesia have been eliminated through hunting and the destruction of native forests. Hundreds of fish species endemic to Lake Victoria are now threatened with extinction following the careless introduction of one species of fish, the Nile perch. The list of such biogeographic disasters is extensive. Because every species is unique and irreplaceable, the loss of biodiversity is the most profound process of environmental change. Its consequences are also the least predictable because the value of the Earth's biota (the fauna and flora collectively) remains largely unstudied and unappreciated; unlike material and cultural wealth, which we understand because they are the substance of our everyday lives, biological wealth is usually taken for granted. This is a serious strategic error, one that will be increasingly regretted as time passes. The biota is not only part of a country's heritage, the product of millions of years of evolution centered on that place; it is also a potential source for immense untapped material wealth in the form of food, medicine, and other commercially important substances.
199202_2-RC_2_12
[ "Because we can readily assess the value of material and cultural wealth, we tend not to take them for granted.", "Just as the biota is a source of potential material wealth, it is an untapped source of cultural wealth as well.", "Some degree of material and cultural wealth may have to be sacrificed if we are to protect our biological heritage.", "Material and cultural wealth are of less value than biological wealth because they have evolved over a shorter period of time.", "Material wealth and biological wealth are interdependent in a way that material wealth and cultural wealth are not." ]
0
The passage suggests which one of the following about material and cultural wealth?
The human species came into being at the time of the greatest biological diversity in the history of the Earth. Today, as human populations expand and alter the natural environment, they are reducing biological diversity to its lowest level since the end of the Mesozoic era, 65 million years ago. The ultimate consequences of this biological collision are beyond calculation, but they are certain to be harmful. That, in essence, is the biodiversity crisis. The history of global diversity can be summarized as follows: after the initial flowering of multicellular animals, there was a swift rise in the number of species in early Paleozoic times (between 600 and 430 million years ago), then plateaulike stagnation for the remaining 200 million years of the Paleozoic era, and finally a slow but steady climb through the Mesozoic and Cenozoic eras to diversity's all-time high. This history suggests that biological diversity was hard won and a long time in coming. Furthermore, this pattern of increase was set back by five massive extinction episodes. The most recent of these, during the Cretaceous period, is by far the most famous, because it ended the age of the dinosaurs, conferred hegemony on the mammals, and ultimately made possible the ascendancy of the human species. But the Cretaceous crisis was minor compared with the Permian extinctions 240 million years ago, during which between 77 and 96 percent of marine animal species perished. It took 5 million years, well into Mesozoic times, for species diversity to begin a significant recovery. Within the past 10,000 years biological diversity has entered a wholly new era. Human activity has had a devastating effect on species diversity, and the rate of human-induced extinctions is accelerating. Half of the bird species of Polynesia have been eliminated through hunting and the destruction of native forests. Hundreds of fish species endemic to Lake Victoria are now threatened with extinction following the careless introduction of one species of fish, the Nile perch. The list of such biogeographic disasters is extensive. Because every species is unique and irreplaceable, the loss of biodiversity is the most profound process of environmental change. Its consequences are also the least predictable because the value of the Earth's biota (the fauna and flora collectively) remains largely unstudied and unappreciated; unlike material and cultural wealth, which we understand because they are the substance of our everyday lives, biological wealth is usually taken for granted. This is a serious strategic error, one that will be increasingly regretted as time passes. The biota is not only part of a country's heritage, the product of millions of years of evolution centered on that place; it is also a potential source for immense untapped material wealth in the form of food, medicine, and other commercially important substances.
199202_2-RC_2_13
[ "The loss of species diversity will have as immediate an impact on the material wealth of nations as on their biological wealth.", "The crisis will likely end the hegemony of the human race and bring about the ascendancy of another species.", "The effects of the loss of species diversity will be dire, but we cannot yet tell how dire.", "It is more fruitful to discuss the consequences of the crisis in terms of the potential loss to humanity than in strictly biological terms.", "The consequences of the crisis can be minimized, but the pace of extinctions cannot be reversed." ]
2
The author would be most likely to agree with which one of the following statements about the consequences of the biodiversity crisis?
Women's participation in the revolutionary events in France between 1789 and 1795 has only recently been given nuanced treatment. Early twentieth-century historians of the French Revolution are typified by Jaurès, who, though sympathetic to the women's movement of his own time, never even mentions its antecedents in revolutionary France. Even today most general histories treat only cursorily a few individual women, like Marie Antoinette. The recent studies by Landes, Badinter, Godineau, and Roudinesco, however, should signal a much-needed reassessment of women's participation. Godineau and Roudinesco point to three significant phases in that participation. The first, up to mid-1792, involved those women who wrote political tracts. Typical of their orientation to theoretical issues—in Godineau's view, without practical effect—is Marie Gouze's Declaration of the Rights of Women. The emergence of vocal middle-class women's political clubs marks the second phase. Formed in 1791 as adjuncts of middle-class male political clubs, and originally philanthropic in function, by late 1792 independent clubs of women began to advocate military participation for women. In the final phase, the famine of 1795 occasioned a mass women's movement: women seized food supplies, held officials hostage, and argued for the implementation of democratic politics. This phase ended in May of 1795 with the military suppression of this multiclass movement. In all three phases women's participation in politics contrasted markedly with their participation before 1789. Before that date some noblewomen participated indirectly in elections, but such participation by more than a narrow range of the population—women or men—came only with the Revolution. What makes the recent studies particularly compelling, however, is not so much their organization of chronology as their unflinching willingness to confront the reasons for the collapse of the women's movement. For Landes and Badinter, the necessity of women's having to speak in the established vocabularies of certain intellectual and political traditions diminished the ability of the women's movement to resist suppression. Many women, and many men, they argue, located their vision within the confining tradition of Jean-Jacques Rousseau, who linked male and female roles with public and private spheres respectively. But, when women went on to make political alliances with radical Jacobin men, Badinter asserts, they adopted a vocabulary and a violently extremist viewpoint that unfortunately was even more damaging to their political interests. Each of these scholars has a different political agenda and takes a different approach—Godineau, for example, works with police archives while Roudinesco uses explanatory schema from modern psychology. Yet, admirably, each gives center stage to a group that previously has been marginalized, or at best undifferentiated, by historians. And in the case of Landes and Badinter, the reader is left with a sobering awareness of the cost to the women of the Revolution of speaking in borrowed voices.
199202_2-RC_3_14
[ "According to recent historical studies, the participation of women in the revolutionary events of 1789–1795 can most profitably be viewed in three successive stages.", "The findings of certain recent historical studies have resulted from an earlier general reassessment, by historians, of women's participation in the revolutionary events of 1789–1795.", "Adopting the vocabulary and viewpoint of certain intellectual and political traditions resulted in no political advantage for women in France in the years 1789–1795.", "Certain recent historical studies have provided a much-needed description and evaluation of the evolving roles of women in the revolutionary events of 1789–1795.", "Historical studies that seek to explain the limitations of the women's movement in France during the years 1789–1795 are much more convincing than are those that seek only to describe the general features of that movement." ]
3
Which one of the following best states the main point of the passage?
Women's participation in the revolutionary events in France between 1789 and 1795 has only recently been given nuanced treatment. Early twentieth-century historians of the French Revolution are typified by Jaurès, who, though sympathetic to the women's movement of his own time, never even mentions its antecedents in revolutionary France. Even today most general histories treat only cursorily a few individual women, like Marie Antoinette. The recent studies by Landes, Badinter, Godineau, and Roudinesco, however, should signal a much-needed reassessment of women's participation. Godineau and Roudinesco point to three significant phases in that participation. The first, up to mid-1792, involved those women who wrote political tracts. Typical of their orientation to theoretical issues—in Godineau's view, without practical effect—is Marie Gouze's Declaration of the Rights of Women. The emergence of vocal middle-class women's political clubs marks the second phase. Formed in 1791 as adjuncts of middle-class male political clubs, and originally philanthropic in function, by late 1792 independent clubs of women began to advocate military participation for women. In the final phase, the famine of 1795 occasioned a mass women's movement: women seized food supplies, held officials hostage, and argued for the implementation of democratic politics. This phase ended in May of 1795 with the military suppression of this multiclass movement. In all three phases women's participation in politics contrasted markedly with their participation before 1789. Before that date some noblewomen participated indirectly in elections, but such participation by more than a narrow range of the population—women or men—came only with the Revolution. What makes the recent studies particularly compelling, however, is not so much their organization of chronology as their unflinching willingness to confront the reasons for the collapse of the women's movement. For Landes and Badinter, the necessity of women's having to speak in the established vocabularies of certain intellectual and political traditions diminished the ability of the women's movement to resist suppression. Many women, and many men, they argue, located their vision within the confining tradition of Jean-Jacques Rousseau, who linked male and female roles with public and private spheres respectively. But, when women went on to make political alliances with radical Jacobin men, Badinter asserts, they adopted a vocabulary and a violently extremist viewpoint that unfortunately was even more damaging to their political interests. Each of these scholars has a different political agenda and takes a different approach—Godineau, for example, works with police archives while Roudinesco uses explanatory schema from modern psychology. Yet, admirably, each gives center stage to a group that previously has been marginalized, or at best undifferentiated, by historians. And in the case of Landes and Badinter, the reader is left with a sobering awareness of the cost to the women of the Revolution of speaking in borrowed voices.
199202_2-RC_3_15
[ "This work was not understood by many of Gouze's contemporaries.", "This work indirectly inspired the formation of independent women's political clubs.", "This work had little impact on the world of political action.", "This work was the most compelling produced by a French woman between 1789 and 1792.", "This work is typical of the kind of writing French women produced between 1793 and 1795." ]
2
The passage suggests that Godineau would be likely to agree with which one of the following statements about Marie Gouze's Declaration of the Rights of Women?
Women's participation in the revolutionary events in France between 1789 and 1795 has only recently been given nuanced treatment. Early twentieth-century historians of the French Revolution are typified by Jaurès, who, though sympathetic to the women's movement of his own time, never even mentions its antecedents in revolutionary France. Even today most general histories treat only cursorily a few individual women, like Marie Antoinette. The recent studies by Landes, Badinter, Godineau, and Roudinesco, however, should signal a much-needed reassessment of women's participation. Godineau and Roudinesco point to three significant phases in that participation. The first, up to mid-1792, involved those women who wrote political tracts. Typical of their orientation to theoretical issues—in Godineau's view, without practical effect—is Marie Gouze's Declaration of the Rights of Women. The emergence of vocal middle-class women's political clubs marks the second phase. Formed in 1791 as adjuncts of middle-class male political clubs, and originally philanthropic in function, by late 1792 independent clubs of women began to advocate military participation for women. In the final phase, the famine of 1795 occasioned a mass women's movement: women seized food supplies, held officials hostage, and argued for the implementation of democratic politics. This phase ended in May of 1795 with the military suppression of this multiclass movement. In all three phases women's participation in politics contrasted markedly with their participation before 1789. Before that date some noblewomen participated indirectly in elections, but such participation by more than a narrow range of the population—women or men—came only with the Revolution. What makes the recent studies particularly compelling, however, is not so much their organization of chronology as their unflinching willingness to confront the reasons for the collapse of the women's movement. For Landes and Badinter, the necessity of women's having to speak in the established vocabularies of certain intellectual and political traditions diminished the ability of the women's movement to resist suppression. Many women, and many men, they argue, located their vision within the confining tradition of Jean-Jacques Rousseau, who linked male and female roles with public and private spheres respectively. But, when women went on to make political alliances with radical Jacobin men, Badinter asserts, they adopted a vocabulary and a violently extremist viewpoint that unfortunately was even more damaging to their political interests. Each of these scholars has a different political agenda and takes a different approach—Godineau, for example, works with police archives while Roudinesco uses explanatory schema from modern psychology. Yet, admirably, each gives center stage to a group that previously has been marginalized, or at best undifferentiated, by historians. And in the case of Landes and Badinter, the reader is left with a sobering awareness of the cost to the women of the Revolution of speaking in borrowed voices.
199202_2-RC_3_16
[ "These clubs fostered a mass women's movement.", "These clubs eventually developed a purpose different from their original purpose.", "These clubs were founded to advocate military participation for women.", "These clubs counteracted the original purpose of male political clubs.", "These clubs lost their direction by the time of the famine of 1795." ]
1
According to the passage, which one of the following is a true statement about the purpose of the women's political clubs mentioned in line 20?