Columns: context (string, 269 unique values), id_string (string, length 15–16), answers (sequence of 5 strings), label (int64, range 0–4), question (string, length 34–417)
The following passage was written in the mid-1990s. Users of the Internet—the worldwide network of interconnected computer systems—envision it as a way for people to have free access to information via their personal computers. Most Internet communication consists of sending electronic mail or exchanging ideas on electronic bulletin boards; however, a growing number of transmissions are of copyrighted works—books, photographs, videos and films, and sound recordings. In Canada, as elsewhere, the goals of Internet users have begun to conflict with reality as copyright holders look for ways to protect their material from unauthorized and uncompensated distribution. Copyright experts say that Canadian copyright law, which was revised in 1987 to cover works such as choreography and photography, has not kept pace with technology—specifically with digitalization, the conversion of data into a series of digits that are transmitted as electronic signals over computer networks. Digitalization makes it possible to create an unlimited number of copies of a book, recording, or movie and distribute them to millions of people around the world. Current law prohibits unauthorized parties from reproducing a work or any substantial part of it in any material form (e.g., photocopies of books or pirated audiotapes), but because digitalization merely transforms the work into electronic signals in a computer's memory, it is not clear whether digitalization constitutes a material reproduction—and so unauthorized digitalization is not yet technically a crime. Some experts propose simply adding unauthorized digitalization to the list of activities proscribed under current law, to make it clear that copyright holders own electronic reproduction rights just as they own rights to other types of reproduction. But criminalizing digitalization raises a host of questions.
For example, given that digitalization allows the multiple recipients of a transmission to re-create copies of a work, would only the act of digitalization itself be criminal, or should each copy made from the transmission be considered a separate instance of piracy—even though those who made the copies never had access to the original? In addition, laws against digitalization might be virtually unenforceable given that an estimated 20 million people around the world have access to the Internet, and that copying and distributing material is a relatively simple process. Furthermore, even an expanded law might not cover the majority of transmissions, given the vast numbers of users who are academics and the fact that current copyright law allows generous exemptions for those engaged in private study or research. But even if the law is revised to contain a more sophisticated treatment of digitalization, most experts think it will be hard to resolve the clash between the Internet community, which is accustomed to treating information as raw material available for everyone to use, and the publishing community, which is accustomed to treating it as a commodity owned by its creator.
200212_3-RC_4_27
[ "It is unlikely that every instance of digitalization could be detected under a copyright law revised to criminalize digitalization.", "Criminalizing unauthorized digitalization appears to be consistent with the publishing community's treatment of information as an owned commodity.", "When copyright law is revised to cover digitalization, the revised law will include a prohibition on making copies from an unauthorized digitalization of a copyrighted work.", "The number of instances of unauthorized digitalization would likely rise if digitalization technology were made even easier to use.", "Under current law, many academics are allowed to make copies of copyrighted works as long as they are used only for private research." ]
2
The passage supports each of the following inferences EXCEPT:
200212_3-RC_4_28
[ "Unauthorized digitalization of a copyrighted work should be considered a crime except when it is done for purposes of private study or research.", "Unauthorized digitalization of a copyrighted work should be considered a crime even when it is done for purposes of private study or research.", "Making a copy of a copyrighted work from an unauthorized digitalization of the work should not be considered a crime.", "Making a copy of a copyrighted work from an unauthorized digitalization of the work should be punished, but not as severely as making the original digitalization.", "Making a copy of a copyrighted work from an unauthorized digitalization of the work should be punished just as severely as making the original digitalization." ]
0
Which one of the following views can most reasonably be attributed to the experts cited in line 32?
Social scientists have traditionally defined multipolar international systems as consisting of three or more nations, each of roughly equal military and economic strength. Theoretically, the members of such systems create shifting, temporary alliances in response to changing circumstances in the international environment. Such systems are, thus, fluid and flexible. Frequent, small confrontations are one attribute of multipolar systems and are usually the result of less powerful members grouping together to counter threats from larger, more aggressive members seeking hegemony. Yet the constant and inevitable counterbalancing typical of such systems usually results in stability. The best-known example of a multipolar system is the Concert of Europe, which coincided with general peace on that continent lasting roughly 100 years beginning around 1815. Bipolar systems, on the other hand, involve two major members of roughly equal military and economic strength vying for power and advantage. Other members of lesser strength tend to coalesce around one or the other pole. Such systems tend to be rigid and fixed, in part due to the existence of only one axis of power. Zero-sum political and military maneuverings, in which a gain for one side results in an equivalent loss for the other, are a salient feature of bipolar systems. Overall superiority is sought by both major members, which can lead to frequent confrontations, debilitating armed conflict, and, eventually, to the capitulation of one or the other side. Athens and Sparta of ancient Greece had a bipolar relationship, as did the United States and the USSR during the Cold War. However, the shift in the geopolitical landscape following the end of the Cold War calls for a reassessment of the assumptions underlying these two theoretical concepts. 
The emerging but still vague multipolar system in Europe today brings with it the unsettling prospect of new conflicts and shifting alliances that may lead to a diminution, rather than an enhancement, of security. The frequent, small confrontations that are thought to have kept the Concert of Europe in a state of equilibrium would today, as nations arm themselves with modern weapons, create instability that could destroy the system. And the larger number of members and shifting alliance patterns peculiar to multipolar systems would create a bewildering tangle of conflicts. This reassessment may also lead us to look at the Cold War in a new light. In 1914 smaller members of the multipolar system in Europe brought the larger members into a war that engulfed the continent. The aftermath—a crippled system in which certain members were dismantled, punished, or voluntarily withdrew—created the conditions that led to World War II. In contrast, the principal attributes of bipolar systems—two major members with only one possible axis of conflict locked in a rigid yet usually stable struggle for power—may have created the necessary parameters for general peace in the second half of the twentieth century.
200306_4-RC_1_1
[ "Peace can be maintained in Europe only if a new bipolar system emerges to replace Cold War alliances.", "All kinds of international systems discussed by social scientists carry within themselves the seeds of their own collapse and ultimately endanger international order.", "The current European geopolitical landscape is a multipolar system that strongly resembles the Concert of Europe which existed through most of the nineteenth century.", "Multipolarity fostered the conditions that led to World War II and is incompatible with a stable, modern Europe.", "The characterization of multipolar systems as stable and bipolar systems as open to debilitating conflict needs to be reconsidered in light of the realities of post-Cold War Europe." ]
4
Which one of the following most accurately expresses the main point of the passage?
200306_4-RC_1_2
[ "The weaknesses of both types of systems are discussed in the context of twentieth-century European history.", "A prediction is made regarding European security based on the attributes of both types of systems.", "A new argument is introduced in favor of European countries embracing a new bipolar system.", "Twentieth-century European history is used to expand on the argument in the previous paragraph.", "The typical characteristics of the major members of a bipolar system are reviewed." ]
3
Which one of the following statements most accurately describes the function of the final paragraph?
200306_4-RC_1_3
[ "indicate that bipolar systems can have certain unstable characteristics", "illustrate how multipolar systems can transform themselves into bipolar systems", "contrast the aggressive nature of bipolar members with the more rational behavior of their multipolar counterparts", "indicate the anarchic nature of international relations", "suggest that military and economic strength shifts in bipolar as frequently as in multipolar systems" ]
0
The author's reference to the possibility that confrontations may lead to capitulation (lines 27–30) serves primarily to
200306_4-RC_1_4
[ "fearful that European geopolitics may bring about a similar bipolar system", "surprised that it did not end with a major war", "convinced that it provides an important example of bipolarity maintaining peace", "regretful that the major European countries were so ambivalent about it", "confident it will mark only a brief hiatus between long periods of European multipolarity" ]
2
With respect to the Cold War, the author's attitude can most accurately be described as
200306_4-RC_1_5
[ "Each of the many small confrontations that occurred under the Concert of Europe threatened the integrity of the system.", "It provided the highest level of security possible for Europe in the late nineteenth century.", "All the factors contributing to stability during the late nineteenth century continue to contribute to European security.", "Equilibrium in the system was maintained as members grouped together to counterbalance mutual threats.", "It was more stable than most multipolar systems because its smaller members reacted promptly to aggression by its larger members." ]
3
Which one of the following statements concerning the Concert of Europe (lines 14–17) can most reasonably be inferred from the passage?
In spite of a shared language, Latin American poetry written in Spanish differs from Spanish poetry in many respects. The Spanish of Latin American poets is more open than that of Spanish poets, more exposed to outside influences—indigenous, English, French, and other languages. While some literary critics maintain that there is as much linguistic unity in Latin American poetry as there is in Spanish poetry, they base this claim on the fact that Castilian Spanish, the official and literary version of the Spanish language based largely on the dialect originally spoken in the Castile region of Spain, was transplanted to the Americas when it was already a relatively standardized idiom. Although such unity may have characterized the earliest Latin American poetry, after centuries in the Americas the language of Latin American poetry cannot help but reveal the influences of its unique cultural history. Latin American poetry is critical or irreverent in its attitude toward language, where that of Spanish poets is more accepting. For example, the Spanish-language incarnations of modernism and the avant-garde, two literary movements that used language in innovative and challenging ways, originated with Latin American poets. By contrast, when these movements later reached Spain, Spanish poets greeted them with reluctance. Spanish poets, even those of the modern era, seem to take their language for granted, rarely using it in radical or experimental ways. The most distinctive note in Latin American poetry is its enthusiastic response to the modern world, while Spanish poetry displays a kind of cultural conservatism—the desire to return to an ideal culture of the distant past. Because no Spanish-language culture lies in the equally distant (i.e., pre-Columbian) past of the Americas, but has instead been invented by Latin Americans day by day, Latin American poetry has no such long-standing past to romanticize. 
Instead, Latin American poetry often displays a curiosity about the literature of other cultures, an interest in exploring poetic structures beyond those typical of Spanish poetry. For example, the first Spanish-language haiku—a Japanese poetic form—were written by José Juan Tablada, a Mexican. Another of the Latin American poets' responses to this absence is the search for a world before recorded history—not only that of Spain or the Americas, but in some cases of the planet; the Chilean poet Pablo Neruda's work, for example, is noteworthy for its development of an ahistorical mythology for the creation of the earth. For Latin American poets there is no such thing as the pristine cultural past affirmed in the poetry of Spain: there is only the fluid interaction of all world cultures, or else the extensive time before cultures began.
200306_4-RC_2_6
[ "argue that Latin American poets originated modernism and the avant-garde", "explain how Spanish poetry and Latin American poetry differ in their attitudes toward the Spanish language", "demonstrate why Latin American poetry is not well received in Spain", "show that the Castilian Spanish employed in Spanish poetry has remained relatively unchanged by the advent of modernism and the avant-garde", "illustrate the extent to which Spanish poetry romanticizes Spanish-language culture" ]
1
The discussion in the second paragraph is intended primarily to
200306_4-RC_2_7
[ "A family moves its restaurant to a new town and incorporates local ingredients into its traditional recipes.", "A family moves its business to a new town after the business fails in its original location.", "A family with a two-hundred-year-old house labors industriously in order to restore the house to its original appearance.", "A family does research into its ancestry in order to construct its family tree.", "A family eagerly anticipates its annual vacation but never takes photographs or purchases souvenirs to preserve its memories." ]
0
Given the information in the passage, which one of the following is most analogous to the evolution of Latin American poetry?
200306_4-RC_2_8
[ "Spanish linguistic constructs had greater influence on Latin American poets than had previously been thought.", "Castilian Spanish was still evolving linguistically at the time of the inception of Latin American poetry.", "Spanish poets originated an influential literary movement that used language in radical ways.", "Castilian Spanish was influenced during its evolution by other Spanish dialects.", "Spanish poets rejected the English and French incarnations of modernism." ]
2
The passage's claims about Spanish poetry would be most weakened if new evidence indicating which one of the following were discovered?
200306_4-RC_2_9
[ "The first haiku in the Spanish language were written by a Latin American poet.", "Spanish poetry is rarely innovative or experimental in its use of language.", "Spanish poetry rarely incorporates poetic traditions from other cultures.", "Latin American poetry tends to take the Spanish language for granted.", "Latin American poetry incorporates aspects of various other languages." ]
3
The passage affirms each of the following EXCEPT:
200306_4-RC_2_10
[ "The use of poetic structures from other world cultures is an attempt by Latin American poets to create a cultural past.", "The use of poetic structures from other world cultures by Latin American poets is a response to their lack of a long-standing Spanish-language cultural past in the Americas.", "The use of poetic structures from other world cultures has led Latin American poets to reconsider their lack of a long-standing Spanish-language cultural past in the Americas.", "Latin American poets who write about a world before recorded history do not use poetic structures from other world cultures.", "Latin American poetry does not borrow poetic structures from other world cultures whose literature exhibits cultural conservatism." ]
1
Which one of the following can most reasonably be inferred from the passage about Latin American poetry's use of poetic structures from other world cultures?
200306_4-RC_2_11
[ "This relationship has inspired Spanish poets to examine their cultural past with a critical eye.", "This relationship forces Spanish poets to write about subjects with which they feel little natural affinity.", "This relationship is itself the central theme of much Spanish poetry.", "This relationship infuses Spanish poetry with a romanticism that is reluctant to embrace the modern era.", "This relationship results in poems that are of little interest to contemporary Spanish readers." ]
3
Based on the passage, the author most likely holds which one of the following views toward Spanish poetry's relationship to the Spanish cultural past?
200306_4-RC_2_12
[ "A tradition of cultural conservatism has allowed the Spanish language to evolve into a stable, reliable form of expression.", "It was only recently that Latin American poetry began to incorporate elements of other languages.", "The cultural conservatism of Spanish poetry is exemplified by the uncritical attitude of Spanish poets toward the Spanish language.", "Latin American poets' interest in other world cultures is illustrated by their use of Japanese words and phrases.", "Spanish poetry is receptive to the influence of some Spanish-language poets outside of Spain." ]
2
Which one of the following inferences is most supported by the passage?
According to the theory of gravitation, every particle of matter in the universe attracts every other particle with a force that increases as either the mass of the particles increases, or their proximity to one another increases, or both. Gravitation is believed to shape the structures of stars, galaxies, and the entire universe. But for decades cosmologists (scientists who study the universe) have attempted to account for the finding that at least 90 percent of the universe seems to be missing: that the total amount of observable matter—stars, dust, and miscellaneous debris—does not contain enough mass to explain why the universe is organized in the shape of galaxies and clusters of galaxies. To account for this discrepancy, cosmologists hypothesize that something else, which they call "dark matter," provides the gravitational force necessary to make the huge structures cohere. What is dark matter? Numerous exotic entities have been postulated, but among the more attractive candidates—because they are known actually to exist—are neutrinos, elementary particles created as a by-product of nuclear fusion, radioactive decay, or catastrophic collisions between other particles. Neutrinos, which come in three types, are by far the most numerous kind of particle in the universe; however, they have long been assumed to have no mass. If so, that would disqualify them as dark matter. Without mass, matter cannot exert gravitational force; without such force, it cannot induce other matter to cohere. But new evidence suggests that a neutrino does have mass. This evidence came by way of research findings supporting the existence of a long-theorized but never observed phenomenon called oscillation, whereby each of the three neutrino types can change into one of the others as it travels through space. Researchers held that the transformation is possible only if neutrinos also have mass. 
They obtained experimental confirmation of the theory by generating one neutrino type and then finding evidence that it had oscillated into the predicted neutrino type. In the process, they were able to estimate the mass of a neutrino at from 0.5 to 5 electron volts. While slight, even the lowest estimate would yield a lot of mass given that neutrinos are so numerous, especially considering that neutrinos were previously assumed to have no mass. Still, even at the highest estimate, neutrinos could only account for about 20 percent of the universe's "missing" mass. Nevertheless, that is enough to alter our picture of the universe even if it does not account for all of dark matter. In fact, some cosmologists claim that this new evidence offers the best theoretical solution yet to the dark matter problem. If the evidence holds up, these cosmologists believe, it may add to our understanding of the role elementary particles play in holding the universe together.
200306_4-RC_3_13
[ "Although cosmologists believe that the universe is shaped by gravitation, the total amount of observable matter in the universe is greatly insufficient to account for the gravitation that would be required to cause the universe to be organized into galaxies.", "Given their inability to account for more than 20 percent of the universe's \"missing\" mass, scientists are beginning to speculate that our current understanding of gravity is significantly mistaken.", "Indirect evidence suggesting that neutrinos have mass may allow neutrinos to account for up to 20 percent of dark matter, a finding that could someday be extended to a complete solution of the dark matter problem.", "After much speculation, researchers have discovered that neutrinos oscillate from one type into another as they travel through space, a phenomenon that proves that neutrinos have mass.", "Although it has been established that neutrinos have mass, such mass does not support the speculation of cosmologists that neutrinos constitute a portion of the universe's \"missing\" mass." ]
2
Which one of the following most accurately expresses the main idea of the passage?
200306_4-RC_3_14
[ "\"The Existence of Dark Matter: Arguments For and Against\"", "\"Neutrinos and the Dark Matter Problem: A Partial Solution?\"", "\"Too Little, Too Late: Why Neutrinos Do Not Constitute Dark Matter\"", "\"The Role of Gravity: How Dark Matter Shapes Stars\"", "\"The Implications of Oscillation: Do Neutrinos Really Have Mass?\"" ]
1
Which one of the following titles most completely and accurately expresses the contents of the passage?
200306_4-RC_3_15
[ "Observable matter constitutes at least 90 percent of the mass of the universe.", "Current theories are incapable of identifying the force that causes all particles in the universe to attract one another.", "The key to the problem of dark matter is determining the exact mass of a neutrino.", "It is unlikely that any force other than gravitation will be required to account for the organization of the universe into galaxies.", "Neutrinos probably account for most of the universe's \"missing\" mass." ]
3
Based on the passage, the author most likely holds which one of the following views?
200306_4-RC_3_16
[ "A child seeking information about how to play chess consults a family member and so learns of a book that will instruct her in the game.", "A child seeking to earn money by delivering papers is unable to earn enough money for a bicycle and so decides to buy a skateboard instead.", "A child hoping to get a dog for his birthday is initially disappointed when his parents bring home a cat but eventually learns to love the animal.", "A child seeking money to attend a movie is given some of the money by one of his siblings and so decides to go to each of his other siblings to ask for additional money.", "A child enjoys playing sports with the neighborhood children but her parents insist that she cannot participate until she has completed her household chores." ]
3
As described in the last paragraph of the passage, the cosmologists' approach to solving the dark matter problem is most analogous to which one of the following?
According to the theory of gravitation, every particle of matter in the universe attracts every other particle with a force that increases as either the mass of the particles increases, or their proximity to one another increases, or both. Gravitation is believed to shape the structures of stars, galaxies, and the entire universe. But for decades cosmologists (scientists who study the universe) have attempted to account for the finding that at least 90 percent of the universe seems to be missing: that the total amount of observable matter—stars, dust, and miscellaneous debris—does not contain enough mass to explain why the universe is organized in the shape of galaxies and clusters of galaxies. To account for this discrepancy, cosmologists hypothesize that something else, which they call "dark matter," provides the gravitational force necessary to make the huge structures cohere. What is dark matter? Numerous exotic entities have been postulated, but among the more attractive candidates—because they are known actually to exist—are neutrinos, elementary particles created as a by-product of nuclear fusion, radioactive decay, or catastrophic collisions between other particles. Neutrinos, which come in three types, are by far the most numerous kind of particle in the universe; however, they have long been assumed to have no mass. If so, that would disqualify them as dark matter. Without mass, matter cannot exert gravitational force; without such force, it cannot induce other matter to cohere. But new evidence suggests that a neutrino does have mass. This evidence came by way of research findings supporting the existence of a long-theorized but never observed phenomenon called oscillation, whereby each of the three neutrino types can change into one of the others as it travels through space. Researchers held that the transformation is possible only if neutrinos also have mass. 
They obtained experimental confirmation of the theory by generating one neutrino type and then finding evidence that it had oscillated into the predicted neutrino type. In the process, they were able to estimate the mass of a neutrino at from 0.5 to 5 electron volts. While slight, even the lowest estimate would yield a lot of mass given that neutrinos are so numerous, especially considering that neutrinos were previously assumed to have no mass. Still, even at the highest estimate, neutrinos could only account for about 20 percent of the universe's "missing" mass. Nevertheless, that is enough to alter our picture of the universe even if it does not account for all of dark matter. In fact, some cosmologists claim that this new evidence offers the best theoretical solution yet to the dark matter problem. If the evidence holds up, these cosmologists believe, it may add to our understanding of the role elementary particles play in holding the universe together.
200306_4-RC_3_17
[ "satisfied that it occurs and that it suggests that neutrinos have mass", "hopeful that it will be useful in discovering other forms of dark matter", "concerned that it is often misinterpreted to mean that neutrinos account for all of dark matter", "skeptical that it occurs until further research can be done", "convinced that it cannot occur outside an experimental setting" ]
0
The author's attitude toward oscillation can most accurately be characterized as being
According to the theory of gravitation, every particle of matter in the universe attracts every other particle with a force that increases as either the mass of the particles increases, or their proximity to one another increases, or both. Gravitation is believed to shape the structures of stars, galaxies, and the entire universe. But for decades cosmologists (scientists who study the universe) have attempted to account for the finding that at least 90 percent of the universe seems to be missing: that the total amount of observable matter—stars, dust, and miscellaneous debris—does not contain enough mass to explain why the universe is organized in the shape of galaxies and clusters of galaxies. To account for this discrepancy, cosmologists hypothesize that something else, which they call "dark matter," provides the gravitational force necessary to make the huge structures cohere. What is dark matter? Numerous exotic entities have been postulated, but among the more attractive candidates—because they are known actually to exist—are neutrinos, elementary particles created as a by-product of nuclear fusion, radioactive decay, or catastrophic collisions between other particles. Neutrinos, which come in three types, are by far the most numerous kind of particle in the universe; however, they have long been assumed to have no mass. If so, that would disqualify them as dark matter. Without mass, matter cannot exert gravitational force; without such force, it cannot induce other matter to cohere. But new evidence suggests that a neutrino does have mass. This evidence came by way of research findings supporting the existence of a long-theorized but never observed phenomenon called oscillation, whereby each of the three neutrino types can change into one of the others as it travels through space. Researchers held that the transformation is possible only if neutrinos also have mass. 
They obtained experimental confirmation of the theory by generating one neutrino type and then finding evidence that it had oscillated into the predicted neutrino type. In the process, they were able to estimate the mass of a neutrino at from 0.5 to 5 electron volts. While slight, even the lowest estimate would yield a lot of mass given that neutrinos are so numerous, especially considering that neutrinos were previously assumed to have no mass. Still, even at the highest estimate, neutrinos could only account for about 20 percent of the universe's "missing" mass. Nevertheless, that is enough to alter our picture of the universe even if it does not account for all of dark matter. In fact, some cosmologists claim that this new evidence offers the best theoretical solution yet to the dark matter problem. If the evidence holds up, these cosmologists believe, it may add to our understanding of the role elementary particles play in holding the universe together.
200306_4-RC_3_18
[ "exert gravitational force", "form galactic structures", "oscillate into another type of matter", "become significantly more massive", "fuse to produce new particles" ]
1
Which one of the following phrases could replace the word "cohere" at line 30 without substantively altering the author's meaning?
According to the theory of gravitation, every particle of matter in the universe attracts every other particle with a force that increases as either the mass of the particles increases, or their proximity to one another increases, or both. Gravitation is believed to shape the structures of stars, galaxies, and the entire universe. But for decades cosmologists (scientists who study the universe) have attempted to account for the finding that at least 90 percent of the universe seems to be missing: that the total amount of observable matter—stars, dust, and miscellaneous debris—does not contain enough mass to explain why the universe is organized in the shape of galaxies and clusters of galaxies. To account for this discrepancy, cosmologists hypothesize that something else, which they call "dark matter," provides the gravitational force necessary to make the huge structures cohere. What is dark matter? Numerous exotic entities have been postulated, but among the more attractive candidates—because they are known actually to exist—are neutrinos, elementary particles created as a by-product of nuclear fusion, radioactive decay, or catastrophic collisions between other particles. Neutrinos, which come in three types, are by far the most numerous kind of particle in the universe; however, they have long been assumed to have no mass. If so, that would disqualify them as dark matter. Without mass, matter cannot exert gravitational force; without such force, it cannot induce other matter to cohere. But new evidence suggests that a neutrino does have mass. This evidence came by way of research findings supporting the existence of a long-theorized but never observed phenomenon called oscillation, whereby each of the three neutrino types can change into one of the others as it travels through space. Researchers held that the transformation is possible only if neutrinos also have mass. 
They obtained experimental confirmation of the theory by generating one neutrino type and then finding evidence that it had oscillated into the predicted neutrino type. In the process, they were able to estimate the mass of a neutrino at from 0.5 to 5 electron volts. While slight, even the lowest estimate would yield a lot of mass given that neutrinos are so numerous, especially considering that neutrinos were previously assumed to have no mass. Still, even at the highest estimate, neutrinos could only account for about 20 percent of the universe's "missing" mass. Nevertheless, that is enough to alter our picture of the universe even if it does not account for all of dark matter. In fact, some cosmologists claim that this new evidence offers the best theoretical solution yet to the dark matter problem. If the evidence holds up, these cosmologists believe, it may add to our understanding of the role elementary particles play in holding the universe together.
200306_4-RC_3_19
[ "There are more neutrinos in the universe than there are non-neutrinos.", "Observable matter cannot exert enough gravitational force to account for the present structure of the universe.", "Scientific experiments support the theory of neutrino oscillation.", "Neutrinos likely cannot account for all of the universe's \"missing\" mass.", "Dark matter may account for a large portion of the universe's gravitational force." ]
0
The passage states each of the following EXCEPT:
Leading questions—questions worded in such a way as to suggest a particular answer—can yield unreliable testimony either by design, as when a lawyer tries to trick a witness into affirming a particular version of the evidence of a case, or by accident, when a questioner unintentionally prejudices the witness's response. For this reason, a judge can disallow such questions in the courtroom interrogation of witnesses. But their exclusion from the courtroom by no means eliminates the remote effects of earlier leading questions on eyewitness testimony. Alarmingly, the beliefs about an event that a witness brings to the courtroom may often be adulterated by the effects of leading questions that were introduced intentionally or unintentionally by lawyers, police investigators, reporters, or others with whom the witness has already interacted. Recent studies have confirmed the ability of leading questions to alter the details of our memories and have led to a better understanding of how this process occurs and, perhaps, of the conditions that make for greater risks that an eyewitness's memories have been tainted by leading questions. These studies suggest that not all details of our experiences become clearly or stably stored in memory—only those to which we give adequate attention. Moreover, experimental evidence indicates that if subtly introduced new data involving remembered events do not actively conflict with our stored memory data, we tend to process such new data similarly whether they correspond to details as we remember them, or to gaps in those details. In the former case, we often retain the new data as a reinforcement of the corresponding aspect of the memory, and in the latter case, we often retain them as a construction to fill the corresponding gap. An eyewitness who is asked, prior to courtroom testimony, "How fast was the car going when it passed the stop sign?" may respond to the query about speed without addressing the question of the stop sign. 
But the "stop sign" datum has now been introduced, and when later recalled, perhaps during courtroom testimony, it may be processed as belonging to the original memory even if the witness actually saw no stop sign. The farther removed from the event, the greater the chance of a vague or incomplete recollection and the greater the likelihood of newly suggested information blending with original memories. Since we can be more easily misled with respect to fainter and more uncertain memories, tangential details are more apt to become constructed out of subsequently introduced information than are more central details. But what is tangential to a witness's original experience of an event may nevertheless be crucial to the courtroom issues that the witness's memories are supposed to resolve. For example, a perpetrator's shirt color or hairstyle might be tangential to one's shocked observance of an armed robbery, but later those factors might be crucial to establishing the identity of the perpetrator.
200306_4-RC_4_20
[ "The unreliability of memories about incidental aspects of observed events makes eyewitness testimony especially questionable in cases in which the witness was not directly involved.", "Because of the nature of human memory storage and retrieval, the courtroom testimony of eyewitnesses may contain crucial inaccuracies due to leading questions asked prior to the courtroom appearance.", "Researchers are surprised to find that courtroom testimony is often dependent on suggestion to fill gaps left by insufficient attention to detail at the time that the incident in question occurred.", "Although judges can disallow leading questions from the courtroom, it is virtually impossible to prevent them from being used elsewhere, to the detriment of many cases.", "Stricter regulation should be placed on lawyers whose leading questions can corrupt witnesses' testimony by introducing inaccurate data prior to the witnesses' appearance in the courtroom." ]
1
Which one of the following most accurately expresses the main point of the passage?
Leading questions—questions worded in such a way as to suggest a particular answer—can yield unreliable testimony either by design, as when a lawyer tries to trick a witness into affirming a particular version of the evidence of a case, or by accident, when a questioner unintentionally prejudices the witness's response. For this reason, a judge can disallow such questions in the courtroom interrogation of witnesses. But their exclusion from the courtroom by no means eliminates the remote effects of earlier leading questions on eyewitness testimony. Alarmingly, the beliefs about an event that a witness brings to the courtroom may often be adulterated by the effects of leading questions that were introduced intentionally or unintentionally by lawyers, police investigators, reporters, or others with whom the witness has already interacted. Recent studies have confirmed the ability of leading questions to alter the details of our memories and have led to a better understanding of how this process occurs and, perhaps, of the conditions that make for greater risks that an eyewitness's memories have been tainted by leading questions. These studies suggest that not all details of our experiences become clearly or stably stored in memory—only those to which we give adequate attention. Moreover, experimental evidence indicates that if subtly introduced new data involving remembered events do not actively conflict with our stored memory data, we tend to process such new data similarly whether they correspond to details as we remember them, or to gaps in those details. In the former case, we often retain the new data as a reinforcement of the corresponding aspect of the memory, and in the latter case, we often retain them as a construction to fill the corresponding gap. An eyewitness who is asked, prior to courtroom testimony, "How fast was the car going when it passed the stop sign?" may respond to the query about speed without addressing the question of the stop sign. 
But the "stop sign" datum has now been introduced, and when later recalled, perhaps during courtroom testimony, it may be processed as belonging to the original memory even if the witness actually saw no stop sign. The farther removed from the event, the greater the chance of a vague or incomplete recollection and the greater the likelihood of newly suggested information blending with original memories. Since we can be more easily misled with respect to fainter and more uncertain memories, tangential details are more apt to become constructed out of subsequently introduced information than are more central details. But what is tangential to a witness's original experience of an event may nevertheless be crucial to the courtroom issues that the witness's memories are supposed to resolve. For example, a perpetrator's shirt color or hairstyle might be tangential to one's shocked observance of an armed robbery, but later those factors might be crucial to establishing the identity of the perpetrator.
200306_4-RC_4_21
[ "a policy ensuring that witnesses have extra time to answer questions concerning details that are tangential to their original experiences of events", "thorough revision of the criteria for determining which kinds of interrogation may be disallowed in courtroom testimony under the category of \"leading questions\"", "increased attention to the nuances of all witnesses' responses to courtroom questions, even those that are not leading questions", "extensive interviewing of witnesses by all lawyers for both sides of a case prior to those witnesses' courtroom appearance", "availability of accurate transcripts of all interrogations of witnesses that occurred prior to those witnesses' appearance in court" ]
4
It can be reasonably inferred from the passage that which one of the following, if it were effectively implemented, would most increase the justice system's ability to prevent leading questions from causing mistaken court decisions?
Leading questions—questions worded in such a way as to suggest a particular answer—can yield unreliable testimony either by design, as when a lawyer tries to trick a witness into affirming a particular version of the evidence of a case, or by accident, when a questioner unintentionally prejudices the witness's response. For this reason, a judge can disallow such questions in the courtroom interrogation of witnesses. But their exclusion from the courtroom by no means eliminates the remote effects of earlier leading questions on eyewitness testimony. Alarmingly, the beliefs about an event that a witness brings to the courtroom may often be adulterated by the effects of leading questions that were introduced intentionally or unintentionally by lawyers, police investigators, reporters, or others with whom the witness has already interacted. Recent studies have confirmed the ability of leading questions to alter the details of our memories and have led to a better understanding of how this process occurs and, perhaps, of the conditions that make for greater risks that an eyewitness's memories have been tainted by leading questions. These studies suggest that not all details of our experiences become clearly or stably stored in memory—only those to which we give adequate attention. Moreover, experimental evidence indicates that if subtly introduced new data involving remembered events do not actively conflict with our stored memory data, we tend to process such new data similarly whether they correspond to details as we remember them, or to gaps in those details. In the former case, we often retain the new data as a reinforcement of the corresponding aspect of the memory, and in the latter case, we often retain them as a construction to fill the corresponding gap. An eyewitness who is asked, prior to courtroom testimony, "How fast was the car going when it passed the stop sign?" may respond to the query about speed without addressing the question of the stop sign. 
But the "stop sign" datum has now been introduced, and when later recalled, perhaps during courtroom testimony, it may be processed as belonging to the original memory even if the witness actually saw no stop sign. The farther removed from the event, the greater the chance of a vague or incomplete recollection and the greater the likelihood of newly suggested information blending with original memories. Since we can be more easily misled with respect to fainter and more uncertain memories, tangential details are more apt to become constructed out of subsequently introduced information than are more central details. But what is tangential to a witness's original experience of an event may nevertheless be crucial to the courtroom issues that the witness's memories are supposed to resolve. For example, a perpetrator's shirt color or hairstyle might be tangential to one's shocked observance of an armed robbery, but later those factors might be crucial to establishing the identity of the perpetrator.
200306_4-RC_4_22
[ "They are integrated with current memories as support for those memories.", "They are stored tentatively as conjectural data that fade with time.", "They stay more vivid in memory than do previously stored memory data.", "They are reinterpreted so as to be compatible with the details already stored in memory.", "They are retained in memory even when they conflict with previously stored memory data." ]
0
Which one of the following is mentioned in the passage as a way in which new data suggested to a witness by a leading question are sometimes processed?
Leading questions—questions worded in such a way as to suggest a particular answer—can yield unreliable testimony either by design, as when a lawyer tries to trick a witness into affirming a particular version of the evidence of a case, or by accident, when a questioner unintentionally prejudices the witness's response. For this reason, a judge can disallow such questions in the courtroom interrogation of witnesses. But their exclusion from the courtroom by no means eliminates the remote effects of earlier leading questions on eyewitness testimony. Alarmingly, the beliefs about an event that a witness brings to the courtroom may often be adulterated by the effects of leading questions that were introduced intentionally or unintentionally by lawyers, police investigators, reporters, or others with whom the witness has already interacted. Recent studies have confirmed the ability of leading questions to alter the details of our memories and have led to a better understanding of how this process occurs and, perhaps, of the conditions that make for greater risks that an eyewitness's memories have been tainted by leading questions. These studies suggest that not all details of our experiences become clearly or stably stored in memory—only those to which we give adequate attention. Moreover, experimental evidence indicates that if subtly introduced new data involving remembered events do not actively conflict with our stored memory data, we tend to process such new data similarly whether they correspond to details as we remember them, or to gaps in those details. In the former case, we often retain the new data as a reinforcement of the corresponding aspect of the memory, and in the latter case, we often retain them as a construction to fill the corresponding gap. An eyewitness who is asked, prior to courtroom testimony, "How fast was the car going when it passed the stop sign?" may respond to the query about speed without addressing the question of the stop sign. 
But the "stop sign" datum has now been introduced, and when later recalled, perhaps during courtroom testimony, it may be processed as belonging to the original memory even if the witness actually saw no stop sign. The farther removed from the event, the greater the chance of a vague or incomplete recollection and the greater the likelihood of newly suggested information blending with original memories. Since we can be more easily misled with respect to fainter and more uncertain memories, tangential details are more apt to become constructed out of subsequently introduced information than are more central details. But what is tangential to a witness's original experience of an event may nevertheless be crucial to the courtroom issues that the witness's memories are supposed to resolve. For example, a perpetrator's shirt color or hairstyle might be tangential to one's shocked observance of an armed robbery, but later those factors might be crucial to establishing the identity of the perpetrator.
200306_4-RC_4_23
[ "For purposes of flavor and preservation, salt and vinegar are important additions to cucumbers during the process of pickling, but these purposes could be attained by adding other ingredients instead.", "For the purpose of adding a mild stimulant effect, caffeine is included in some types of carbonated drinks, but for the purposes of appealing to health-conscious consumers, some types of carbonated drinks are advertised as being caffeine-free.", "For purposes of flavor and tenderness, the skins of apples and some other fruits are removed during preparation for drying, but grape skins are an essential part of raisins, and thus grape skins are not removed.", "For purposes of flavor and appearance, wheat germ is not needed in flour and is usually removed during milling, but for purposes of nutrition, the germ is an important part of the grain.", "For purposes of texture and appearance, some fat may be removed from meat when it is ground into sausage, but the removal of fat is also important for purposes of health." ]
3
In discussing the tangential details of events, the passage contrasts their original significance to witnesses with their possible significance in the courtroom (lines 52–59). That contrast is most closely analogous to which one of the following?
Leading questions—questions worded in such a way as to suggest a particular answer—can yield unreliable testimony either by design, as when a lawyer tries to trick a witness into affirming a particular version of the evidence of a case, or by accident, when a questioner unintentionally prejudices the witness's response. For this reason, a judge can disallow such questions in the courtroom interrogation of witnesses. But their exclusion from the courtroom by no means eliminates the remote effects of earlier leading questions on eyewitness testimony. Alarmingly, the beliefs about an event that a witness brings to the courtroom may often be adulterated by the effects of leading questions that were introduced intentionally or unintentionally by lawyers, police investigators, reporters, or others with whom the witness has already interacted. Recent studies have confirmed the ability of leading questions to alter the details of our memories and have led to a better understanding of how this process occurs and, perhaps, of the conditions that make for greater risks that an eyewitness's memories have been tainted by leading questions. These studies suggest that not all details of our experiences become clearly or stably stored in memory—only those to which we give adequate attention. Moreover, experimental evidence indicates that if subtly introduced new data involving remembered events do not actively conflict with our stored memory data, we tend to process such new data similarly whether they correspond to details as we remember them, or to gaps in those details. In the former case, we often retain the new data as a reinforcement of the corresponding aspect of the memory, and in the latter case, we often retain them as a construction to fill the corresponding gap. An eyewitness who is asked, prior to courtroom testimony, "How fast was the car going when it passed the stop sign?" may respond to the query about speed without addressing the question of the stop sign. 
But the "stop sign" datum has now been introduced, and when later recalled, perhaps during courtroom testimony, it may be processed as belonging to the original memory even if the witness actually saw no stop sign. The farther removed from the event, the greater the chance of a vague or incomplete recollection and the greater the likelihood of newly suggested information blending with original memories. Since we can be more easily misled with respect to fainter and more uncertain memories, tangential details are more apt to become constructed out of subsequently introduced information than are more central details. But what is tangential to a witness's original experience of an event may nevertheless be crucial to the courtroom issues that the witness's memories are supposed to resolve. For example, a perpetrator's shirt color or hairstyle might be tangential to one's shocked observance of an armed robbery, but later those factors might be crucial to establishing the identity of the perpetrator.
200306_4-RC_4_24
[ "In witnessing what types of crimes are people especially likely to pay close attention to circumstantial details?", "Which aspects of courtroom interrogation cause witnesses to be especially reluctant to testify in extensive detail?", "Can the stress of having to testify in a courtroom situation affect the accuracy of memory storage and retrieval?", "Do different people tend to possess different capacities for remembering details accurately?", "When is it more likely that a detail of an observed event will be accurately remembered?" ]
4
Which one of the following questions is most directly answered by information in the passage?
Leading questions—questions worded in such a way as to suggest a particular answer—can yield unreliable testimony either by design, as when a lawyer tries to trick a witness into affirming a particular version of the evidence of a case, or by accident, when a questioner unintentionally prejudices the witness's response. For this reason, a judge can disallow such questions in the courtroom interrogation of witnesses. But their exclusion from the courtroom by no means eliminates the remote effects of earlier leading questions on eyewitness testimony. Alarmingly, the beliefs about an event that a witness brings to the courtroom may often be adulterated by the effects of leading questions that were introduced intentionally or unintentionally by lawyers, police investigators, reporters, or others with whom the witness has already interacted. Recent studies have confirmed the ability of leading questions to alter the details of our memories and have led to a better understanding of how this process occurs and, perhaps, of the conditions that make for greater risks that an eyewitness's memories have been tainted by leading questions. These studies suggest that not all details of our experiences become clearly or stably stored in memory—only those to which we give adequate attention. Moreover, experimental evidence indicates that if subtly introduced new data involving remembered events do not actively conflict with our stored memory data, we tend to process such new data similarly whether they correspond to details as we remember them, or to gaps in those details. In the former case, we often retain the new data as a reinforcement of the corresponding aspect of the memory, and in the latter case, we often retain them as a construction to fill the corresponding gap. An eyewitness who is asked, prior to courtroom testimony, "How fast was the car going when it passed the stop sign?" may respond to the query about speed without addressing the question of the stop sign. 
But the "stop sign" datum has now been introduced, and when later recalled, perhaps during courtroom testimony, it may be processed as belonging to the original memory even if the witness actually saw no stop sign. The farther removed from the event, the greater the chance of a vague or incomplete recollection and the greater the likelihood of newly suggested information blending with original memories. Since we can be more easily misled with respect to fainter and more uncertain memories, tangential details are more apt to become constructed out of subsequently introduced information than are more central details. But what is tangential to a witness's original experience of an event may nevertheless be crucial to the courtroom issues that the witness's memories are supposed to resolve. For example, a perpetrator's shirt color or hairstyle might be tangential to one's shocked observance of an armed robbery, but later those factors might be crucial to establishing the identity of the perpetrator.
200306_4-RC_4_25
[ "corroborates and adds detail to a claim made in the first paragraph", "provides examples illustrating the applications of a theory discussed in the first paragraph", "forms an argument in support of a proposal that is made in the final paragraph", "anticipates and provides grounds for the rejection of a theory alluded to by the author in the final paragraph", "explains how newly obtained data favor one of two traditional theories mentioned elsewhere in the second paragraph" ]
0
The second paragraph consists primarily of material that
200306_4-RC_4_26
[ "have produced some unexpected findings regarding the extent of human reliance on external verification of memory details", "shed new light on a longstanding procedural controversy in the law", "may be of theoretical interest despite their tentative nature and inconclusive findings", "provide insights into the origins of several disparate types of logically fallacious reasoning", "should be of more than abstract academic interest to the legal profession" ]
4
It can be most reasonably inferred from the passage that the author holds that the recent studies discussed in the passage
200306_4-RC_4_27
[ "The tendency of leading questions to cause unreliable courtroom testimony has no correlation with the extent to which witnesses are emotionally affected by the events that they have observed.", "Leading questions asked in the process of a courtroom examination of a witness are more likely to cause inaccurate testimony than are leading questions asked outside the courtroom.", "The memory processes by which newly introduced data tend to reinforce accurately remembered details of events are not relevant to explaining the effects of leading questions.", "The risk of testimony being inaccurate due to certain other factors tends to increase as an eyewitness's susceptibility to giving inaccurate testimony due to the effects of leading questions increases.", "The traditional grounds on which leading questions can be excluded from courtroom interrogation of witnesses have been called into question by the findings of recent studies." ]
3
Which one of the following can be most reasonably inferred from the information in the passage?
The accumulation of scientific knowledge regarding the environmental impact of oil well drilling in North America has tended to lag behind the actual drilling of oil wells. Most attempts to regulate the industry have relied on hindsight: the need for regulation becomes apparent only after undesirable events occur. The problems associated with oil wells' potential contamination of groundwater—fresh water within the earth that supplies wells and springs—provide a case in point. When commercial drilling for oil began in North America in the mid-nineteenth century, regulations reflected the industry's concern for the purity of the wells' oil. In 1893, for example, regulations were enacted specifying well construction requirements to protect oil and gas reserves from contamination by fresh water. Thousands of wells were drilled in such a way as to protect the oil, but no thought was given to the possibility that the groundwater itself might need protection until many drinking-water wells near the oil well sites began to produce unpotable, oil-contaminated water. The reason for this contamination was that groundwater is usually found in porous and permeable geologic formations near the earth's surface, whereas petroleum and unpotable saline water reservoirs are generally found in similar formations but at greater depths. Drilling a well creates a conduit connecting all the formations that it has penetrated. Consequently, without appropriate safeguards, wells that penetrate both groundwater and oil or saline water formations inevitably contaminate the groundwater. Initial attempts to prevent this contamination consisted of sealing off the groundwater formations with some form of protective barrier to prevent the oil flowing up the well from entering or mixing with the natural groundwater reservoir. 
This method, which is still in use today, initially involved using hollow trees to seal off the groundwater formations; now, however, large metal pipe casings, set in place with cement, are used. Regulations currently govern the kinds of casing and cement that can be used in these practices; however, the hazards of insufficient knowledge persist. For example, the long-term stability of this way of protecting groundwater is unknown. The protective barrier may fail due to corrosion of the casing by certain fluids flowing up the well, or because of dissolution of the cement by these fluids. The effects of groundwater bacteria, traffic vibrations, and changing groundwater chemistry are likewise unassessed. Further, there is no guarantee that wells drilled in compliance with existing regulations will not expose a need for research in additional areas: on the west coast of North America, a major disaster recently occurred because a well's location was based on a poor understanding of the area's subsurface geology. Because the well was drilled in a channel accessing the ocean, not only was the area's groundwater completely contaminated, but widespread coastal contamination also occurred, prompting international concern over oil exploration and initiating further attempts to refine regulations.
200406_1-RC_1_1
[ "Although now recognized as undesirable, occasional groundwater contamination by oil and unpotable saline water is considered to be inevitable wherever drilling for oil occurs.", "Widespread coastal contamination caused by oil well drilling in North America has prompted international concern over oil exploration.", "Hindsight has been the only reliable means available to regulation writers responsible for devising adequate safeguard regulations to prevent environmental contamination associated with oil well drilling.", "The risk of environmental contamination associated with oil well drilling continues to exist because safeguard regulations are often based on hindsight and less-than-sufficient scientific information.", "Groundwater contamination associated with oil well drilling is due in part to regulations designed to protect the oil from contamination by groundwater and not the groundwater from contamination by oil." ]
3
Which one of the following most accurately states the main point of the passage?
200406_1-RC_1_2
[ "They are usually located in areas whose subsurface geology is poorly understood.", "They are generally less common in coastal regions.", "They are usually located in geologic formations similar to those in which gas is found.", "They are often contaminated by fresh or saline water.", "They are generally found at greater depths than groundwater formations." ]
4
The passage states which one of the following about underground oil reservoirs?
200406_1-RC_1_3
[ "cynical that future regulatory reform will occur without international concern", "satisfied that existing regulations are adequate to prevent unwarranted tradeoffs between resource collection and environmental protection", "concerned that regulatory reform will not progress until significant undesirable events occur", "optimistic that current scientific research will spur regulatory reform", "confident that regulations will eventually be based on accurate geologic understandings" ]
2
The author's attitude regarding oil well drilling regulations can most accurately be described as
200406_1-RC_1_4
[ "a lack of understanding regarding the dangers to human health posed by groundwater contamination", "a failure to comprehend the possible consequences of drilling in complex geologic systems", "poorly tested methods for verifying the safety of newly developed technologies", "an inadequate appreciation for the difficulties of enacting and enforcing environmental regulations", "a rudimentary understanding of the materials used in manufacturing metal pipe casings" ]
1
The author uses the phrase "the hazards of insufficient knowledge" (line 44) primarily in order to refer to the risks resulting from
200406_1-RC_1_5
[ "Groundwater contamination is unlikely because the well did not strike oil and hence will not be put in operation.", "Danger to human health due to groundwater contamination is unlikely because large cities generally have more than one source of drinking water.", "Groundwater contamination is likely unless the well is plugged and abandoned.", "Groundwater contamination is unlikely because the groundwater formation's large size will safely dilute any saline water that enters it.", "The risk of groundwater contamination can be reduced if casing is set properly and monitored routinely for breakdown." ]
4
Based on the information in the passage, if a prospective oil well drilled near a large city encounters a large groundwater formation and a small saline water formation, but no oil, which one of the following statements is most likely to be true?
In many bilingual communities of Puerto Rican Americans living in the mainland United States, people use both English and Spanish in a single conversation, alternating between them smoothly and frequently even within the same sentence. This practice—called code-switching—is common in bilingual populations. While there are some cases that cannot currently be explained, in the vast majority of cases subtle factors, either situational or rhetorical, explain the use of code-switching. Linguists say that most code-switching among Puerto Rican Americans is sensitive to the social contexts, which researchers refer to as domains, in which conversations take place. The main conversational factors influencing the occurrence of code-switching are setting, participants, and topic. When these go together naturally they are said to be congruent; a set of three such congruent factors constitutes a conversational situation. Linguists studying the choice between Spanish and English among a group of Puerto Rican American high school students classified their conversational situations into five domains: family, friendship, religion, education, and employment. To test the effects of these domains on code-switching, researchers developed a list of hypothetical situations made up of two of the three congruent factors, or of two incongruent factors, approximating an interaction in one of the five domains. The researchers asked the students to determine the third factor and to choose which mix of language—on a continuum from all English to all Spanish—they would use in that situation. When given two congruent factors, the students easily supplied the third congruent factor and strongly agreed among themselves about which mix they would use. For instance, for the factors of participants "parent and child" and the topic "how to be a good son or daughter," the congruent setting chosen was "home" and the language mix chosen was Spanish only. 
In contrast, incongruent factors such as the participants "priest and parishioner" and the setting "beach" yielded less agreement on the third factor of topic and on language choice. But situational factors do not account for all code-switching; it occurs even when the domain would lead one not to expect it. In these cases, one language tends to be the primary one, while the other is used only sparingly to achieve certain rhetorical effects. Often the switches are so subtle that the speakers themselves are not aware of them. This was the case with a study of a family of Puerto Rican Americans in another community. Family members believed they used only English at home, but their taped conversations occasionally contained some Spanish, with no change in situational factors. When asked what the presence of Spanish signified, they commented that it was used to express certain attitudes such as intimacy or humor more emphatically.
200406_1-RC_2_6
[ "The lives of Puerto Rican Americans are affected in various ways by code-switching.", "It is not always possible to explain why code-switching occurs in conversations among Puerto Rican Americans.", "Rhetorical factors can explain more instances of code-switching among Puerto Rican Americans than can situational factors.", "Studies of bilingual communities of Puerto Rican Americans have caused linguists to revise many of their beliefs about code-switching.", "Most code-switching among Puerto Rican Americans can be explained by subtle situational and rhetorical factors." ]
4
Which one of the following most accurately expresses the main point of the passage?
200406_1-RC_2_7
[ "report evidence supporting the conclusion that the family's code-switching had a rhetorical basis", "show that reasons for code-switching differ from one community to another", "supply evidence that seems to conflict with the researchers' conclusions about why the family engaged in code-switching", "refute the argument that situational factors explain most code-switching", "explain how it could be that the family members failed to notice their use of Spanish" ]
0
In lines 56–59, the author mentions the family members' explanation of their use of Spanish primarily in order to
200406_1-RC_2_8
[ "Where do the students involved in the study think that a parent and child are likely to be when they are talking about how to be a good son or daughter?", "What language or mix of languages do the students involved in the study think that a parent and child would be likely to use when they are talking at home about how to be a good son or daughter?", "What language or mix of languages do the students involved in the study think that a priest and a parishioner would be likely to use if they were conversing on a beach?", "What topic do the students involved in the study think that a parent and child would be most likely to discuss when they are speaking Spanish?", "What topic do the students involved in the study think that a priest and parishioner would be likely to discuss on a beach?" ]
3
Which one of the following questions is NOT characterized by the passage as a question to which linguists sought answers in their code-switching studies involving high school students?
200406_1-RC_2_9
[ "consider a general explanation for the phenomenon of code-switching that is different from the one discussed in the preceding paragraphs", "resolve an apparent conflict between two explanations for code-switching that were discussed in the preceding paragraphs", "show that there are instances of code-switching that are not explained by the factors discussed in the previous paragraph", "report some of the patterns of code-switching observed among a family of Puerto Rican Americans in another community", "show that some instances of code-switching are unconscious" ]
2
The primary function of the third paragraph of the passage is to
200406_1-RC_2_10
[ "A speaker who does not know certain words in the primary language of a conversation occasionally has recourse to familiar words in another language.", "A person translating a text from one language into another leaves certain words in the original language because the author of the text invented those words.", "For the purpose of improved selling strategies, a businessperson who primarily uses one language sometimes conducts business in a second language that is preferred by some people in the community.", "A speaker who primarily uses one language switches to another language because it sounds more expressive.", "A speaker who primarily uses one language occasionally switches to another language in order to maintain fluency in the secondary language." ]
3
Based on the passage, which one of the following is best explained as rhetorically determined code-switching?
200406_1-RC_2_11
[ "Research revealing that speakers are sometimes unaware of code-switching casts doubt on the results of a prior study involving high school students.", "Relevant research conducted prior to the linguists' work with high school students would lead one to expect different answers from those the students actually gave.", "Research conducted prior to the study of a family of Puerto Rican Americans was thought by most researchers to explain code-switching in all except the most unusual or nonstandard contexts.", "Research suggests that people engaged in code-switching are usually unaware of which situational factors might influence their choice of language or languages.", "Research suggests that the family of Puerto Rican Americans does not use code-switching in conversations held at home except for occasional rhetorical effect." ]
4
It can be inferred from the passage that the author would most likely agree with which one of the following statements?
200406_1-RC_2_12
[ "Linguists have observed that bilingual high school students do not agree among themselves as to what mix of languages they would use in the presence of incongruent situational factors.", "Code-switching sometimes occurs in conversations whose situational factors would be expected to involve the use of a single language.", "Bilingual people often switch smoothly between two languages even when there is no change in the situational context in which the conversation takes place.", "Puerto Rican Americans sometimes use Spanish only sparingly and for rhetorical effect in the presence of situational factors that would lead one to expect Spanish to be the primary language.", "Speakers who engage in code-switching are often unaware of the situational factors influencing their choices of which language or mix of languages to speak." ]
1
Which one of the following does the passage offer as evidence that code-switching cannot be entirely explained by situational factors?
200406_1-RC_2_13
[ "In a previous twelve-month study involving the same family in their home, their conversations were entirely in English except when situational factors changed significantly.", "In a subsequent twelve-month study involving the same family, a particular set of situational factors occurred repeatedly without any accompanying instances of code-switching.", "In a subsequent twelve-month study involving the same family, it was noted that intimacy and humor were occasionally expressed through the use of English expressions.", "When asked about the significance of their use of Spanish, the family members replied in English rather than Spanish.", "Prior to their discussions with the researchers, the family members did not describe their occasional use of Spanish as serving to emphasize humor or intimacy." ]
0
Which one of the following, if true, would most cast doubt on the author's interpretation of the study involving the family discussed in the third paragraph?
Reader-response theory, a type of literary theory that arose in reaction to formalist literary criticism, has endeavored to shift the emphasis in the interpretation of literature from the text itself to the contributions of readers to the meaning of a text. According to literary critics who endorse reader-response theory, the literary text alone renders no meaning; it acquires meaning only when encountered by individual readers, who always bring varying presuppositions and ways of reading to bear on the text, giving rise to the possibility—even probability—of varying interpretations. This brand of criticism has met opposition from the formalists, who study the text alone and argue that reader-response theory can encourage and even validate fragmented views of a work, rather than the unified view acquired by examining only the content of the text. However, since no theory has a monopoly on divining meaning from a text, the formalists' view appears unnecessarily narrow. The proponents of formalism argue that their approach is firmly grounded in rational, objective principles, while reader-response theory lacks standards and verges on absolute subjectivity. After all, these proponents argue, no author can create a work that is packed with countless meanings. The meaning of a work of literature, the formalists would argue, may be obscure and somewhat arcane; yet, however hidden it may be, the author's intended meaning is legible within the work, and it is the critic's responsibility to search closely for this meaning. However, while a literary work is indeed encoded in various signs and symbols that must be translated for the work to be understood and appreciated, it is not a map. Any complicated literary work will invariably raise more questions than it answers. What is needed is a method that enables the critic to discern and make use of the rich stock of meanings created in encounters between texts and readers.
Emphasizing the varied presuppositions and perceptions that readers bring to the interpretations of a text can uncover hitherto unnoticed dimensions of the text. In fact, many important works have received varying interpretations throughout their existence, suggesting that reader-based interpretations similar to those described by reader-response theory had been operating long before the theory's principles were articulated. And while in some cases critics' textual interpretations based on reader-response theory have unfairly burdened literature of the past with contemporary ideologies, legitimate additional insights and understandings continue to emerge years after an ostensibly definitive interpretation of a major work has been articulated. By regarding a reader's personal interpretation of literary works as not only valid but also useful in understanding the works, reader-response theory legitimizes a wide range of perspectives on these works and thereby reinforces the notion of them as fluid and lively forms of discourse that can continue to support new interpretations long after their original composition.
200406_1-RC_3_14
[ "scholarly neutrality", "grudging respect", "thoughtless disregard", "cautious ambivalence", "reasoned dismissal" ]
4
Which one of the following most accurately describes the author's attitude toward formalism as expressed in the passage?
200406_1-RC_3_15
[ "a translator who translates a poem from Spanish to English word for word so that its original meaning is not distorted", "a music critic who insists that early music can be truly appreciated only when it is played on original instruments of the period", "a reviewer who finds in the works of a novelist certain unifying themes that reveal the novelist's personal concerns and preoccupations", "a folk artist who uses conventional cultural symbols and motifs as a way of conveying commonly understood meanings", "a director who sets a play by Shakespeare in nineteenth-century Japan to give a new perspective on the work" ]
4
Which one of the following persons displays an approach that most strongly suggests sympathy with the principles of reader-response theory?
200406_1-RC_3_16
[ "Any literary theory should be seen ultimately as limiting, since contradictory interpretations of texts are inevitable.", "A purpose of a literary theory is to broaden and enhance the understanding that can be gained from a work.", "A literary theory should provide valid and strictly objective methods for interpreting texts.", "The purpose of a literary theory is to make clear the intended meaning of the author of a work.", "Since no literary theory has a monopoly on meaning, a reader should avoid using theories to interpret literature." ]
1
With which one of the following statements would the author of the passage be most likely to agree?
Reader-response theory, a type of literary theory that arose in reaction to formalist literary criticism, has endeavored to shift the emphasis in the interpretation of literature from the text itself to the contributions of readers to the meaning of a text. According to literary critics who endorse reader-response theory, the literary text alone renders no meaning; it acquires meaning only when encountered by individual readers, who always bring varying presuppositions and ways of reading to bear on the text, giving rise to the possibility—even probability—of varying interpretations. This brand of criticism has met opposition from the formalists, who study the text alone and argue that reader-response theory can encourage and even validate fragmented views of a work, rather than the unified view acquired by examining only the content of the text. However, since no theory has a monopoly on divining meaning from a text, the formalists' view appears unnecessarily narrow. The proponents of formalism argue that their approach is firmly grounded in rational, objective principles, while reader-response theory lacks standards and verges on absolute subjectivity. After all, these proponents argue, no author can create a work that is packed with countless meanings. The meaning of a work of literature, the formalists would argue, may be obscure and somewhat arcane; yet, however hidden it may be, the author's intended meaning is legible within the work, and it is the critic's responsibility to search closely for this meaning. However, while a literary work is indeed encoded in various signs and symbols that must be translated for the work to be understood and appreciated, it is not a map. Any complicated literary work will invariably raise more questions than it answers. What is needed is a method that enables the critic to discern and make use of the rich stock of meanings created in encounters between texts and readers.
Emphasizing the varied presuppositions and perceptions that readers bring to the interpretations of a text can uncover hitherto unnoticed dimensions of the text. In fact, many important works have received varying interpretations throughout their existence, suggesting that reader-based interpretations similar to those described by reader-response theory had been operating long before the theory's principles were articulated. And while in some cases critics' textual interpretations based on reader-response theory have unfairly burdened literature of the past with contemporary ideologies, legitimate additional insights and understandings continue to emerge years after an ostensibly definitive interpretation of a major work has been articulated. By regarding a reader's personal interpretation of literary works as not only valid but also useful in understanding the works, reader-response theory legitimizes a wide range of perspectives on these works and thereby reinforces the notion of them as fluid and lively forms of discourse that can continue to support new interpretations long after their original composition.
200406_1-RC_3_17
[ "a wide range of perspectives on works of literature", "contemporary ideology as a basis for criticism", "encoding the meaning of a literary work in signs and symbols", "finding the meaning of a work in its text alone", "belief that an author's intended meaning in a work is discoverable" ]
0
The passage states that reader-response theory legitimizes which one of the following?
Reader-response theory, a type of literary theory that arose in reaction to formalist literary criticism, has endeavored to shift the emphasis in the interpretation of literature from the text itself to the contributions of readers to the meaning of a text. According to literary critics who endorse reader-response theory, the literary text alone renders no meaning; it acquires meaning only when encountered by individual readers, who always bring varying presuppositions and ways of reading to bear on the text, giving rise to the possibility—even probability—of varying interpretations. This brand of criticism has met opposition from the formalists, who study the text alone and argue that reader-response theory can encourage and even validate fragmented views of a work, rather than the unified view acquired by examining only the content of the text. However, since no theory has a monopoly on divining meaning from a text, the formalists' view appears unnecessarily narrow. The proponents of formalism argue that their approach is firmly grounded in rational, objective principles, while reader-response theory lacks standards and verges on absolute subjectivity. After all, these proponents argue, no author can create a work that is packed with countless meanings. The meaning of a work of literature, the formalists would argue, may be obscure and somewhat arcane; yet, however hidden it may be, the author's intended meaning is legible within the work, and it is the critic's responsibility to search closely for this meaning. However, while a literary work is indeed encoded in various signs and symbols that must be translated for the work to be understood and appreciated, it is not a map. Any complicated literary work will invariably raise more questions than it answers. What is needed is a method that enables the critic to discern and make use of the rich stock of meanings created in encounters between texts and readers.
Emphasizing the varied presuppositions and perceptions that readers bring to the interpretations of a text can uncover hitherto unnoticed dimensions of the text. In fact, many important works have received varying interpretations throughout their existence, suggesting that reader-based interpretations similar to those described by reader-response theory had been operating long before the theory's principles were articulated. And while in some cases critics' textual interpretations based on reader-response theory have unfairly burdened literature of the past with contemporary ideologies, legitimate additional insights and understandings continue to emerge years after an ostensibly definitive interpretation of a major work has been articulated. By regarding a reader's personal interpretation of literary works as not only valid but also useful in understanding the works, reader-response theory legitimizes a wide range of perspectives on these works and thereby reinforces the notion of them as fluid and lively forms of discourse that can continue to support new interpretations long after their original composition.
200406_1-RC_3_18
[ "to reinforce the notion that reader-based interpretations of texts invariably raise more questions than they can answer", "to confirm the longevity of interpretations similar to reader-based interpretations of texts", "to point out a fundamental flaw that the author believes makes reader-response theory untenable", "to concede a minor weakness in reader-response theory that the author believes is outweighed by its benefits", "to suggest that reader-response theory can occasionally encourage fragmented views of a work" ]
3
Which one of the following most accurately describes the author's purpose in referring to literature of the past as being "unfairly burdened" (line 51) in some cases?
Reader-response theory, a type of literary theory that arose in reaction to formalist literary criticism, has endeavored to shift the emphasis in the interpretation of literature from the text itself to the contributions of readers to the meaning of a text. According to literary critics who endorse reader-response theory, the literary text alone renders no meaning; it acquires meaning only when encountered by individual readers, who always bring varying presuppositions and ways of reading to bear on the text, giving rise to the possibility—even probability—of varying interpretations. This brand of criticism has met opposition from the formalists, who study the text alone and argue that reader-response theory can encourage and even validate fragmented views of a work, rather than the unified view acquired by examining only the content of the text. However, since no theory has a monopoly on divining meaning from a text, the formalists' view appears unnecessarily narrow. The proponents of formalism argue that their approach is firmly grounded in rational, objective principles, while reader-response theory lacks standards and verges on absolute subjectivity. After all, these proponents argue, no author can create a work that is packed with countless meanings. The meaning of a work of literature, the formalists would argue, may be obscure and somewhat arcane; yet, however hidden it may be, the author's intended meaning is legible within the work, and it is the critic's responsibility to search closely for this meaning. However, while a literary work is indeed encoded in various signs and symbols that must be translated for the work to be understood and appreciated, it is not a map. Any complicated literary work will invariably raise more questions than it answers. What is needed is a method that enables the critic to discern and make use of the rich stock of meanings created in encounters between texts and readers.
Emphasizing the varied presuppositions and perceptions that readers bring to the interpretations of a text can uncover hitherto unnoticed dimensions of the text. In fact, many important works have received varying interpretations throughout their existence, suggesting that reader-based interpretations similar to those described by reader-response theory had been operating long before the theory's principles were articulated. And while in some cases critics' textual interpretations based on reader-response theory have unfairly burdened literature of the past with contemporary ideologies, legitimate additional insights and understandings continue to emerge years after an ostensibly definitive interpretation of a major work has been articulated. By regarding a reader's personal interpretation of literary works as not only valid but also useful in understanding the works, reader-response theory legitimizes a wide range of perspectives on these works and thereby reinforces the notion of them as fluid and lively forms of discourse that can continue to support new interpretations long after their original composition.
200406_1-RC_3_19
[ "Reader-response theory is reflected in interpretations that have been given throughout history and that bring additional insight to literary study.", "Reader-response theory legitimizes conflicting interpretations that collectively diminish the understanding of a work.", "Reader-response theory fails to provide a unified view of the meaning of a literary work.", "Reader-response theory claims that a text cannot have meaning without a reader.", "Reader-response theory recognizes meanings in a text that were never intended by the author." ]
1
Which one of the following, if true, most weakens the author's argument concerning reader-response theory?
Reader-response theory, a type of literary theory that arose in reaction to formalist literary criticism, has endeavored to shift the emphasis in the interpretation of literature from the text itself to the contributions of readers to the meaning of a text. According to literary critics who endorse reader-response theory, the literary text alone renders no meaning; it acquires meaning only when encountered by individual readers, who always bring varying presuppositions and ways of reading to bear on the text, giving rise to the possibility—even probability—of varying interpretations. This brand of criticism has met opposition from the formalists, who study the text alone and argue that reader-response theory can encourage and even validate fragmented views of a work, rather than the unified view acquired by examining only the content of the text. However, since no theory has a monopoly on divining meaning from a text, the formalists' view appears unnecessarily narrow. The proponents of formalism argue that their approach is firmly grounded in rational, objective principles, while reader-response theory lacks standards and verges on absolute subjectivity. After all, these proponents argue, no author can create a work that is packed with countless meanings. The meaning of a work of literature, the formalists would argue, may be obscure and somewhat arcane; yet, however hidden it may be, the author's intended meaning is legible within the work, and it is the critic's responsibility to search closely for this meaning. However, while a literary work is indeed encoded in various signs and symbols that must be translated for the work to be understood and appreciated, it is not a map. Any complicated literary work will invariably raise more questions than it answers. What is needed is a method that enables the critic to discern and make use of the rich stock of meanings created in encounters between texts and readers.
Emphasizing the varied presuppositions and perceptions that readers bring to the interpretations of a text can uncover hitherto unnoticed dimensions of the text. In fact, many important works have received varying interpretations throughout their existence, suggesting that reader-based interpretations similar to those described by reader-response theory had been operating long before the theory's principles were articulated. And while in some cases critics' textual interpretations based on reader-response theory have unfairly burdened literature of the past with contemporary ideologies, legitimate additional insights and understandings continue to emerge years after an ostensibly definitive interpretation of a major work has been articulated. By regarding a reader's personal interpretation of literary works as not only valid but also useful in understanding the works, reader-response theory legitimizes a wide range of perspectives on these works and thereby reinforces the notion of them as fluid and lively forms of discourse that can continue to support new interpretations long after their original composition.
200406_1-RC_3_20
[ "stress the intricacy and complexity of good literature", "grant that a reader must be guided by the text to some degree", "imply that no theory alone can fully explain a work of literature", "illustrate how a literary work differs from a map", "show that an inflexible standard of interpretation provides constant accuracy" ]
1
The author's reference to "various signs and symbols" (line 33) functions primarily to
Reader-response theory, a type of literary theory that arose in reaction to formalist literary criticism, has endeavored to shift the emphasis in the interpretation of literature from the text itself to the contributions of readers to the meaning of a text. According to literary critics who endorse reader-response theory, the literary text alone renders no meaning; it acquires meaning only when encountered by individual readers, who always bring varying presuppositions and ways of reading to bear on the text, giving rise to the possibility—even probability—of varying interpretations. This brand of criticism has met opposition from the formalists, who study the text alone and argue that reader-response theory can encourage and even validate fragmented views of a work, rather than the unified view acquired by examining only the content of the text. However, since no theory has a monopoly on divining meaning from a text, the formalists' view appears unnecessarily narrow. The proponents of formalism argue that their approach is firmly grounded in rational, objective principles, while reader-response theory lacks standards and verges on absolute subjectivity. After all, these proponents argue, no author can create a work that is packed with countless meanings. The meaning of a work of literature, the formalists would argue, may be obscure and somewhat arcane; yet, however hidden it may be, the author's intended meaning is legible within the work, and it is the critic's responsibility to search closely for this meaning. However, while a literary work is indeed encoded in various signs and symbols that must be translated for the work to be understood and appreciated, it is not a map. Any complicated literary work will invariably raise more questions than it answers. What is needed is a method that enables the critic to discern and make use of the rich stock of meanings created in encounters between texts and readers.
Emphasizing the varied presuppositions and perceptions that readers bring to the interpretations of a text can uncover hitherto unnoticed dimensions of the text. In fact, many important works have received varying interpretations throughout their existence, suggesting that reader-based interpretations similar to those described by reader-response theory had been operating long before the theory's principles were articulated. And while in some cases critics' textual interpretations based on reader-response theory have unfairly burdened literature of the past with contemporary ideologies, legitimate additional insights and understandings continue to emerge years after an ostensibly definitive interpretation of a major work has been articulated. By regarding a reader's personal interpretation of literary works as not only valid but also useful in understanding the works, reader-response theory legitimizes a wide range of perspectives on these works and thereby reinforces the notion of them as fluid and lively forms of discourse that can continue to support new interpretations long after their original composition.
200406_1-RC_3_21
[ "Formalists believe that responsible critics who focus on the text alone will tend to find the same or similar meanings in a literary work.", "Critical approaches similar to those described by formalism had been used to interpret texts long before the theory was articulated as such.", "Formalists would not find any meaning in a text whose author did not intend it to have any one particular meaning.", "A literary work from the past can rarely be read properly using reader-response theory when the subtleties of the work's social-historical context are not available.", "Formalism is much older and has more adherents than reader-response theory." ]
0
Which one of the following can most reasonably be inferred from the information in the passage?
Faculty researchers, particularly in scientific, engineering, and medical programs, often produce scientific discoveries and invent products or processes that have potential commercial value. Many institutions have invested heavily in the administrative infrastructure to develop and exploit these discoveries, and they expect to prosper both by an increased level of research support and by the royalties from licensing those discoveries having patentable commercial applications. However, although faculty themselves are unlikely to become entrepreneurs, an increasing number of highly valued researchers will be sought and sponsored by research corporations or have consulting contracts with commercial firms. One study of such entrepreneurship concluded that "if universities do not provide the flexibility needed to venture into business, faculty will be tempted to go to those institutions that are responsive to their commercialized desires." There is therefore a need to consider the different intellectual property policies that govern the commercial exploitation of faculty inventions in order to determine which would provide the appropriate level of flexibility. In a recent study of faculty rights, Patricia Chew has suggested a fourfold classification of institutional policies. A supramaximalist institution stakes out the broadest claim possible, asserting ownership not only of all intellectual property produced by faculty in the course of their employment while using university resources, but also for any inventions or patent rights from faculty activities, even those involving research sponsored by nonuniversity funders. A maximalist institution allows faculty ownership of inventions that do not arise either "in the course of the faculty's employment [or] from the faculty's use of university resources." This approach, although not as all-encompassing as that of the supramaximalist university, can affect virtually all of a faculty member's intellectual production. 
A resource-provider institution asserts a claim to faculty's intellectual product in those cases where "significant use" of university time and facilities is employed. Of course, what constitutes significant use of resources is a matter of institutional judgment. As Chew notes, in these policies "faculty rights, including the sharing of royalties, are the result of university benevolence and generosity. [However, this] presumption is contrary to the common law, which provides that faculty own their inventions." Others have pointed to this anomaly and, indeed, to the uncertain legal and historical basis upon which the ownership of intellectual property rests. Although these issues remain unsettled, and though universities may be overreaching due to faculty's limited knowledge of their rights, most major institutions behave in the ways that maximize university ownership and profit participation. But there is a fourth way, one that seems to be free from these particular issues. Faculty-oriented institutions assume that researchers own their own intellectual products and the rights to exploit them commercially, except in the development of public health inventions or if there is previously specified "substantial university involvement." At these institutions industry practice is effectively reversed, with the university benefiting in far fewer circumstances.
200406_1-RC_4_22
[ "While institutions expect to prosper from increased research support and royalties from patentable products resulting from faculty inventions, if they do not establish clear-cut policies governing ownership of these inventions, they run the risk of losing faculty to research corporations or commercial consulting contracts.", "The fourfold classification of institutional policies governing exploitation of faculty inventions is sufficient to categorize the variety of steps institutions are taking to ensure that faculty inventors will not be lured away by commercial firms or research corporations.", "To prevent the loss of faculty to commercial firms or research corporations, institutions will have to abandon their insistence on retaining maximum ownership of and profit from faculty inventions and adopt the common-law presumption that faculty alone own their inventions.", "While the policies of most institutions governing exploitation of faculty inventions seek to maximize university ownership of and profit from these inventions, another policy offers faculty greater flexibility to pursue their commercial interests by regarding faculty as the owners of their intellectual products.", "Most institutional policies governing exploitation of faculty inventions are indefensible because they run counter to common-law notions of ownership and copyright, but they usually go unchallenged because few faculty members are aware of what other options might be available to them." ]
3
Which one of the following most accurately summarizes the main point of the passage?
Faculty researchers, particularly in scientific, engineering, and medical programs, often produce scientific discoveries and invent products or processes that have potential commercial value. Many institutions have invested heavily in the administrative infrastructure to develop and exploit these discoveries, and they expect to prosper both by an increased level of research support and by the royalties from licensing those discoveries having patentable commercial applications. However, although faculty themselves are unlikely to become entrepreneurs, an increasing number of highly valued researchers will be sought and sponsored by research corporations or have consulting contracts with commercial firms. One study of such entrepreneurship concluded that "if universities do not provide the flexibility needed to venture into business, faculty will be tempted to go to those institutions that are responsive to their commercialized desires." There is therefore a need to consider the different intellectual property policies that govern the commercial exploitation of faculty inventions in order to determine which would provide the appropriate level of flexibility. In a recent study of faculty rights, Patricia Chew has suggested a fourfold classification of institutional policies. A supramaximalist institution stakes out the broadest claim possible, asserting ownership not only of all intellectual property produced by faculty in the course of their employment while using university resources, but also for any inventions or patent rights from faculty activities, even those involving research sponsored by nonuniversity funders. A maximalist institution allows faculty ownership of inventions that do not arise either "in the course of the faculty's employment [or] from the faculty's use of university resources." This approach, although not as all-encompassing as that of the supramaximalist university, can affect virtually all of a faculty member's intellectual production. 
A resource-provider institution asserts a claim to faculty's intellectual product in those cases where "significant use" of university time and facilities is employed. Of course, what constitutes significant use of resources is a matter of institutional judgment. As Chew notes, in these policies "faculty rights, including the sharing of royalties, are the result of university benevolence and generosity. [However, this] presumption is contrary to the common law, which provides that faculty own their inventions." Others have pointed to this anomaly and, indeed, to the uncertain legal and historical basis upon which the ownership of intellectual property rests. Although these issues remain unsettled, and though universities may be overreaching due to faculty's limited knowledge of their rights, most major institutions behave in the ways that maximize university ownership and profit participation. But there is a fourth way, one that seems to be free from these particular issues. Faculty-oriented institutions assume that researchers own their own intellectual products and the rights to exploit them commercially, except in the development of public health inventions or if there is previously specified "substantial university involvement." At these institutions industry practice is effectively reversed, with the university benefiting in far fewer circumstances.
200406_1-RC_4_23
[ "The policies are in keeping with the institution's financial interests.", "The policies are antithetical to the mission of a university.", "The policies do not have a significant impact on the research of faculty.", "The policies are invariably harmful to the motivation of faculty attempting to pursue research projects.", "The policies are illegal and possibly immoral." ]
0
Which one of the following most accurately characterizes the author's view regarding the institutional intellectual property policies of most universities?
Faculty researchers, particularly in scientific, engineering, and medical programs, often produce scientific discoveries and invent products or processes that have potential commercial value. Many institutions have invested heavily in the administrative infrastructure to develop and exploit these discoveries, and they expect to prosper both by an increased level of research support and by the royalties from licensing those discoveries having patentable commercial applications. However, although faculty themselves are unlikely to become entrepreneurs, an increasing number of highly valued researchers will be sought and sponsored by research corporations or have consulting contracts with commercial firms. One study of such entrepreneurship concluded that "if universities do not provide the flexibility needed to venture into business, faculty will be tempted to go to those institutions that are responsive to their commercialized desires." There is therefore a need to consider the different intellectual property policies that govern the commercial exploitation of faculty inventions in order to determine which would provide the appropriate level of flexibility. In a recent study of faculty rights, Patricia Chew has suggested a fourfold classification of institutional policies. A supramaximalist institution stakes out the broadest claim possible, asserting ownership not only of all intellectual property produced by faculty in the course of their employment while using university resources, but also for any inventions or patent rights from faculty activities, even those involving research sponsored by nonuniversity funders. A maximalist institution allows faculty ownership of inventions that do not arise either "in the course of the faculty's employment [or] from the faculty's use of university resources." This approach, although not as all-encompassing as that of the supramaximalist university, can affect virtually all of a faculty member's intellectual production. 
A resource-provider institution asserts a claim to faculty's intellectual product in those cases where "significant use" of university time and facilities is employed. Of course, what constitutes significant use of resources is a matter of institutional judgment. As Chew notes, in these policies "faculty rights, including the sharing of royalties, are the result of university benevolence and generosity. [However, this] presumption is contrary to the common law, which provides that faculty own their inventions." Others have pointed to this anomaly and, indeed, to the uncertain legal and historical basis upon which the ownership of intellectual property rests. Although these issues remain unsettled, and though universities may be overreaching due to faculty's limited knowledge of their rights, most major institutions behave in the ways that maximize university ownership and profit participation. But there is a fourth way, one that seems to be free from these particular issues. Faculty-oriented institutions assume that researchers own their own intellectual products and the rights to exploit them commercially, except in the development of public health inventions or if there is previously specified "substantial university involvement." At these institutions industry practice is effectively reversed, with the university benefiting in far fewer circumstances.
200406_1-RC_4_24
[ "an institution in which faculty own the right to some inventions they create outside the institution", "an institution in which faculty own all their inventions, regardless of any circumstances, but grant the institution the right to collect a portion of their royalties", "an institution in which all inventions developed by faculty with institutional resources become the property of the institution", "an institution in which all faculty inventions related to public health become the property of the institution", "an institution in which some faculty inventions created with institutional resources remain the property of the faculty member" ]
1
Which one of the following institutions would NOT be covered by the fourfold classification proposed by Chew?
200406_1-RC_4_25
[ "commercial firm", "supramaximalist university", "maximalist university", "resource-provider university", "faculty-oriented university" ]
3
The passage suggests that the type of institution in which employees are likely to have the most uncertainty about who owns their intellectual products is the
200406_1-RC_4_26
[ "vagueness on the issue of what constitutes university as opposed to nonuniversity resources", "insistence on reaping substantial financial benefit from faculty inventions while still providing faculty with unlimited flexibility", "inversion of the usual practices regarding exploitation of faculty inventions in order to give faculty greater flexibility", "insistence on ownership of faculty inventions developed outside the institution in order to maximize financial benefit to the university", "reliance on the extent of use of institutional resources as the sole criterion in determining ownership of faculty inventions" ]
4
According to the passage, what distinguishes a resource-provider institution from the other types of institutions identified by Chew is its
200406_1-RC_4_27
[ "explain why institutions may wish to develop intellectual property policies that are responsive to certain faculty needs", "draw a contrast between the worlds of academia and business that will be explored in detail later in the passage", "defend the intellectual property rights of faculty inventors against encroachment by the institutions that employ them", "describe the previous research that led Chew to study institutional policies governing ownership of faculty inventions", "demonstrate that some faculty inventors would be better off working for commercial firms" ]
0
The author of the passage most likely quotes one study of entrepreneurship in lines 16–19 primarily in order to
200406_1-RC_4_28
[ "Supramaximalist institutions run the greatest risk of losing faculty to jobs in institutions more responsive to the inventor's financial interests.", "A faculty-oriented institution will make no claim of ownership to a faculty invention that is unrelated to public health and created without university involvement.", "Faculty at maximalist institutions rarely produce inventions outside the institution without using the institution's resources.", "There is little practical difference between the policies of supramaximalist and maximalist institutions.", "The degree of ownership claimed by a resource-provider institution of the work of its faculty will not vary from case to case." ]
4
The passage suggests each of the following EXCEPT:
The Canadian Auto Workers' (CAW) Legal Services Plan, designed to give active and retired autoworkers and their families access to totally prepaid or partially reimbursed legal services, has been in operation since late 1985. Plan members have the option of using either the plan's staff lawyers, whose services are fully covered by the cost of membership in the plan, or an outside lawyer. Outside lawyers, in turn, can either sign up with the plan as a "cooperating lawyer" and accept the CAW's fee schedule as payment in full, or they can charge a higher fee and collect the balance from the client. Autoworkers appear to have embraced the notion of prepaid legal services: 45 percent of eligible union members were enrolled in the plan by 1988. Moreover, the idea of prepaid legal services has been spreading in Canada. A department store is even offering a plan to holders of its credit card. While many plan members seem to be happy to get reduced-cost legal help, many lawyers are concerned about the plan's effect on their profession, especially its impact on prices for legal services. Some point out that even though most lawyers have not joined the plan as cooperating lawyers, legal fees in the cities in which the CAW plan operates have been depressed, in some cases to an unprofitable level. The directors of the plan, however, claim that both clients and lawyers benefit from their arrangement. For while the clients get ready access to reduced-price services, lawyers get professional contact with people who would not otherwise be using legal services, which helps generate even more business for their firms. Experience shows, the directors say, that if people are referred to a firm and receive excellent service, the firm will get three to four other referrals who are not plan subscribers and who would therefore pay the firm's standard rate. 
But it is unlikely that increased use of such plans will result in long-term client satisfaction or in a substantial increase in profits for law firms. Since lawyers with established reputations and client bases can benefit little, if at all, from participation, the plans function largely as marketing devices for lawyers who have yet to establish themselves. While many of these lawyers are no doubt very able and conscientious, they will tend to have less expertise and to provide less satisfaction to clients. At the same time, the downward pressure on fees will mean that the full-fee referrals that proponents say will come through plan participation may not make up for a firm's investment in providing services at low plan rates. And since lowered fees provide little incentive for lawyers to devote more than minimal effort to cases, a "volume discount" approach toward the practice of law will mean less time devoted to complex cases and a general lowering of quality for clients.
200410_1-RC_1_1
[ "In the short term, prepaid legal plans such as the CAW Legal Services Plan appear to be beneficial to both lawyers and clients, but in the long run lawyers will profit at the expense of clients.", "The CAW Legal Services Plan and other similar plans represent a controversial, but probably effective, way of bringing down the cost of legal services to clients and increasing lawyers' clientele.", "The use of prepaid legal plans such as that of the CAW should be rejected in favor of a more equitable means of making legal services more generally affordable.", "In spite of widespread consumer support for legal plans such as that offered by the CAW, lawyers generally criticize such plans, mainly because of their potential financial impact on the legal profession.", "Although they have so far attracted many subscribers, it is doubtful whether the CAW Legal Services Plan and other similar prepaid plans will benefit lawyers and clients in the long run." ]
4
Which one of the following most accurately expresses the main point of the passage?
200410_1-RC_1_2
[ "compare and contrast legal plans with the traditional way of paying for legal services", "explain the growing popularity of legal plans", "trace the effect of legal plans on prices of legal services", "caution that increased use of legal plans is potentially harmful to the legal profession and to clients", "advocate reforms to legal plans as presently constituted" ]
3
The primary purpose of the passage is to
200410_1-RC_1_3
[ "results that are largely at odds with those predicted by lawyers who criticize the plans", "a lowering of the rates such plans charge their members", "forced participation of lawyers who can benefit little from association with the plans", "an eventual increase in profits for lawyers from client usage of the plans", "a reduction in the time lawyers devote to complex cases" ]
4
Which one of the following does the author predict will be a consequence of increased use of legal plans?
200410_1-RC_1_4
[ "a description of a recently implemented set of procedures and policies; a summary of the results of that implementation; a proposal of refinements in those policies and procedures", "an evaluation of a recent phenomenon; a comparison of that phenomenon with related past phenomena; an expression of the author's approval of that phenomenon", "a presentation of a proposal; a discussion of the prospects for implementing that proposal; a recommendation by the author that the proposal be rejected", "a description of an innovation; a report of reasoning against and reasoning favoring that innovation; argumentation by the author concerning that innovation", "an explanation of a recent occurrence; an evaluation of the practical value of that occurrence; a presentation of further data regarding that occurrence" ]
3
Which one of the following sequences most accurately and completely corresponds to the presentation of the material in the passage?
200410_1-RC_1_5
[ "Lawyers can expect to gain expertise in a wide variety of legal services by availing themselves of the access to diverse clientele that plan participation affords.", "Experienced cooperating lawyers are likely to enjoy the higher profits of long-term, complex cases, for which new lawyers are not suited.", "Lower rates of profit will be offset by a higher volume of clients and new business through word-of-mouth recommendations.", "Lower fees tend to attract clients away from established, nonparticipating law firms.", "With all legal fees moving downward to match the plans' schedules, the profession will respond to market forces." ]
2
The passage most strongly suggests that, according to proponents of prepaid legal plans, cooperating lawyers benefit from taking clients at lower fees in which one of the following ways?
The Canadian Auto Workers' (CAW) Legal Services Plan, designed to give active and retired autoworkers and their families access to totally prepaid or partially reimbursed legal services, has been in operation since late 1985. Plan members have the option of using either the plan's staff lawyers, whose services are fully covered by the cost of membership in the plan, or an outside lawyer. Outside lawyers, in turn, can either sign up with the plan as a "cooperating lawyer" and accept the CAW's fee schedule as payment in full, or they can charge a higher fee and collect the balance from the client. Autoworkers appear to have embraced the notion of prepaid legal services: 45 percent of eligible union members were enrolled in the plan by 1988. Moreover, the idea of prepaid legal services has been spreading in Canada. A department store is even offering a plan to holders of its credit card. While many plan members seem to be happy to get reduced-cost legal help, many lawyers are concerned about the plan's effect on their profession, especially its impact on prices for legal services. Some point out that even though most lawyers have not joined the plan as cooperating lawyers, legal fees in the cities in which the CAW plan operates have been depressed, in some cases to an unprofitable level. The directors of the plan, however, claim that both clients and lawyers benefit from their arrangement. For while the clients get ready access to reduced-price services, lawyers get professional contact with people who would not otherwise be using legal services, which helps generate even more business for their firms. Experience shows, the directors say, that if people are referred to a firm and receive excellent service, the firm will get three to four other referrals who are not plan subscribers and who would therefore pay the firm's standard rate. 
But it is unlikely that increased use of such plans will result in long-term client satisfaction or in a substantial increase in profits for law firms. Since lawyers with established reputations and client bases can benefit little, if at all, from participation, the plans function largely as marketing devices for lawyers who have yet to establish themselves. While many of these lawyers are no doubt very able and conscientious, they will tend to have less expertise and to provide less satisfaction to clients. At the same time, the downward pressure on fees will mean that the full-fee referrals that proponents say will come through plan participation may not make up for a firm's investment in providing services at low plan rates. And since lowered fees provide little incentive for lawyers to devote more than minimal effort to cases, a "volume discount" approach toward the practice of law will mean less time devoted to complex cases and a general lowering of quality for clients.
200410_1-RC_1_6
[ "They can enjoy benefits beyond the use of the services of the plan's staff lawyers.", "So far, they generally believe the quality of services they receive from the plan's staff lawyers is as high as that provided by other lawyers.", "Most of them consult lawyers only for relatively simple and routine matters.", "They must pay a fee above the cost of membership for the services of an outside lawyer.", "They do not include only active and retired autoworkers and their families." ]
0
According to the passage, which one of the following is true of CAW Legal Services Plan members?
The Canadian Auto Workers' (CAW) Legal Services Plan, designed to give active and retired autoworkers and their families access to totally prepaid or partially reimbursed legal services, has been in operation since late 1985. Plan members have the option of using either the plan's staff lawyers, whose services are fully covered by the cost of membership in the plan, or an outside lawyer. Outside lawyers, in turn, can either sign up with the plan as a "cooperating lawyer" and accept the CAW's fee schedule as payment in full, or they can charge a higher fee and collect the balance from the client. Autoworkers appear to have embraced the notion of prepaid legal services: 45 percent of eligible union members were enrolled in the plan by 1988. Moreover, the idea of prepaid legal services has been spreading in Canada. A department store is even offering a plan to holders of its credit card. While many plan members seem to be happy to get reduced-cost legal help, many lawyers are concerned about the plan's effect on their profession, especially its impact on prices for legal services. Some point out that even though most lawyers have not joined the plan as cooperating lawyers, legal fees in the cities in which the CAW plan operates have been depressed, in some cases to an unprofitable level. The directors of the plan, however, claim that both clients and lawyers benefit from their arrangement. For while the clients get ready access to reduced-price services, lawyers get professional contact with people who would not otherwise be using legal services, which helps generate even more business for their firms. Experience shows, the directors say, that if people are referred to a firm and receive excellent service, the firm will get three to four other referrals who are not plan subscribers and who would therefore pay the firm's standard rate. 
But it is unlikely that increased use of such plans will result in long-term client satisfaction or in a substantial increase in profits for law firms. Since lawyers with established reputations and client bases can benefit little, if at all, from participation, the plans function largely as marketing devices for lawyers who have yet to establish themselves. While many of these lawyers are no doubt very able and conscientious, they will tend to have less expertise and to provide less satisfaction to clients. At the same time, the downward pressure on fees will mean that the full-fee referrals that proponents say will come through plan participation may not make up for a firm's investment in providing services at low plan rates. And since lowered fees provide little incentive for lawyers to devote more than minimal effort to cases, a "volume discount" approach toward the practice of law will mean less time devoted to complex cases and a general lowering of quality for clients.
200410_1-RC_1_7
[ "It points to an aspect of legal plans that the author believes will be detrimental to the quality of legal services.", "It is identified by the author as one of the primary ways in which plan administrators believe themselves to be contributing materially to the legal profession in return for lawyers' participation.", "It identifies what the author considers to be one of the few unequivocal benefits that legal plans can provide.", "It is reported as part of several arguments that the author attributes to established lawyers who oppose plan participation.", "It describes one of the chief burdens of lawyers who have yet to establish themselves and offers an explanation of their advocacy of legal plans." ]
0
Which one of the following most accurately represents the primary function of the author's mention of marketing devices (line 43)?
In the field of historiography—the writing of history based on a critical examination of authentic primary information sources—one area that has recently attracted attention focuses on the responses of explorers and settlers to new landscapes in order to provide insights into the transformations the landscape itself has undergone as a result of settlement. In this endeavor historiographers examining the history of the Pacific Coast of the United States have traditionally depended on the records left by European American explorers of the nineteenth century who, as commissioned agents of the U.S. government, were instructed to report thoroughly their findings in writing. But in furthering this investigation some historiographers have recently recognized the need to expand their definition of what a source is. They maintain that the sources traditionally accepted as documenting the history of the Pacific Coast have too often omitted the response of Asian settlers to this territory. In part this is due to the dearth of written records left by Asian settlers; in contrast to the commissioned agents, most of the people who first came to western North America from Asia during this same period did not focus on developing a self-conscious written record of their involvement with the landscape. But because a full study of a culture's historical relationship to its land cannot confine itself to a narrow record of experience, these historiographers have begun to recognize the value of other kinds of evidence, such as the actions of Asian settlers. As a case in point, the role of Chinese settlers in expanding agriculture throughout the Pacific Coast territory is integral to the history of the region. Without access to the better land, Chinese settlers looked for agricultural potential in this generally arid region where other settlers did not. 
For example, where settlers of European descent looked at willows and saw only useless, untillable swamp, Chinese settlers saw fresh water, fertile soil, and the potential for bringing water to more arid areas via irrigation. Where other settlers who looked at certain weeds, such as wild mustard, generally saw a nuisance, Chinese settlers saw abundant raw material for valuable spices from a plant naturally suited to the local soil and climate. Given their role in the labor force shaping this territory in the nineteenth century, the Chinese settlers offered more than just a new view of the land. Their vision was reinforced by specialized skills involving swamp reclamation and irrigation systems, which helped lay the foundation for the now well-known and prosperous agribusiness of the region. That 80 percent of the area's cropland is now irrigated and that the region is currently the top producer of many specialty crops cannot be fully understood by historiographers without attention to the input of Chinese settlers as reconstructed from their interactions with that landscape.
200410_1-RC_2_8
[ "The history of settlement along the Pacific Coast of the U.S., as understood by most historiographers, is confirmed by evidence reconstructed from the actions of Asian settlers.", "Asian settlers on the Pacific Coast of the U.S. left a record of their experiences that traditional historiographers believed to be irrelevant.", "To understand Asian settlers' impact on the history of the Pacific Coast of the U.S., historiographers have had to recognize the value of nontraditional kinds of historiographic evidence.", "Spurred by new findings regarding Asian settlement on the Pacific Coast of the U.S., historiographers have begun to debate the methodological foundations of historiography.", "By examining only written information, historiography as it is traditionally practiced has produced inaccurate historical accounts." ]
2
Which one of the following most accurately states the main point of the passage?
In the field of historiography—the writing of history based on a critical examination of authentic primary information sources—one area that has recently attracted attention focuses on the responses of explorers and settlers to new landscapes in order to provide insights into the transformations the landscape itself has undergone as a result of settlement. In this endeavor historiographers examining the history of the Pacific Coast of the United States have traditionally depended on the records left by European American explorers of the nineteenth century who, as commissioned agents of the U.S. government, were instructed to report thoroughly their findings in writing. But in furthering this investigation some historiographers have recently recognized the need to expand their definition of what a source is. They maintain that the sources traditionally accepted as documenting the history of the Pacific Coast have too often omitted the response of Asian settlers to this territory. In part this is due to the dearth of written records left by Asian settlers; in contrast to the commissioned agents, most of the people who first came to western North America from Asia during this same period did not focus on developing a self-conscious written record of their involvement with the landscape. But because a full study of a culture's historical relationship to its land cannot confine itself to a narrow record of experience, these historiographers have begun to recognize the value of other kinds of evidence, such as the actions of Asian settlers. As a case in point, the role of Chinese settlers in expanding agriculture throughout the Pacific Coast territory is integral to the history of the region. Without access to the better land, Chinese settlers looked for agricultural potential in this generally arid region where other settlers did not. 
For example, where settlers of European descent looked at willows and saw only useless, untillable swamp, Chinese settlers saw fresh water, fertile soil, and the potential for bringing water to more arid areas via irrigation. Where other settlers who looked at certain weeds, such as wild mustard, generally saw a nuisance, Chinese settlers saw abundant raw material for valuable spices from a plant naturally suited to the local soil and climate. Given their role in the labor force shaping this territory in the nineteenth century, the Chinese settlers offered more than just a new view of the land. Their vision was reinforced by specialized skills involving swamp reclamation and irrigation systems, which helped lay the foundation for the now well-known and prosperous agribusiness of the region. That 80 percent of the area's cropland is now irrigated and that the region is currently the top producer of many specialty crops cannot be fully understood by historiographers without attention to the input of Chinese settlers as reconstructed from their interactions with that landscape.
200410_1-RC_2_9
[ "to suggest that Chinese settlers followed typical settlement patterns in this region during the nineteenth century", "to argue that little written evidence of Chinese settlers' practices survives", "to provide examples illustrating the unique view Asian settlers had of the land", "to demonstrate that the history of settlement in the region has become a point of contention among historiographers", "to claim that the historical record provided by the actions of Asian settlers is inconsistent with history as derived from traditional sources" ]
2
Which one of the following most accurately describes the author's primary purpose in discussing Chinese settlers in the third paragraph?
In the field of historiography—the writing of history based on a critical examination of authentic primary information sources—one area that has recently attracted attention focuses on the responses of explorers and settlers to new landscapes in order to provide insights into the transformations the landscape itself has undergone as a result of settlement. In this endeavor historiographers examining the history of the Pacific Coast of the United States have traditionally depended on the records left by European American explorers of the nineteenth century who, as commissioned agents of the U.S. government, were instructed to report thoroughly their findings in writing. But in furthering this investigation some historiographers have recently recognized the need to expand their definition of what a source is. They maintain that the sources traditionally accepted as documenting the history of the Pacific Coast have too often omitted the response of Asian settlers to this territory. In part this is due to the dearth of written records left by Asian settlers; in contrast to the commissioned agents, most of the people who first came to western North America from Asia during this same period did not focus on developing a self-conscious written record of their involvement with the landscape. But because a full study of a culture's historical relationship to its land cannot confine itself to a narrow record of experience, these historiographers have begun to recognize the value of other kinds of evidence, such as the actions of Asian settlers. As a case in point, the role of Chinese settlers in expanding agriculture throughout the Pacific Coast territory is integral to the history of the region. Without access to the better land, Chinese settlers looked for agricultural potential in this generally arid region where other settlers did not. 
For example, where settlers of European descent looked at willows and saw only useless, untillable swamp, Chinese settlers saw fresh water, fertile soil, and the potential for bringing water to more arid areas via irrigation. Where other settlers who looked at certain weeds, such as wild mustard, generally saw a nuisance, Chinese settlers saw abundant raw material for valuable spices from a plant naturally suited to the local soil and climate. Given their role in the labor force shaping this territory in the nineteenth century, the Chinese settlers offered more than just a new view of the land. Their vision was reinforced by specialized skills involving swamp reclamation and irrigation systems, which helped lay the foundation for the now well-known and prosperous agribusiness of the region. That 80 percent of the area's cropland is now irrigated and that the region is currently the top producer of many specialty crops cannot be fully understood by historiographers without attention to the input of Chinese settlers as reconstructed from their interactions with that landscape.
200410_1-RC_2_10
[ "They were written both before and after Asian settlers arrived in the area.", "They include accounts by Native Americans in the area.", "They are primarily concerned with potential agricultural uses of the land.", "They focus primarily on the presence of water sources in the region.", "They are accounts left by European American explorers." ]
4
The passage states that the primary traditional historiographic sources of information about the history of the Pacific Coast of the U.S. have which one of the following characteristics?
In the field of historiography—the writing of history based on a critical examination of authentic primary information sources—one area that has recently attracted attention focuses on the responses of explorers and settlers to new landscapes in order to provide insights into the transformations the landscape itself has undergone as a result of settlement. In this endeavor historiographers examining the history of the Pacific Coast of the United States have traditionally depended on the records left by European American explorers of the nineteenth century who, as commissioned agents of the U.S. government, were instructed to report thoroughly their findings in writing. But in furthering this investigation some historiographers have recently recognized the need to expand their definition of what a source is. They maintain that the sources traditionally accepted as documenting the history of the Pacific Coast have too often omitted the response of Asian settlers to this territory. In part this is due to the dearth of written records left by Asian settlers; in contrast to the commissioned agents, most of the people who first came to western North America from Asia during this same period did not focus on developing a self-conscious written record of their involvement with the landscape. But because a full study of a culture's historical relationship to its land cannot confine itself to a narrow record of experience, these historiographers have begun to recognize the value of other kinds of evidence, such as the actions of Asian settlers. As a case in point, the role of Chinese settlers in expanding agriculture throughout the Pacific Coast territory is integral to the history of the region. Without access to the better land, Chinese settlers looked for agricultural potential in this generally arid region where other settlers did not. 
For example, where settlers of European descent looked at willows and saw only useless, untillable swamp, Chinese settlers saw fresh water, fertile soil, and the potential for bringing water to more arid areas via irrigation. Where other settlers who looked at certain weeds, such as wild mustard, generally saw a nuisance, Chinese settlers saw abundant raw material for valuable spices from a plant naturally suited to the local soil and climate. Given their role in the labor force shaping this territory in the nineteenth century, the Chinese settlers offered more than just a new view of the land. Their vision was reinforced by specialized skills involving swamp reclamation and irrigation systems, which helped lay the foundation for the now well-known and prosperous agribusiness of the region. That 80 percent of the area's cropland is now irrigated and that the region is currently the top producer of many specialty crops cannot be fully understood by historiographers without attention to the input of Chinese settlers as reconstructed from their interactions with that landscape.
200410_1-RC_2_11
[ "Examining the actions not only of Asian settlers but of other cultural groups of the Pacific Coast of the U.S. is necessary to a full understanding of the impact of settlement on the landscape there.", "The significance of certain actions to the writing of history may be recognized by one group of historiographers but not another.", "Recognizing the actions of Asian settlers adds to but does not complete the writing of the history of the Pacific Coast of the U.S.", "By recognizing as evidence the actions of people, historiographers expand the definition of what a source is.", "The expanded definition of a source will probably not be relevant to studies of regions that have no significant immigration of non-Europeans." ]
4
The author would most likely disagree with which one of the following statements?
In the field of historiography—the writing of history based on a critical examination of authentic primary information sources—one area that has recently attracted attention focuses on the responses of explorers and settlers to new landscapes in order to provide insights into the transformations the landscape itself has undergone as a result of settlement. In this endeavor historiographers examining the history of the Pacific Coast of the United States have traditionally depended on the records left by European American explorers of the nineteenth century who, as commissioned agents of the U.S. government, were instructed to report thoroughly their findings in writing. But in furthering this investigation some historiographers have recently recognized the need to expand their definition of what a source is. They maintain that the sources traditionally accepted as documenting the history of the Pacific Coast have too often omitted the response of Asian settlers to this territory. In part this is due to the dearth of written records left by Asian settlers; in contrast to the commissioned agents, most of the people who first came to western North America from Asia during this same period did not focus on developing a self-conscious written record of their involvement with the landscape. But because a full study of a culture's historical relationship to its land cannot confine itself to a narrow record of experience, these historiographers have begun to recognize the value of other kinds of evidence, such as the actions of Asian settlers. As a case in point, the role of Chinese settlers in expanding agriculture throughout the Pacific Coast territory is integral to the history of the region. Without access to the better land, Chinese settlers looked for agricultural potential in this generally arid region where other settlers did not. 
For example, where settlers of European descent looked at willows and saw only useless, untillable swamp, Chinese settlers saw fresh water, fertile soil, and the potential for bringing water to more arid areas via irrigation. Where other settlers who looked at certain weeds, such as wild mustard, generally saw a nuisance, Chinese settlers saw abundant raw material for valuable spices from a plant naturally suited to the local soil and climate. Given their role in the labor force shaping this territory in the nineteenth century, the Chinese settlers offered more than just a new view of the land. Their vision was reinforced by specialized skills involving swamp reclamation and irrigation systems, which helped lay the foundation for the now well-known and prosperous agribusiness of the region. That 80 percent of the area's cropland is now irrigated and that the region is currently the top producer of many specialty crops cannot be fully understood by historiographers without attention to the input of Chinese settlers as reconstructed from their interactions with that landscape.
200410_1-RC_2_12
[ "new ideas for utilizing local plants", "a new view of the land", "specialized agricultural skills", "knowledge of agribusiness practices", "knowledge of irrigation systems" ]
3
According to the passage, each of the following was an aspect of Chinese settlers' initial interactions with the landscape of the Pacific Coast of the U.S. EXCEPT:
In the field of historiography—the writing of history based on a critical examination of authentic primary information sources—one area that has recently attracted attention focuses on the responses of explorers and settlers to new landscapes in order to provide insights into the transformations the landscape itself has undergone as a result of settlement. In this endeavor historiographers examining the history of the Pacific Coast of the United States have traditionally depended on the records left by European American explorers of the nineteenth century who, as commissioned agents of the U.S. government, were instructed to report thoroughly their findings in writing. But in furthering this investigation some historiographers have recently recognized the need to expand their definition of what a source is. They maintain that the sources traditionally accepted as documenting the history of the Pacific Coast have too often omitted the response of Asian settlers to this territory. In part this is due to the dearth of written records left by Asian settlers; in contrast to the commissioned agents, most of the people who first came to western North America from Asia during this same period did not focus on developing a self-conscious written record of their involvement with the landscape. But because a full study of a culture's historical relationship to its land cannot confine itself to a narrow record of experience, these historiographers have begun to recognize the value of other kinds of evidence, such as the actions of Asian settlers. As a case in point, the role of Chinese settlers in expanding agriculture throughout the Pacific Coast territory is integral to the history of the region. Without access to the better land, Chinese settlers looked for agricultural potential in this generally arid region where other settlers did not. 
For example, where settlers of European descent looked at willows and saw only useless, untillable swamp, Chinese settlers saw fresh water, fertile soil, and the potential for bringing water to more arid areas via irrigation. Where other settlers who looked at certain weeds, such as wild mustard, generally saw a nuisance, Chinese settlers saw abundant raw material for valuable spices from a plant naturally suited to the local soil and climate. Given their role in the labor force shaping this territory in the nineteenth century, the Chinese settlers offered more than just a new view of the land. Their vision was reinforced by specialized skills involving swamp reclamation and irrigation systems, which helped lay the foundation for the now well-known and prosperous agribusiness of the region. That 80 percent of the area's cropland is now irrigated and that the region is currently the top producer of many specialty crops cannot be fully understood by historiographers without attention to the input of Chinese settlers as reconstructed from their interactions with that landscape.
200410_1-RC_2_13
[ "Most Chinese settlers came to the Pacific Coast of the U.S. because the climate was similar to that with which they were familiar.", "Chinese agricultural methods in the nineteenth century included knowledge of swamp reclamation.", "Settlers of European descent used wild mustard seed as a spice.", "Because of the abundance of written sources available, it is not worthwhile to examine the actions of European settlers.", "What written records were left by Asian settlers were neglected and consequently lost to scholarly research." ]
1
Which one of the following can most reasonably be inferred from the passage?
In the field of historiography—the writing of history based on a critical examination of authentic primary information sources—one area that has recently attracted attention focuses on the responses of explorers and settlers to new landscapes in order to provide insights into the transformations the landscape itself has undergone as a result of settlement. In this endeavor historiographers examining the history of the Pacific Coast of the United States have traditionally depended on the records left by European American explorers of the nineteenth century who, as commissioned agents of the U.S. government, were instructed to report thoroughly their findings in writing. But in furthering this investigation some historiographers have recently recognized the need to expand their definition of what a source is. They maintain that the sources traditionally accepted as documenting the history of the Pacific Coast have too often omitted the response of Asian settlers to this territory. In part this is due to the dearth of written records left by Asian settlers; in contrast to the commissioned agents, most of the people who first came to western North America from Asia during this same period did not focus on developing a self-conscious written record of their involvement with the landscape. But because a full study of a culture's historical relationship to its land cannot confine itself to a narrow record of experience, these historiographers have begun to recognize the value of other kinds of evidence, such as the actions of Asian settlers. As a case in point, the role of Chinese settlers in expanding agriculture throughout the Pacific Coast territory is integral to the history of the region. Without access to the better land, Chinese settlers looked for agricultural potential in this generally arid region where other settlers did not. 
For example, where settlers of European descent looked at willows and saw only useless, untillable swamp, Chinese settlers saw fresh water, fertile soil, and the potential for bringing water to more arid areas via irrigation. Where other settlers who looked at certain weeds, such as wild mustard, generally saw a nuisance, Chinese settlers saw abundant raw material for valuable spices from a plant naturally suited to the local soil and climate. Given their role in the labor force shaping this territory in the nineteenth century, the Chinese settlers offered more than just a new view of the land. Their vision was reinforced by specialized skills involving swamp reclamation and irrigation systems, which helped lay the foundation for the now well-known and prosperous agribusiness of the region. That 80 percent of the area's cropland is now irrigated and that the region is currently the top producer of many specialty crops cannot be fully understood by historiographers without attention to the input of Chinese settlers as reconstructed from their interactions with that landscape.
200410_1-RC_2_14
[ "Market research of agribusinesses owned by descendants of Chinese settlers shows that the market for the region's specialty crops has grown substantially faster than the market for any other crops in the last decade.", "Nineteenth-century surveying records indicate that the lands now cultivated by specialty crop businesses owned by descendants of Chinese settlers were formerly swamp lands.", "Research by university agricultural science departments proves that the formerly arid lands now cultivated by large agribusinesses contain extremely fertile soil when they are sufficiently irrigated.", "A technological history tracing the development of irrigation systems in the region reveals that their efficiency has increased steadily since the nineteenth century.", "Weather records compiled over the previous century demonstrate that the weather patterns in the region are well-suited to growing certain specialty crops as long as they are irrigated." ]
1
Which one of the following, if true, would most help to strengthen the author's main claim in the last sentence of the passage?
The survival of nerve cells, as well as their performance of some specialized functions, is regulated by chemicals known as neurotrophic factors, which are produced in the bodies of animals, including humans. Rita Levi-Montalcini's discovery in the 1950s of the first of these agents, a hormonelike substance now known as NGF, was a crucial development in the history of biochemistry, which led to Levi-Montalcini sharing the Nobel Prize for medicine in 1986. In the mid-1940s, Levi-Montalcini had begun by hypothesizing that many of the immature nerve cells produced in the development of an organism are normally programmed to die. In order to confirm this theory, she conducted research that in 1949 found that, when embryos are in the process of forming their nervous systems, they produce many more nerve cells than are finally required, the number that survives eventually adjusting itself to the volume of tissue to be supplied with nerves. A further phase of the experimentation, which led to Levi-Montalcini's identification of the substance that controls this process, began with her observation that the development of nerves in chick embryos could be stimulated by implanting a certain variety of mouse tumor in the embryos. She theorized that a chemical produced by the tumors was responsible for the observed nerve growth. To investigate this hypothesis, she used the then new technique of tissue culture, by which specific types of body cells can be made to grow outside the organism from which they are derived. Within twenty-four hours, her tissue cultures of chick embryo extracts developed dense halos of nerve tissue near the places in the culture where she had added the mouse tumor. Further research identified a specific substance contributed by the mouse tumors that was responsible for the effects Levi-Montalcini had observed: a protein that she named "nerve growth factor" (NGF). NGF was the first of many cell-growth factors to be found in the bodies of animals. 
Through Levi-Montalcini's work and other subsequent research, it has been determined that this substance is present in many tissues and biological fluids, and that it is especially concentrated in some organs. In developing organisms, nerve cells apparently receive this growth factor locally from the cells of muscles or other organs to which they will form connections for transmission of nerve impulses, and sometimes from supporting cells intermingled with the nerve tissue. NGF seems to play two roles, serving initially to direct the developing nerve processes toward the correct, specific "target" cells with which they must connect, and later being necessary for the continued survival of those nerve cells. During some periods of their development, the types of nerve cells that are affected by NGF—primarily cells outside the brain and spinal cord—die if the factor is not present or if they encounter anti-NGF antibodies.
200410_1-RC_3_15
[ "Levi-Montalcini's discovery of neurotrophic factors as a result of research carried out in the 1940s was a major contribution to our understanding of the role of naturally occurring chemicals, especially NGF, in the development of chick embryos.", "Levi-Montalcini's discovery of NGF, a neurotrophic factor that stimulates the development of some types of nerve tissue and whose presence or absence in surrounding cells helps determine whether particular nerve cells will survive, was a pivotal development in biochemistry.", "NGF, which is necessary for the survival and proper functioning of nerve cells, was discovered by Levi-Montalcini in a series of experiments using the technique of tissue culture, which she devised in the 1940s.", "Partly as a result of Levi-Montalcini's research, it has been found that NGF and other neurotrophic factors are produced only by tissues to which nerves are already connected and that the presence of these factors is necessary for the health and proper functioning of nervous systems.", "NGF, a chemical that was discovered by Levi-Montalcini, directs the growth of nerve cells toward the cells with which they must connect and ensures the survival of those nerve cells throughout the life of the organism except when the organism produces anti-NGF antibodies." ]
1
Which one of the following most accurately expresses the main point of the passage?
The survival of nerve cells, as well as their performance of some specialized functions, is regulated by chemicals known as neurotrophic factors, which are produced in the bodies of animals, including humans. Rita Levi-Montalcini's discovery in the 1950s of the first of these agents, a hormonelike substance now known as NGF, was a crucial development in the history of biochemistry, which led to Levi-Montalcini sharing the Nobel Prize for medicine in 1986. In the mid-1940s, Levi-Montalcini had begun by hypothesizing that many of the immature nerve cells produced in the development of an organism are normally programmed to die. In order to confirm this theory, she conducted research that in 1949 found that, when embryos are in the process of forming their nervous systems, they produce many more nerve cells than are finally required, the number that survives eventually adjusting itself to the volume of tissue to be supplied with nerves. A further phase of the experimentation, which led to Levi-Montalcini's identification of the substance that controls this process, began with her observation that the development of nerves in chick embryos could be stimulated by implanting a certain variety of mouse tumor in the embryos. She theorized that a chemical produced by the tumors was responsible for the observed nerve growth. To investigate this hypothesis, she used the then new technique of tissue culture, by which specific types of body cells can be made to grow outside the organism from which they are derived. Within twenty-four hours, her tissue cultures of chick embryo extracts developed dense halos of nerve tissue near the places in the culture where she had added the mouse tumor. Further research identified a specific substance contributed by the mouse tumors that was responsible for the effects Levi-Montalcini had observed: a protein that she named "nerve growth factor" (NGF). NGF was the first of many cell-growth factors to be found in the bodies of animals. 
Through Levi-Montalcini's work and other subsequent research, it has been determined that this substance is present in many tissues and biological fluids, and that it is especially concentrated in some organs. In developing organisms, nerve cells apparently receive this growth factor locally from the cells of muscles or other organs to which they will form connections for transmission of nerve impulses, and sometimes from supporting cells intermingled with the nerve tissue. NGF seems to play two roles, serving initially to direct the developing nerve processes toward the correct, specific "target" cells with which they must connect, and later being necessary for the continued survival of those nerve cells. During some periods of their development, the types of nerve cells that are affected by NGF—primarily cells outside the brain and spinal cord—die if the factor is not present or if they encounter anti-NGF antibodies.
200410_1-RC_3_16
[ "paved the way for more specific knowledge of the processes governing the development of the nervous system", "demonstrated that a then new laboratory technique could yield important and unanticipated experimental results", "confirmed the hypothesis that many of a developing organism's immature nerve cells are normally programmed to die", "indicated that this substance stimulates observable biochemical reactions in the tissues of different species", "identified a specific substance, produced by mouse tumors, that can be used to stimulate nerve cell growth" ]
0
Based on the passage, the author would be most likely to believe that Levi-Montalcini's discovery of NGF is noteworthy primarily because it
The survival of nerve cells, as well as their performance of some specialized functions, is regulated by chemicals known as neurotrophic factors, which are produced in the bodies of animals, including humans. Rita Levi-Montalcini's discovery in the 1950s of the first of these agents, a hormonelike substance now known as NGF, was a crucial development in the history of biochemistry, which led to Levi-Montalcini sharing the Nobel Prize for medicine in 1986. In the mid-1940s, Levi-Montalcini had begun by hypothesizing that many of the immature nerve cells produced in the development of an organism are normally programmed to die. In order to confirm this theory, she conducted research that in 1949 found that, when embryos are in the process of forming their nervous systems, they produce many more nerve cells than are finally required, the number that survives eventually adjusting itself to the volume of tissue to be supplied with nerves. A further phase of the experimentation, which led to Levi-Montalcini's identification of the substance that controls this process, began with her observation that the development of nerves in chick embryos could be stimulated by implanting a certain variety of mouse tumor in the embryos. She theorized that a chemical produced by the tumors was responsible for the observed nerve growth. To investigate this hypothesis, she used the then new technique of tissue culture, by which specific types of body cells can be made to grow outside the organism from which they are derived. Within twenty-four hours, her tissue cultures of chick embryo extracts developed dense halos of nerve tissue near the places in the culture where she had added the mouse tumor. Further research identified a specific substance contributed by the mouse tumors that was responsible for the effects Levi-Montalcini had observed: a protein that she named "nerve growth factor" (NGF). NGF was the first of many cell-growth factors to be found in the bodies of animals. 
Through Levi-Montalcini's work and other subsequent research, it has been determined that this substance is present in many tissues and biological fluids, and that it is especially concentrated in some organs. In developing organisms, nerve cells apparently receive this growth factor locally from the cells of muscles or other organs to which they will form connections for transmission of nerve impulses, and sometimes from supporting cells intermingled with the nerve tissue. NGF seems to play two roles, serving initially to direct the developing nerve processes toward the correct, specific "target" cells with which they must connect, and later being necessary for the continued survival of those nerve cells. During some periods of their development, the types of nerve cells that are affected by NGF—primarily cells outside the brain and spinal cord—die if the factor is not present or if they encounter anti-NGF antibodies.
200410_1-RC_3_17
[ "indicate that conclusions referred to in the second paragraph, though essentially correct, require further verification", "indicate that conclusions referred to in the second paragraph have been undermined by subsequently obtained evidence", "indicate ways in which conclusions referred to in the second paragraph have been further corroborated and refined", "describe subsequent discoveries of substances analogous to the substance discussed in the second paragraph", "indicate that experimental procedures discussed in the second paragraph have been supplanted by more precise techniques described in the third paragraph" ]
2
The primary function of the third paragraph of the passage in relation to the second paragraph is to
The survival of nerve cells, as well as their performance of some specialized functions, is regulated by chemicals known as neurotrophic factors, which are produced in the bodies of animals, including humans. Rita Levi-Montalcini's discovery in the 1950s of the first of these agents, a hormonelike substance now known as NGF, was a crucial development in the history of biochemistry, which led to Levi-Montalcini sharing the Nobel Prize for medicine in 1986. In the mid-1940s, Levi-Montalcini had begun by hypothesizing that many of the immature nerve cells produced in the development of an organism are normally programmed to die. In order to confirm this theory, she conducted research that in 1949 found that, when embryos are in the process of forming their nervous systems, they produce many more nerve cells than are finally required, the number that survives eventually adjusting itself to the volume of tissue to be supplied with nerves. A further phase of the experimentation, which led to Levi-Montalcini's identification of the substance that controls this process, began with her observation that the development of nerves in chick embryos could be stimulated by implanting a certain variety of mouse tumor in the embryos. She theorized that a chemical produced by the tumors was responsible for the observed nerve growth. To investigate this hypothesis, she used the then new technique of tissue culture, by which specific types of body cells can be made to grow outside the organism from which they are derived. Within twenty-four hours, her tissue cultures of chick embryo extracts developed dense halos of nerve tissue near the places in the culture where she had added the mouse tumor. Further research identified a specific substance contributed by the mouse tumors that was responsible for the effects Levi-Montalcini had observed: a protein that she named "nerve growth factor" (NGF). NGF was the first of many cell-growth factors to be found in the bodies of animals. 
Through Levi-Montalcini's work and other subsequent research, it has been determined that this substance is present in many tissues and biological fluids, and that it is especially concentrated in some organs. In developing organisms, nerve cells apparently receive this growth factor locally from the cells of muscles or other organs to which they will form connections for transmission of nerve impulses, and sometimes from supporting cells intermingled with the nerve tissue. NGF seems to play two roles, serving initially to direct the developing nerve processes toward the correct, specific "target" cells with which they must connect, and later being necessary for the continued survival of those nerve cells. During some periods of their development, the types of nerve cells that are affected by NGF—primarily cells outside the brain and spinal cord—die if the factor is not present or if they encounter anti-NGF antibodies.
200410_1-RC_3_18
[ "Nerve cells in excess of those that are needed by the organism in which they develop eventually produce anti-NGF antibodies to suppress the effects of NGF.", "Nerve cells that grow in the absence of NGF are less numerous than, but qualitatively identical to, those that grow in the presence of NGF.", "Few of the nerve cells that connect with target cells toward which NGF directs them are needed by the organism in which they develop.", "Some of the nerve cells that grow in the presence of NGF are eventually converted to other types of living tissue by neurotrophic factors.", "Some of the nerve cells that grow in an embryo do not connect with any particular target cells." ]
4
Information in the passage most strongly supports which one of the following?
The survival of nerve cells, as well as their performance of some specialized functions, is regulated by chemicals known as neurotrophic factors, which are produced in the bodies of animals, including humans. Rita Levi-Montalcini's discovery in the 1950s of the first of these agents, a hormonelike substance now known as NGF, was a crucial development in the history of biochemistry, which led to Levi-Montalcini sharing the Nobel Prize for medicine in 1986. In the mid-1940s, Levi-Montalcini had begun by hypothesizing that many of the immature nerve cells produced in the development of an organism are normally programmed to die. In order to confirm this theory, she conducted research that in 1949 found that, when embryos are in the process of forming their nervous systems, they produce many more nerve cells than are finally required, the number that survives eventually adjusting itself to the volume of tissue to be supplied with nerves. A further phase of the experimentation, which led to Levi-Montalcini's identification of the substance that controls this process, began with her observation that the development of nerves in chick embryos could be stimulated by implanting a certain variety of mouse tumor in the embryos. She theorized that a chemical produced by the tumors was responsible for the observed nerve growth. To investigate this hypothesis, she used the then new technique of tissue culture, by which specific types of body cells can be made to grow outside the organism from which they are derived. Within twenty-four hours, her tissue cultures of chick embryo extracts developed dense halos of nerve tissue near the places in the culture where she had added the mouse tumor. Further research identified a specific substance contributed by the mouse tumors that was responsible for the effects Levi-Montalcini had observed: a protein that she named "nerve growth factor" (NGF). NGF was the first of many cell-growth factors to be found in the bodies of animals. 
Through Levi-Montalcini's work and other subsequent research, it has been determined that this substance is present in many tissues and biological fluids, and that it is especially concentrated in some organs. In developing organisms, nerve cells apparently receive this growth factor locally from the cells of muscles or other organs to which they will form connections for transmission of nerve impulses, and sometimes from supporting cells intermingled with the nerve tissue. NGF seems to play two roles, serving initially to direct the developing nerve processes toward the correct, specific "target" cells with which they must connect, and later being necessary for the continued survival of those nerve cells. During some periods of their development, the types of nerve cells that are affected by NGF—primarily cells outside the brain and spinal cord—die if the factor is not present or if they encounter anti-NGF antibodies.
200410_1-RC_3_19
[ "A certain kind of mouse tumor produces a chemical that stimulates the growth of nerve cells.", "Developing embryos initially grow many more nerve cells than they will eventually require.", "In addition to NGF, there are several other important neurotrophic factors regulating cell survival and function.", "Certain organs contain NGF in concentrations much higher than in the surrounding tissue.", "Certain nerve cells are supplied with NGF by the muscle cells to which they are connected." ]
0
The passage describes a specific experiment that tested which one of the following hypotheses?
The survival of nerve cells, as well as their performance of some specialized functions, is regulated by chemicals known as neurotrophic factors, which are produced in the bodies of animals, including humans. Rita Levi-Montalcini's discovery in the 1950s of the first of these agents, a hormonelike substance now known as NGF, was a crucial development in the history of biochemistry, which led to Levi-Montalcini sharing the Nobel Prize for medicine in 1986. In the mid-1940s, Levi-Montalcini had begun by hypothesizing that many of the immature nerve cells produced in the development of an organism are normally programmed to die. In order to confirm this theory, she conducted research that in 1949 found that, when embryos are in the process of forming their nervous systems, they produce many more nerve cells than are finally required, the number that survives eventually adjusting itself to the volume of tissue to be supplied with nerves. A further phase of the experimentation, which led to Levi-Montalcini's identification of the substance that controls this process, began with her observation that the development of nerves in chick embryos could be stimulated by implanting a certain variety of mouse tumor in the embryos. She theorized that a chemical produced by the tumors was responsible for the observed nerve growth. To investigate this hypothesis, she used the then new technique of tissue culture, by which specific types of body cells can be made to grow outside the organism from which they are derived. Within twenty-four hours, her tissue cultures of chick embryo extracts developed dense halos of nerve tissue near the places in the culture where she had added the mouse tumor. Further research identified a specific substance contributed by the mouse tumors that was responsible for the effects Levi-Montalcini had observed: a protein that she named "nerve growth factor" (NGF). NGF was the first of many cell-growth factors to be found in the bodies of animals. 
Through Levi-Montalcini's work and other subsequent research, it has been determined that this substance is present in many tissues and biological fluids, and that it is especially concentrated in some organs. In developing organisms, nerve cells apparently receive this growth factor locally from the cells of muscles or other organs to which they will form connections for transmission of nerve impulses, and sometimes from supporting cells intermingled with the nerve tissue. NGF seems to play two roles, serving initially to direct the developing nerve processes toward the correct, specific "target" cells with which they must connect, and later being necessary for the continued survival of those nerve cells. During some periods of their development, the types of nerve cells that are affected by NGF—primarily cells outside the brain and spinal cord—die if the factor is not present or if they encounter anti-NGF antibodies.
200410_1-RC_3_20
[ "Some of the effects that the author describes as occurring in Levi-Montalcini's culture of chick embryo extract were due to neurotrophic factors other than NGF.", "Although NGF was the first neurotrophic factor to be identified, some other such factors are now more thoroughly understood.", "In her research in the 1940s and 1950s, Levi-Montalcini identified other neurotrophic factors in addition to NGF.", "Some neurotrophic factors other than NGF perform functions that are not specifically identified in the passage.", "The effects of NGF that Levi-Montalcini noted in her chick embryo experiment are also caused by other neurotrophic factors not discussed in the passage." ]
3
Which one of the following is most strongly supported by the information in the passage?
The proponents of the Modern Movement in architecture considered that, compared with the historical styles that it replaced, Modernist architecture more accurately reflected the functional spirit of twentieth-century technology and was better suited to the newest building methods. It is ironic, then, that the Movement fostered an ideology of design that proved to be at odds with the way buildings were really built. The tenacious adherence of Modernist architects and critics to this ideology was in part responsible for the Movement's decline. Originating in the 1920s as a marginal, almost bohemian art movement, the Modern Movement was never very popular with the public, but this very lack of popular support produced in Modernist architects a high-minded sense of mission—not content merely to interpret the needs of the client, these architects now sought to persuade, to educate, and, if necessary, to dictate. By 1945 the tenets of the Movement had come to dominate mainstream architecture, and by the early 1950s, to dominate architectural criticism—architects whose work seemed not to advance the evolution of the Modern Movement tended to be dismissed by proponents of Modernism. On the other hand, when architects were identified as innovators—as was the case with Otto Wagner, or the young Frank Lloyd Wright—attention was drawn to only those features of their work that were "Modern"; other aspects were conveniently ignored. The decline of the Modern Movement later in the twentieth century occurred partly as a result of Modernist architects' ignorance of building methods, and partly because Modernist architects were reluctant to admit that their concerns were chiefly aesthetic. Moreover, the building industry was evolving in a direction Modernists had not anticipated: it was more specialized and the process of construction was much more fragmented than in the past.
Up until the twentieth century, construction had been carried out by a relatively small number of tradespeople, but as the building industry evolved, buildings came to be built by many specialized subcontractors working independently. The architect's design not only had to accommodate a sequence of independent operations, but now had to reflect the allowable degree of inaccuracy of the different trades. However, one of the chief construction ideals of the Modern Movement was to "honestly" expose structural materials such as steel and concrete. To do this and still produce a visually acceptable interior called for an unrealistically high level of craftsmanship. Exposure of a building's internal structural elements, if it could be achieved at all, could only be accomplished at considerable cost—hence the well-founded reputation of Modern architecture as prohibitively expensive. As Postmodern architects recognized, the need to expose structural elements imposed unnecessary limitations on building design. The unwillingness of architects of the Modern Movement to abandon their ideals contributed to the decline of interest in the Modern Movement.
200410_1-RC_4_21
[ "The Modern Movement declined because its proponents were overly ideological and did not take into account the facts of building construction.", "Rationality was the theoretical basis for the development of the Modern Movement in architecture.", "Changes in architectural design introduced by the Modern Movement inspired the development of modern construction methods.", "The theoretical bases of the Modern Movement in architecture originated in changes in building construction methods.", "Proponents of the Modern Movement in architecture rejected earlier architectural styles because such styles were not functional." ]
0
Which one of the following most accurately summarizes the main idea of the passage?
The proponents of the Modern Movement in architecture considered that, compared with the historical styles that it replaced, Modernist architecture more accurately reflected the functional spirit of twentieth-century technology and was better suited to the newest building methods. It is ironic, then, that the Movement fostered an ideology of design that proved to be at odds with the way buildings were really built. The tenacious adherence of Modernist architects and critics to this ideology was in part responsible for the Movement's decline. Originating in the 1920s as a marginal, almost bohemian art movement, the Modern Movement was never very popular with the public, but this very lack of popular support produced in Modernist architects a high-minded sense of mission—not content merely to interpret the needs of the client, these architects now sought to persuade, to educate, and, if necessary, to dictate. By 1945 the tenets of the Movement had come to dominate mainstream architecture, and by the early 1950s, to dominate architectural criticism—architects whose work seemed not to advance the evolution of the Modern Movement tended to be dismissed by proponents of Modernism. On the other hand, when architects were identified as innovators—as was the case with Otto Wagner, or the young Frank Lloyd Wright—attention was drawn to only those features of their work that were "Modern"; other aspects were conveniently ignored. The decline of the Modern Movement later in the twentieth century occurred partly as a result of Modernist architects' ignorance of building methods, and partly because Modernist architects were reluctant to admit that their concerns were chiefly aesthetic. Moreover, the building industry was evolving in a direction Modernists had not anticipated: it was more specialized and the process of construction was much more fragmented than in the past.
Up until the twentieth century, construction had been carried out by a relatively small number of tradespeople, but as the building industry evolved, buildings came to be built by many specialized subcontractors working independently. The architect's design not only had to accommodate a sequence of independent operations, but now had to reflect the allowable degree of inaccuracy of the different trades. However, one of the chief construction ideals of the Modern Movement was to "honestly" expose structural materials such as steel and concrete. To do this and still produce a visually acceptable interior called for an unrealistically high level of craftsmanship. Exposure of a building's internal structural elements, if it could be achieved at all, could only be accomplished at considerable cost—hence the well-founded reputation of Modern architecture as prohibitively expensive. As Postmodern architects recognized, the need to expose structural elements imposed unnecessary limitations on building design. The unwillingness of architects of the Modern Movement to abandon their ideals contributed to the decline of interest in the Modern Movement.
200410_1-RC_4_22
[ "Clothing produced on an assembly line is less precisely tailored than clothing produced by a single garment maker.", "Handwoven fabric is more beautiful than fabric produced by machine.", "Lenses ground on a machine are less useful than lenses ground by hand.", "Form letters produced by a word processor elicit fewer responses than letters typed individually on a typewriter.", "Furniture produced in a factory is less fashionable than handcrafted furniture." ]
0
Which one of the following is most similar to the relationship described in the passage between the new methods of the building industry and pre-twentieth-century construction?
The proponents of the Modern Movement in architecture considered that, compared with the historical styles that it replaced, Modernist architecture more accurately reflected the functional spirit of twentieth-century technology and was better suited to the newest building methods. It is ironic, then, that the Movement fostered an ideology of design that proved to be at odds with the way buildings were really built. The tenacious adherence of Modernist architects and critics to this ideology was in part responsible for the Movement's decline. Originating in the 1920s as a marginal, almost bohemian art movement, the Modern Movement was never very popular with the public, but this very lack of popular support produced in Modernist architects a high-minded sense of mission—not content merely to interpret the needs of the client, these architects now sought to persuade, to educate, and, if necessary, to dictate. By 1945 the tenets of the Movement had come to dominate mainstream architecture, and by the early 1950s, to dominate architectural criticism—architects whose work seemed not to advance the evolution of the Modern Movement tended to be dismissed by proponents of Modernism. On the other hand, when architects were identified as innovators—as was the case with Otto Wagner, or the young Frank Lloyd Wright—attention was drawn to only those features of their work that were "Modern"; other aspects were conveniently ignored. The decline of the Modern Movement later in the twentieth century occurred partly as a result of Modernist architects' ignorance of building methods, and partly because Modernist architects were reluctant to admit that their concerns were chiefly aesthetic. Moreover, the building industry was evolving in a direction Modernists had not anticipated: it was more specialized and the process of construction was much more fragmented than in the past.
Up until the twentieth century, construction had been carried out by a relatively small number of tradespeople, but as the building industry evolved, buildings came to be built by many specialized subcontractors working independently. The architect's design not only had to accommodate a sequence of independent operations, but now had to reflect the allowable degree of inaccuracy of the different trades. However, one of the chief construction ideals of the Modern Movement was to "honestly" expose structural materials such as steel and concrete. To do this and still produce a visually acceptable interior called for an unrealistically high level of craftsmanship. Exposure of a building's internal structural elements, if it could be achieved at all, could only be accomplished at considerable cost—hence the well-founded reputation of Modern architecture as prohibitively expensive. As Postmodern architects recognized, the need to expose structural elements imposed unnecessary limitations on building design. The unwillingness of architects of the Modern Movement to abandon their ideals contributed to the decline of interest in the Modern Movement.
200410_1-RC_4_23
[ "forbearing", "defensive", "unimpressed", "exasperated", "indifferent" ]
2
With respect to the proponents of the Modern Movement, the author of the passage can best be described as
The proponents of the Modern Movement in architecture considered that, compared with the historical styles that it replaced, Modernist architecture more accurately reflected the functional spirit of twentieth-century technology and was better suited to the newest building methods. It is ironic, then, that the Movement fostered an ideology of design that proved to be at odds with the way buildings were really built. The tenacious adherence of Modernist architects and critics to this ideology was in part responsible for the Movement's decline. Originating in the 1920s as a marginal, almost bohemian art movement, the Modern Movement was never very popular with the public, but this very lack of popular support produced in Modernist architects a high-minded sense of mission—not content merely to interpret the needs of the client, these architects now sought to persuade, to educate, and, if necessary, to dictate. By 1945 the tenets of the Movement had come to dominate mainstream architecture, and by the early 1950s, to dominate architectural criticism—architects whose work seemed not to advance the evolution of the Modern Movement tended to be dismissed by proponents of Modernism. On the other hand, when architects were identified as innovators—as was the case with Otto Wagner, or the young Frank Lloyd Wright—attention was drawn to only those features of their work that were "Modern"; other aspects were conveniently ignored. The decline of the Modern Movement later in the twentieth century occurred partly as a result of Modernist architects' ignorance of building methods, and partly because Modernist architects were reluctant to admit that their concerns were chiefly aesthetic. Moreover, the building industry was evolving in a direction Modernists had not anticipated: it was more specialized and the process of construction was much more fragmented than in the past.
Up until the twentieth century, construction had been carried out by a relatively small number of tradespeople, but as the building industry evolved, buildings came to be built by many specialized subcontractors working independently. The architect's design not only had to accommodate a sequence of independent operations, but now had to reflect the allowable degree of inaccuracy of the different trades. However, one of the chief construction ideals of the Modern Movement was to "honestly" expose structural materials such as steel and concrete. To do this and still produce a visually acceptable interior called for an unrealistically high level of craftsmanship. Exposure of a building's internal structural elements, if it could be achieved at all, could only be accomplished at considerable cost—hence the well-founded reputation of Modern architecture as prohibitively expensive. As Postmodern architects recognized, the need to expose structural elements imposed unnecessary limitations on building design. The unwillingness of architects of the Modern Movement to abandon their ideals contributed to the decline of interest in the Modern Movement.
200410_1-RC_4_24
[ "The repudiation of the ideal by some of these architects undermined its validity.", "The ideal was rarely achieved because of its lack of popular appeal.", "The ideal was unrealistic because most builders were unwilling to attempt it.", "The ideal originated in the work of Otto Wagner and Frank Lloyd Wright.", "The ideal arose from aesthetic rather than practical concerns." ]
4
It can be inferred that the author of the passage believes which one of the following about Modern Movement architects' ideal of exposing structural materials?
The proponents of the Modern Movement in architecture considered that, compared with the historical styles that it replaced, Modernist architecture more accurately reflected the functional spirit of twentieth-century technology and was better suited to the newest building methods. It is ironic, then, that the Movement fostered an ideology of design that proved to be at odds with the way buildings were really built. The tenacious adherence of Modernist architects and critics to this ideology was in part responsible for the Movement's decline. Originating in the 1920s as a marginal, almost bohemian art movement, the Modern Movement was never very popular with the public, but this very lack of popular support produced in Modernist architects a high-minded sense of mission—not content merely to interpret the needs of the client, these architects now sought to persuade, to educate, and, if necessary, to dictate. By 1945 the tenets of the Movement had come to dominate mainstream architecture, and by the early 1950s, to dominate architectural criticism—architects whose work seemed not to advance the evolution of the Modern Movement tended to be dismissed by proponents of Modernism. On the other hand, when architects were identified as innovators—as was the case with Otto Wagner, or the young Frank Lloyd Wright—attention was drawn to only those features of their work that were "Modern"; other aspects were conveniently ignored. The decline of the Modern Movement later in the twentieth century occurred partly as a result of Modernist architects' ignorance of building methods, and partly because Modernist architects were reluctant to admit that their concerns were chiefly aesthetic. Moreover, the building industry was evolving in a direction Modernists had not anticipated: it was more specialized and the process of construction was much more fragmented than in the past.
Up until the twentieth century, construction had been carried out by a relatively small number of tradespeople, but as the building industry evolved, buildings came to be built by many specialized subcontractors working independently. The architect's design not only had to accommodate a sequence of independent operations, but now had to reflect the allowable degree of inaccuracy of the different trades. However, one of the chief construction ideals of the Modern Movement was to "honestly" expose structural materials such as steel and concrete. To do this and still produce a visually acceptable interior called for an unrealistically high level of craftsmanship. Exposure of a building's internal structural elements, if it could be achieved at all, could only be accomplished at considerable cost—hence the well-founded reputation of Modern architecture as prohibitively expensive. As Postmodern architects recognized, the need to expose structural elements imposed unnecessary limitations on building design. The unwillingness of architects of the Modern Movement to abandon their ideals contributed to the decline of interest in the Modern Movement.
200410_1-RC_4_25
[ "\"functional spirit\" (lines 4–5)", "\"tended\" (line 24)", "\"innovators\" (line 26)", "\"conveniently\" (line 30)", "\"degree of inaccuracy\" (line 47)" ]
3
Which one of the following, in its context in the passage, most clearly reveals the attitude of the author toward the proponents of the Modern Movement?
The proponents of the Modern Movement in architecture considered that, compared with the historical styles that it replaced, Modernist architecture more accurately reflected the functional spirit of twentieth-century technology and was better suited to the newest building methods. It is ironic, then, that the Movement fostered an ideology of design that proved to be at odds with the way buildings were really built. The tenacious adherence of Modernist architects and critics to this ideology was in part responsible for the Movement's decline. Originating in the 1920s as a marginal, almost bohemian art movement, the Modern Movement was never very popular with the public, but this very lack of popular support produced in Modernist architects a high-minded sense of mission—not content merely to interpret the needs of the client, these architects now sought to persuade, to educate, and, if necessary, to dictate. By 1945 the tenets of the Movement had come to dominate mainstream architecture, and by the early 1950s, to dominate architectural criticism—architects whose work seemed not to advance the evolution of the Modern Movement tended to be dismissed by proponents of Modernism. On the other hand, when architects were identified as innovators—as was the case with Otto Wagner, or the young Frank Lloyd Wright—attention was drawn to only those features of their work that were "Modern"; other aspects were conveniently ignored. The decline of the Modern Movement later in the twentieth century occurred partly as a result of Modernist architects' ignorance of building methods, and partly because Modernist architects were reluctant to admit that their concerns were chiefly aesthetic. Moreover, the building industry was evolving in a direction Modernists had not anticipated: it was more specialized and the process of construction was much more fragmented than in the past.
Up until the twentieth century, construction had been carried out by a relatively small number of tradespeople, but as the building industry evolved, buildings came to be built by many specialized subcontractors working independently. The architect's design not only had to accommodate a sequence of independent operations, but now had to reflect the allowable degree of inaccuracy of the different trades. However, one of the chief construction ideals of the Modern Movement was to "honestly" expose structural materials such as steel and concrete. To do this and still produce a visually acceptable interior called for an unrealistically high level of craftsmanship. Exposure of a building's internal structural elements, if it could be achieved at all, could only be accomplished at considerable cost—hence the well-founded reputation of Modern architecture as prohibitively expensive. As Postmodern architects recognized, the need to expose structural elements imposed unnecessary limitations on building design. The unwillingness of architects of the Modern Movement to abandon their ideals contributed to the decline of interest in the Modern Movement.
200410_1-RC_4_26
[ "innovative architects whose work was not immediately appreciated by the public", "architects whom proponents of the Modern Movement claimed represented the movement", "architects whose work helped to popularize the Modern Movement", "architects who generally attempted to interpret the needs of their clients, rather than dictating to them", "architects whose early work seemed to architects of the Modern Movement to be at odds with the principles of Modernism" ]
1
The author of the passage mentions Otto Wagner and the young Frank Lloyd Wright (lines 27–28) primarily as examples of
The proponents of the Modern Movement in architecture considered that, compared with the historical styles that it replaced, Modernist architecture more accurately reflected the functional spirit of twentieth-century technology and was better suited to the newest building methods. It is ironic, then, that the Movement fostered an ideology of design that proved to be at odds with the way buildings were really built. The tenacious adherence of Modernist architects and critics to this ideology was in part responsible for the Movement's decline. Originating in the 1920s as a marginal, almost bohemian art movement, the Modern Movement was never very popular with the public, but this very lack of popular support produced in Modernist architects a high-minded sense of mission—not content merely to interpret the needs of the client, these architects now sought to persuade, to educate, and, if necessary, to dictate. By 1945 the tenets of the Movement had come to dominate mainstream architecture, and by the early 1950s, to dominate architectural criticism—architects whose work seemed not to advance the evolution of the Modern Movement tended to be dismissed by proponents of Modernism. On the other hand, when architects were identified as innovators—as was the case with Otto Wagner, or the young Frank Lloyd Wright—attention was drawn to only those features of their work that were "Modern"; other aspects were conveniently ignored. The decline of the Modern Movement later in the twentieth century occurred partly as a result of Modernist architects' ignorance of building methods, and partly because Modernist architects were reluctant to admit that their concerns were chiefly aesthetic. Moreover, the building industry was evolving in a direction Modernists had not anticipated: it was more specialized and the process of construction was much more fragmented than in the past.
Up until the twentieth century, construction had been carried out by a relatively small number of tradespeople, but as the building industry evolved, buildings came to be built by many specialized subcontractors working independently. The architect's design not only had to accommodate a sequence of independent operations, but now had to reflect the allowable degree of inaccuracy of the different trades. However, one of the chief construction ideals of the Modern Movement was to "honestly" expose structural materials such as steel and concrete. To do this and still produce a visually acceptable interior called for an unrealistically high level of craftsmanship. Exposure of a building's internal structural elements, if it could be achieved at all, could only be accomplished at considerable cost—hence the well-founded reputation of Modern architecture as prohibitively expensive. As Postmodern architects recognized, the need to expose structural elements imposed unnecessary limitations on building design. The unwillingness of architects of the Modern Movement to abandon their ideals contributed to the decline of interest in the Modern Movement.
200410_1-RC_4_27
[ "analyzing the failure of a movement", "predicting the future course of a movement", "correcting a misunderstanding about a movement", "anticipating possible criticism of a movement", "contrasting incompatible viewpoints about a movement" ]
0
The author of the passage is primarily concerned with
A number of natural disasters in recent years—such as earthquakes, major storms, and floods—that have affected large populations of people have forced relief agencies, communities, and entire nations to reevaluate the ways in which they respond in the aftermaths of such disasters. They believe that traditional ways of dealing with disasters have proved ineffective on several occasions and, in some cases, have been destructive rather than helpful to the communities hit by these sudden and unexpected crises. Traditionally, relief has been based on the premise that aid in postdisaster situations is most effective if given in the immediate aftermath of an event. A high priority also has been placed on the quantity of aid materials, programs, and personnel, in the belief that the negative impact of a disaster can be counteracted by a large and rapid infusion of aid. Critics claim that such an approach often creates a new set of difficulties for already hard-hit communities. Teams of uninvited experts and personnel—all of whom need food and shelter—as well as uncoordinated shipments of goods and the establishment of programs inappropriate to local needs can quickly lead to a secondary "disaster" as already strained local infrastructures break down under the pressure of this large influx of resources. In some instances, tons of food have disappeared into local markets for resale, and, with inadequate accounting procedures, billions of dollars in aid money have gone unaccounted for. To develop a more effective approach, experts recommend shifting the focus to the long term. A response that produces lasting benefit, these experts claim, requires that community members define the form and method of aid that are most appropriate to their needs.
Grassroots dialogue designed to facilitate preparedness should be encouraged in disaster-prone communities long before the onset of a crisis, so that in a disaster's immediate aftermath, relief agencies can rely on members of affected communities to take the lead. The practical effect of this approach is that aid takes the form of a response to the stated desires of those affected rather than an immediate, though less informed, action on their behalf. Though this proposal appears sound, its success depends on how an important constituency, namely donors, will respond. Historically, donors—individuals, corporations, foundations, and governmental bodies—have been most likely to respond only in the immediate aftermath of a crisis. However, communities affected by disasters typically have several long-term needs such as the rebuilding of houses and roads, and thus the months and years after a disaster are also crucial. Donors that incorporate dialogue with members of affected communities into their relief plans could foster strategies that more efficiently utilize immediate aid as well as provide for the difficulties facing communities in the years after a disaster.
200412_2-RC_1_1
[ "The most useful response to a natural disaster is one in which relief agencies allow victims to dictate the type of aid they receive, which will most likely result in the allocation of long-term rather than immediate aid.", "The quantity of aid given after a natural disaster reflects the desires of donors more than the needs of recipients, and in some cases great quantities of aid are destructive rather than helpful.", "Aid that focuses on long-term needs is difficult to organize because, by its very definition, it requires that relief agencies focus on constructing an adequate dialogue among recipients, providers, and donors.", "Disaster relief efforts have been marked by inefficiencies that attest to the need for donors and relief agencies to communicate with affected communities concerning how best to meet not only their short-term but also their long-term needs.", "Though the years after a disaster are crucial for communities affected by disasters, the days and weeks immediately after a disaster are what capture the attention of donors, thus forcing relief agencies into the role of mediators between the two extremes." ]
3
Which one of the following most accurately expresses the main point of the passage?
A number of natural disasters in recent years—such as earthquakes, major storms, and floods—that have affected large populations of people have forced relief agencies, communities, and entire nations to reevaluate the ways in which they respond in the aftermaths of such disasters. They believe that traditional ways of dealing with disasters have proved ineffective on several occasions and, in some cases, have been destructive rather than helpful to the communities hit by these sudden and unexpected crises. Traditionally, relief has been based on the premise that aid in postdisaster situations is most effective if given in the immediate aftermath of an event. A high priority also has been placed on the quantity of aid materials, programs, and personnel, in the belief that the negative impact of a disaster can be counteracted by a large and rapid infusion of aid. Critics claim that such an approach often creates a new set of difficulties for already hard-hit communities. Teams of uninvited experts and personnel—all of whom need food and shelter—as well as uncoordinated shipments of goods and the establishment of programs inappropriate to local needs can quickly lead to a secondary "disaster" as already strained local infrastructures break down under the pressure of this large influx of resources. In some instances, tons of food have disappeared into local markets for resale, and, with inadequate accounting procedures, billions of dollars in aid money have gone unaccounted for. To develop a more effective approach, experts recommend shifting the focus to the long term. A response that produces lasting benefit, these experts claim, requires that community members define the form and method of aid that are most appropriate to their needs.
Grassroots dialogue designed to facilitate preparedness should be encouraged in disaster-prone communities long before the onset of a crisis, so that in a disaster's immediate aftermath, relief agencies can rely on members of affected communities to take the lead. The practical effect of this approach is that aid takes the form of a response to the stated desires of those affected rather than an immediate, though less informed, action on their behalf. Though this proposal appears sound, its success depends on how an important constituency, namely donors, will respond. Historically, donors—individuals, corporations, foundations, and governmental bodies—have been most likely to respond only in the immediate aftermath of a crisis. However, communities affected by disasters typically have several long-term needs such as the rebuilding of houses and roads, and thus the months and years after a disaster are also crucial. Donors that incorporate dialogue with members of affected communities into their relief plans could foster strategies that more efficiently utilize immediate aid as well as provide for the difficulties facing communities in the years after a disaster.
200412_2-RC_1_2
[ "After a flood, local officials reject three more expensive proposals before finally accepting a contractor's plan to control a local river with a dam.", "Following a plan developed several years ago by a relief agency in consultation with donors and community members, the relief agency provides temporary shelter immediately after a flood and later helps rebuild houses destroyed by the flood.", "Immediately after a flood, several different relief agencies, each acting independently, send large shipments of goods to the affected community along with teams of highly motivated but untrained volunteers to coordinate the distribution of these goods.", "At the request of its donors, a private relief agency delays providing any assistance to victims of a flood until after the agency conducts a thorough study of the types of aid most likely to help the affected community in the long run.", "After a flood, government officials persuade local companies to increase their corporate giving levels and to direct more aid to the surrounding community." ]
1
Which one of the following examples best illustrates the type of disaster response recommended by the experts mentioned in the third paragraph?
A number of natural disasters in recent years—such as earthquakes, major storms, and floods—that have affected large populations of people have forced relief agencies, communities, and entire nations to reevaluate the ways in which they respond in the aftermaths of such disasters. They believe that traditional ways of dealing with disasters have proved ineffective on several occasions and, in some cases, have been destructive rather than helpful to the communities hit by these sudden and unexpected crises. Traditionally, relief has been based on the premise that aid in postdisaster situations is most effective if given in the immediate aftermath of an event. A high priority also has been placed on the quantity of aid materials, programs, and personnel, in the belief that the negative impact of a disaster can be counteracted by a large and rapid infusion of aid. Critics claim that such an approach often creates a new set of difficulties for already hard-hit communities. Teams of uninvited experts and personnel—all of whom need food and shelter—as well as uncoordinated shipments of goods and the establishment of programs inappropriate to local needs can quickly lead to a secondary "disaster" as already strained local infrastructures break down under the pressure of this large influx of resources. In some instances, tons of food have disappeared into local markets for resale, and, with inadequate accounting procedures, billions of dollars in aid money have gone unaccounted for. To develop a more effective approach, experts recommend shifting the focus to the long term. A response that produces lasting benefit, these experts claim, requires that community members define the form and method of aid that are most appropriate to their needs.
Grassroots dialogue designed to facilitate preparedness should be encouraged in disaster-prone communities long before the onset of a crisis, so that in a disaster's immediate aftermath, relief agencies can rely on members of affected communities to take the lead. The practical effect of this approach is that aid takes the form of a response to the stated desires of those affected rather than an immediate, though less informed, action on their behalf. Though this proposal appears sound, its success depends on how an important constituency, namely donors, will respond. Historically, donors—individuals, corporations, foundations, and governmental bodies—have been most likely to respond only in the immediate aftermath of a crisis. However, communities affected by disasters typically have several long-term needs such as the rebuilding of houses and roads, and thus the months and years after a disaster are also crucial. Donors that incorporate dialogue with members of affected communities into their relief plans could foster strategies that more efficiently utilize immediate aid as well as provide for the difficulties facing communities in the years after a disaster.
200412_2-RC_1_3
[ "Disaster relief plans are appropriate only for disaster-prone communities.", "When communities affected by disasters have articulated their long-term needs, donors typically have been responsive to those needs.", "Donors would likely provide more disaster relief aid if they had confidence that it would be used more effectively than aid currently is.", "It is not the amount of aid but rather the way this aid is managed that is the source of current problems in disaster relief.", "Few communities affected by disasters experience a crucial need for short-term aid." ]
3
The author of the passage would be most likely to agree with which one of the following statements?
A number of natural disasters in recent years—such as earthquakes, major storms, and floods—that have affected large populations of people have forced relief agencies, communities, and entire nations to reevaluate the ways in which they respond in the aftermaths of such disasters. They believe that traditional ways of dealing with disasters have proved ineffective on several occasions and, in some cases, have been destructive rather than helpful to the communities hit by these sudden and unexpected crises. Traditionally, relief has been based on the premise that aid in postdisaster situations is most effective if given in the immediate aftermath of an event. A high priority also has been placed on the quantity of aid materials, programs, and personnel, in the belief that the negative impact of a disaster can be counteracted by a large and rapid infusion of aid. Critics claim that such an approach often creates a new set of difficulties for already hard-hit communities. Teams of uninvited experts and personnel—all of whom need food and shelter—as well as uncoordinated shipments of goods and the establishment of programs inappropriate to local needs can quickly lead to a secondary "disaster" as already strained local infrastructures break down under the pressure of this large influx of resources. In some instances, tons of food have disappeared into local markets for resale, and, with inadequate accounting procedures, billions of dollars in aid money have gone unaccounted for. To develop a more effective approach, experts recommend shifting the focus to the long term. A response that produces lasting benefit, these experts claim, requires that community members define the form and method of aid that are most appropriate to their needs.
Grassroots dialogue designed to facilitate preparedness should be encouraged in disaster-prone communities long before the onset of a crisis, so that in a disaster's immediate aftermath, relief agencies can rely on members of affected communities to take the lead. The practical effect of this approach is that aid takes the form of a response to the stated desires of those affected rather than an immediate, though less informed, action on their behalf. Though this proposal appears sound, its success depends on how an important constituency, namely donors, will respond. Historically, donors—individuals, corporations, foundations, and governmental bodies—have been most likely to respond only in the immediate aftermath of a crisis. However, communities affected by disasters typically have several long-term needs such as the rebuilding of houses and roads, and thus the months and years after a disaster are also crucial. Donors that incorporate dialogue with members of affected communities into their relief plans could foster strategies that more efficiently utilize immediate aid as well as provide for the difficulties facing communities in the years after a disaster.
200412_2-RC_1_4
[ "point to an influential group of people who have resisted changes to traditional disaster response efforts", "demonstrate that the needs of donors and aid recipients contrast profoundly on the issue of disaster response", "show that implementing an effective disaster relief program requires a new approach on the part of donors as well as relief agencies", "illustrate that relief agencies and donors share similar views on the goals of disaster response but disagree on the proper response methods", "concede that the reformation of disaster relief programs, while necessary, is unlikely to take place because of the disagreements among donors" ]
2
The author discusses donors in the final paragraph primarily in order to
A number of natural disasters in recent years—such as earthquakes, major storms, and floods—that have affected large populations of people have forced relief agencies, communities, and entire nations to reevaluate the ways in which they respond in the aftermaths of such disasters. They believe that traditional ways of dealing with disasters have proved ineffective on several occasions and, in some cases, have been destructive rather than helpful to the communities hit by these sudden and unexpected crises. Traditionally, relief has been based on the premise that aid in postdisaster situations is most effective if given in the immediate aftermath of an event. A high priority also has been placed on the quantity of aid materials, programs, and personnel, in the belief that the negative impact of a disaster can be counteracted by a large and rapid infusion of aid. Critics claim that such an approach often creates a new set of difficulties for already hard-hit communities. Teams of uninvited experts and personnel—all of whom need food and shelter—as well as uncoordinated shipments of goods and the establishment of programs inappropriate to local needs can quickly lead to a secondary "disaster" as already strained local infrastructures break down under the pressure of this large influx of resources. In some instances, tons of food have disappeared into local markets for resale, and, with inadequate accounting procedures, billions of dollars in aid money have gone unaccounted for. To develop a more effective approach, experts recommend shifting the focus to the long term. A response that produces lasting benefit, these experts claim, requires that community members define the form and method of aid that are most appropriate to their needs.
Grassroots dialogue designed to facilitate preparedness should be encouraged in disaster-prone communities long before the onset of a crisis, so that in a disaster's immediate aftermath, relief agencies can rely on members of affected communities to take the lead. The practical effect of this approach is that aid takes the form of a response to the stated desires of those affected rather than an immediate, though less informed, action on their behalf. Though this proposal appears sound, its success depends on how an important constituency, namely donors, will respond. Historically, donors—individuals, corporations, foundations, and governmental bodies—have been most likely to respond only in the immediate aftermath of a crisis. However, communities affected by disasters typically have several long-term needs such as the rebuilding of houses and roads, and thus the months and years after a disaster are also crucial. Donors that incorporate dialogue with members of affected communities into their relief plans could foster strategies that more efficiently utilize immediate aid as well as provide for the difficulties facing communities in the years after a disaster.
200412_2-RC_1_5
[ "a development that would benefit affected communities as well as aid providers who have a shared interest in relief efforts that are effective and well managed", "a change that would help communities meet their future needs more effectively but would inevitably result in a detrimental reduction of short-term aid like food and medicine", "an approach that would enable aid recipients to meet their long-term needs but which would not address the mismanagement that hampers short-term relief efforts", "a movement that, while well intentioned, will likely be undermined by the unwillingness of donors to accept new methods of delivering aid", "the beginning of a trend in which aid recipients play a major role after a disaster and donors play a minor role, reversing the structure of traditional aid programs" ]
0
It can be inferred from the passage that the author would be most likely to view a shift toward a more long-term perspective in disaster relief efforts as which one of the following?
A number of natural disasters in recent years—such as earthquakes, major storms, and floods—that have affected large populations of people have forced relief agencies, communities, and entire nations to reevaluate the ways in which they respond in the aftermaths of such disasters. They believe that traditional ways of dealing with disasters have proved ineffective on several occasions and, in some cases, have been destructive rather than helpful to the communities hit by these sudden and unexpected crises. Traditionally, relief has been based on the premise that aid in postdisaster situations is most effective if given in the immediate aftermath of an event. A high priority also has been placed on the quantity of aid materials, programs, and personnel, in the belief that the negative impact of a disaster can be counteracted by a large and rapid infusion of aid. Critics claim that such an approach often creates a new set of difficulties for already hard-hit communities. Teams of uninvited experts and personnel—all of whom need food and shelter—as well as uncoordinated shipments of goods and the establishment of programs inappropriate to local needs can quickly lead to a secondary "disaster" as already strained local infrastructures break down under the pressure of this large influx of resources. In some instances, tons of food have disappeared into local markets for resale, and, with inadequate accounting procedures, billions of dollars in aid money have gone unaccounted for. To develop a more effective approach, experts recommend shifting the focus to the long term. A response that produces lasting benefit, these experts claim, requires that community members define the form and method of aid that are most appropriate to their needs.
Grassroots dialogue designed to facilitate preparedness should be encouraged in disaster-prone communities long before the onset of a crisis, so that in a disaster's immediate aftermath, relief agencies can rely on members of affected communities to take the lead. The practical effect of this approach is that aid takes the form of a response to the stated desires of those affected rather than an immediate, though less informed, action on their behalf. Though this proposal appears sound, its success depends on how an important constituency, namely donors, will respond. Historically, donors—individuals, corporations, foundations, and governmental bodies—have been most likely to respond only in the immediate aftermath of a crisis. However, communities affected by disasters typically have several long-term needs such as the rebuilding of houses and roads, and thus the months and years after a disaster are also crucial. Donors that incorporate dialogue with members of affected communities into their relief plans could foster strategies that more efficiently utilize immediate aid as well as provide for the difficulties facing communities in the years after a disaster.
200412_2-RC_1_6
[ "Although inefficiencies have long been present in international disaster relief programs, they have been aggravated in recent years by increased demands on relief agencies' limited resources.", "Local communities had expressed little interest in taking responsibility for their own preparedness prior to the most recent years, thus leaving donors and relief agencies unaware of potential problems.", "Numerous relief efforts in the years prior to the most recent provided such vast quantities of aid that most needs were met despite evidence of inefficiency and mismanagement, and few recipient communities questioned traditional disaster response methods.", "Members of communities affected by disasters have long argued that they should set the agenda for relief efforts, but relief agencies have only recently come to recognize the validity of their arguments.", "A number of wasteful relief efforts in the most recent years provided dramatic illustrations of aid programs that were implemented by donors and agencies with little accountability to populations affected by disasters." ]
4
Which one of the following inferences about natural disasters and relief efforts is most strongly supported by the passage?
The moral precepts embodied in the Hippocratic oath, which physicians standardly affirm upon beginning medical practice, have long been considered the immutable bedrock of medical ethics, binding physicians in a moral community that reaches across temporal, cultural, and national barriers. Until very recently the promises expressed in that oath—for example to act primarily for the benefit and not the harm of patients and to conform to various standards of professional conduct including the preservation of patients' confidences—even seemed impervious to the powerful scientific and societal forces challenging it. Critics argue that the oath is outdated; its fixed moral rules, they say, are incompatible with more flexible modern ideas about ethics. It also encourages doctors to adopt an authoritarian stance that depreciates the privacy and autonomy of the patient. Furthermore, its emphasis on the individual patient without regard for the wider social context frustrates the physician's emerging role as gatekeeper in managed care plans and impedes competitive market forces, which, some critics believe, should determine the quality, price, and distribution of health care as they do those of other commodities. The oath is also faulted for its omissions: its failure to mention such vital contemporary issues as human experimentation and the relationships of physicians to other health professionals. Some respected opponents even cite historical doubts about the oath's origin and authorship, presenting evidence that it was formulated by a small group of reformist physicians in ancient Greece and that for centuries it was not uniformly accepted by medical practitioners. This historical issue may be dismissed at the outset as irrelevant to the oath's current appropriateness. 
Regardless of the specific origin of its text—which, admittedly, is at best uncertain—those in each generation who critically appraise its content and judge it to express valid principles of medical ethics become, in a more meaningful sense, its authors. More importantly, even the more substantive, morally based arguments concerning contemporary values and newly relevant issues cannot negate the patients' need for assurance that physicians will pursue appropriate goals in treatment in accordance with generally acceptable standards of professionalism. To fulfill that need, the core value of beneficence—which does not actually conflict with most reformers' purposes—should be retained, with adaptations at the oath's periphery by some combination of revision, supplementation, and modern interpretation. In fact, there is already a tradition of peripheral reinterpretation of traditional wording; for example, the oath's vaguely and archaically worded proscription against "cutting for the stone" may once have served to forbid surgery, but with today's safer and more effective surgical techniques it is understood to function as a promise to practice within the confines of one's expertise, which remains a necessary safeguard for patients' safety and well-being.
200412_2-RC_2_7
[ "The Hippocratic oath ought to be reevaluated carefully, with special regard to the role of the physician, to make certain that its fundamental moral rules still apply today.", "Despite recent criticisms of the Hippocratic oath, some version of it that will continue to assure patients of physicians' professionalism and beneficent treatment ought to be retained.", "Codes of ethics developed for one society at a particular point in history may lose some specific application in later societies but can retain a useful fundamental moral purpose.", "Even the criticisms of the Hippocratic oath based on contemporary values and newly relevant medical issues cannot negate patients' need for assurance.", "Modern ideas about ethics, especially medical ethics, obviate the need for and appropriateness of a single code of medical ethics like the Hippocratic oath." ]
1
Which one of the following most accurately states the main point of the passage?
The moral precepts embodied in the Hippocratic oath, which physicians standardly affirm upon beginning medical practice, have long been considered the immutable bedrock of medical ethics, binding physicians in a moral community that reaches across temporal, cultural, and national barriers. Until very recently the promises expressed in that oath—for example to act primarily for the benefit and not the harm of patients and to conform to various standards of professional conduct including the preservation of patients' confidences—even seemed impervious to the powerful scientific and societal forces challenging it. Critics argue that the oath is outdated; its fixed moral rules, they say, are incompatible with more flexible modern ideas about ethics. It also encourages doctors to adopt an authoritarian stance that depreciates the privacy and autonomy of the patient. Furthermore, its emphasis on the individual patient without regard for the wider social context frustrates the physician's emerging role as gatekeeper in managed care plans and impedes competitive market forces, which, some critics believe, should determine the quality, price, and distribution of health care as they do those of other commodities. The oath is also faulted for its omissions: its failure to mention such vital contemporary issues as human experimentation and the relationships of physicians to other health professionals. Some respected opponents even cite historical doubts about the oath's origin and authorship, presenting evidence that it was formulated by a small group of reformist physicians in ancient Greece and that for centuries it was not uniformly accepted by medical practitioners. This historical issue may be dismissed at the outset as irrelevant to the oath's current appropriateness. 
Regardless of the specific origin of its text—which, admittedly, is at best uncertain—those in each generation who critically appraise its content and judge it to express valid principles of medical ethics become, in a more meaningful sense, its authors. More importantly, even the more substantive, morally based arguments concerning contemporary values and newly relevant issues cannot negate the patients' need for assurance that physicians will pursue appropriate goals in treatment in accordance with generally acceptable standards of professionalism. To fulfill that need, the core value of beneficence—which does not actually conflict with most reformers' purposes—should be retained, with adaptations at the oath's periphery by some combination of revision, supplementation, and modern interpretation. In fact, there is already a tradition of peripheral reinterpretation of traditional wording; for example, the oath's vaguely and archaically worded proscription against "cutting for the stone" may once have served to forbid surgery, but with today's safer and more effective surgical techniques it is understood to function as a promise to practice within the confines of one's expertise, which remains a necessary safeguard for patients' safety and well-being.
200412_2-RC_2_8
[ "A general principle is described, criticisms of the principle are made, and modifications of the principle are made in light of these criticisms.", "A set of criticisms is put forward, and possible replies to those criticisms are considered and dismissed.", "The history of a certain code of conduct is discussed, criticisms of the code are mentioned and partially endorsed, and the code is modified as a response.", "A general principle is formulated, a partial defense of that principle is presented, and criticisms of the principle are discussed and rejected.", "The tradition surrounding a certain code of conduct is discussed, criticisms of that code are mentioned, and a general defense of the code is presented." ]
4
Which one of the following most accurately describes the organization of the material presented in the passage?
The moral precepts embodied in the Hippocratic oath, which physicians standardly affirm upon beginning medical practice, have long been considered the immutable bedrock of medical ethics, binding physicians in a moral community that reaches across temporal, cultural, and national barriers. Until very recently the promises expressed in that oath—for example to act primarily for the benefit and not the harm of patients and to conform to various standards of professional conduct including the preservation of patients' confidences—even seemed impervious to the powerful scientific and societal forces challenging it. Critics argue that the oath is outdated; its fixed moral rules, they say, are incompatible with more flexible modern ideas about ethics. It also encourages doctors to adopt an authoritarian stance that depreciates the privacy and autonomy of the patient. Furthermore, its emphasis on the individual patient without regard for the wider social context frustrates the physician's emerging role as gatekeeper in managed care plans and impedes competitive market forces, which, some critics believe, should determine the quality, price, and distribution of health care as they do those of other commodities. The oath is also faulted for its omissions: its failure to mention such vital contemporary issues as human experimentation and the relationships of physicians to other health professionals. Some respected opponents even cite historical doubts about the oath's origin and authorship, presenting evidence that it was formulated by a small group of reformist physicians in ancient Greece and that for centuries it was not uniformly accepted by medical practitioners. This historical issue may be dismissed at the outset as irrelevant to the oath's current appropriateness. 
Regardless of the specific origin of its text—which, admittedly, is at best uncertain—those in each generation who critically appraise its content and judge it to express valid principles of medical ethics become, in a more meaningful sense, its authors. More importantly, even the more substantive, morally based arguments concerning contemporary values and newly relevant issues cannot negate the patients' need for assurance that physicians will pursue appropriate goals in treatment in accordance with generally acceptable standards of professionalism. To fulfill that need, the core value of beneficence—which does not actually conflict with most reformers' purposes—should be retained, with adaptations at the oath's periphery by some combination of revision, supplementation, and modern interpretation. In fact, there is already a tradition of peripheral reinterpretation of traditional wording; for example, the oath's vaguely and archaically worded proscription against "cutting for the stone" may once have served to forbid surgery, but with today's safer and more effective surgical techniques it is understood to function as a promise to practice within the confines of one's expertise, which remains a necessary safeguard for patients' safety and well-being.
200412_2-RC_2_9
[ "creation of a community of physicians from all eras, nations, and cultures", "constant improvement and advancement of medical science", "provision of medical care to all individuals regardless of ability to pay", "physician action for the benefit of patients", "observance of established moral rules even in the face of challenging societal forces" ]
3
The passage cites which one of the following as a value at the heart of the Hippocratic oath that should present no difficulty to most reformers?
The moral precepts embodied in the Hippocratic oath, which physicians standardly affirm upon beginning medical practice, have long been considered the immutable bedrock of medical ethics, binding physicians in a moral community that reaches across temporal, cultural, and national barriers. Until very recently the promises expressed in that oath—for example to act primarily for the benefit and not the harm of patients and to conform to various standards of professional conduct including the preservation of patients' confidences—even seemed impervious to the powerful scientific and societal forces challenging it. Critics argue that the oath is outdated; its fixed moral rules, they say, are incompatible with more flexible modern ideas about ethics. It also encourages doctors to adopt an authoritarian stance that depreciates the privacy and autonomy of the patient. Furthermore, its emphasis on the individual patient without regard for the wider social context frustrates the physician's emerging role as gatekeeper in managed care plans and impedes competitive market forces, which, some critics believe, should determine the quality, price, and distribution of health care as they do those of other commodities. The oath is also faulted for its omissions: its failure to mention such vital contemporary issues as human experimentation and the relationships of physicians to other health professionals. Some respected opponents even cite historical doubts about the oath's origin and authorship, presenting evidence that it was formulated by a small group of reformist physicians in ancient Greece and that for centuries it was not uniformly accepted by medical practitioners. This historical issue may be dismissed at the outset as irrelevant to the oath's current appropriateness. 
Regardless of the specific origin of its text—which, admittedly, is at best uncertain—those in each generation who critically appraise its content and judge it to express valid principles of medical ethics become, in a more meaningful sense, its authors. More importantly, even the more substantive, morally based arguments concerning contemporary values and newly relevant issues cannot negate the patients' need for assurance that physicians will pursue appropriate goals in treatment in accordance with generally acceptable standards of professionalism. To fulfill that need, the core value of beneficence—which does not actually conflict with most reformers' purposes—should be retained, with adaptations at the oath's periphery by some combination of revision, supplementation, and modern interpretation. In fact, there is already a tradition of peripheral reinterpretation of traditional wording; for example, the oath's vaguely and archaically worded proscription against "cutting for the stone" may once have served to forbid surgery, but with today's safer and more effective surgical techniques it is understood to function as a promise to practice within the confines of one's expertise, which remains a necessary safeguard for patients' safety and well-being.
200412_2-RC_2_10
[ "affirm society's continuing need for a code embodying certain principles", "chastise critics within the medical community who support reinterpretation of a code embodying certain principles", "argue that historical doubts about the origin of a certain code are irrelevant to its interpretation", "outline the pros and cons of revising a code embodying certain principles", "propose a revision of a code embodying certain principles that will increase the code's applicability to modern times" ]
0
The author's primary purpose in the passage is to
The moral precepts embodied in the Hippocratic oath, which physicians standardly affirm upon beginning medical practice, have long been considered the immutable bedrock of medical ethics, binding physicians in a moral community that reaches across temporal, cultural, and national barriers. Until very recently the promises expressed in that oath—for example to act primarily for the benefit and not the harm of patients and to conform to various standards of professional conduct including the preservation of patients' confidences—even seemed impervious to the powerful scientific and societal forces challenging it. Critics argue that the oath is outdated; its fixed moral rules, they say, are incompatible with more flexible modern ideas about ethics. It also encourages doctors to adopt an authoritarian stance that depreciates the privacy and autonomy of the patient. Furthermore, its emphasis on the individual patient without regard for the wider social context frustrates the physician's emerging role as gatekeeper in managed care plans and impedes competitive market forces, which, some critics believe, should determine the quality, price, and distribution of health care as they do those of other commodities. The oath is also faulted for its omissions: its failure to mention such vital contemporary issues as human experimentation and the relationships of physicians to other health professionals. Some respected opponents even cite historical doubts about the oath's origin and authorship, presenting evidence that it was formulated by a small group of reformist physicians in ancient Greece and that for centuries it was not uniformly accepted by medical practitioners. This historical issue may be dismissed at the outset as irrelevant to the oath's current appropriateness. 
Regardless of the specific origin of its text—which, admittedly, is at best uncertain—those in each generation who critically appraise its content and judge it to express valid principles of medical ethics become, in a more meaningful sense, its authors. More importantly, even the more substantive, morally based arguments concerning contemporary values and newly relevant issues cannot negate the patients' need for assurance that physicians will pursue appropriate goals in treatment in accordance with generally acceptable standards of professionalism. To fulfill that need, the core value of beneficence—which does not actually conflict with most reformers' purposes—should be retained, with adaptations at the oath's periphery by some combination of revision, supplementation, and modern interpretation. In fact, there is already a tradition of peripheral reinterpretation of traditional wording; for example, the oath's vaguely and archaically worded proscription against "cutting for the stone" may once have served to forbid surgery, but with today's safer and more effective surgical techniques it is understood to function as a promise to practice within the confines of one's expertise, which remains a necessary safeguard for patients' safety and well-being.
200412_2-RC_2_11
[ "The fact that such reinterpretations are so easy, however, suggests that our rejection of the historical issue was perhaps premature.", "Yet, where such piecemeal reinterpretation is not possible, revisions to even the core value of the oath may be necessary.", "It is thus simply a failure of the imagination, and not any changes in the medical profession or society in general, that has motivated critics of the Hippocratic oath.", "Because of this tradition of reinterpretation of the Hippocratic oath, therefore, modern ideas about medical ethics must be much more flexible than they have been in the past.", "Despite many new challenges facing the medical profession, therefore, there is no real need for wholesale revision of the Hippocratic oath." ]
4
Based on information in the passage, it can be inferred that which one of the following sentences could most logically be added to the passage as a concluding sentence?
The moral precepts embodied in the Hippocratic oath, which physicians standardly affirm upon beginning medical practice, have long been considered the immutable bedrock of medical ethics, binding physicians in a moral community that reaches across temporal, cultural, and national barriers. Until very recently the promises expressed in that oath—for example to act primarily for the benefit and not the harm of patients and to conform to various standards of professional conduct including the preservation of patients' confidences—even seemed impervious to the powerful scientific and societal forces challenging it. Critics argue that the oath is outdated; its fixed moral rules, they say, are incompatible with more flexible modern ideas about ethics. It also encourages doctors to adopt an authoritarian stance that depreciates the privacy and autonomy of the patient. Furthermore, its emphasis on the individual patient without regard for the wider social context frustrates the physician's emerging role as gatekeeper in managed care plans and impedes competitive market forces, which, some critics believe, should determine the quality, price, and distribution of health care as they do those of other commodities. The oath is also faulted for its omissions: its failure to mention such vital contemporary issues as human experimentation and the relationships of physicians to other health professionals. Some respected opponents even cite historical doubts about the oath's origin and authorship, presenting evidence that it was formulated by a small group of reformist physicians in ancient Greece and that for centuries it was not uniformly accepted by medical practitioners. This historical issue may be dismissed at the outset as irrelevant to the oath's current appropriateness. 
Regardless of the specific origin of its text—which, admittedly, is at best uncertain—those in each generation who critically appraise its content and judge it to express valid principles of medical ethics become, in a more meaningful sense, its authors. More importantly, even the more substantive, morally based arguments concerning contemporary values and newly relevant issues cannot negate the patients' need for assurance that physicians will pursue appropriate goals in treatment in accordance with generally acceptable standards of professionalism. To fulfill that need, the core value of beneficence—which does not actually conflict with most reformers' purposes—should be retained, with adaptations at the oath's periphery by some combination of revision, supplementation, and modern interpretation. In fact, there is already a tradition of peripheral reinterpretation of traditional wording; for example, the oath's vaguely and archaically worded proscription against "cutting for the stone" may once have served to forbid surgery, but with today's safer and more effective surgical techniques it is understood to function as a promise to practice within the confines of one's expertise, which remains a necessary safeguard for patients' safety and well-being.
200412_2-RC_2_12
[ "The oath encourages authoritarianism on the part of physicians.", "The version of the oath in use today is not identical to the oath formulated in ancient Greece.", "The oath fails to address modern medical dilemmas that could not have been foreseen in ancient Greece.", "The oath's absolutism is incompatible with contemporary views of morality.", "The oath's emphasis on the individual patient is often not compatible with a market-driven medical industry." ]
1
Each of the following is mentioned in the passage as a criticism of the Hippocratic oath EXCEPT:
The moral precepts embodied in the Hippocratic oath, which physicians standardly affirm upon beginning medical practice, have long been considered the immutable bedrock of medical ethics, binding physicians in a moral community that reaches across temporal, cultural, and national barriers. Until very recently the promises expressed in that oath—for example to act primarily for the benefit and not the harm of patients and to conform to various standards of professional conduct including the preservation of patients' confidences—even seemed impervious to the powerful scientific and societal forces challenging it. Critics argue that the oath is outdated; its fixed moral rules, they say, are incompatible with more flexible modern ideas about ethics. It also encourages doctors to adopt an authoritarian stance that depreciates the privacy and autonomy of the patient. Furthermore, its emphasis on the individual patient without regard for the wider social context frustrates the physician's emerging role as gatekeeper in managed care plans and impedes competitive market forces, which, some critics believe, should determine the quality, price, and distribution of health care as they do those of other commodities. The oath is also faulted for its omissions: its failure to mention such vital contemporary issues as human experimentation and the relationships of physicians to other health professionals. Some respected opponents even cite historical doubts about the oath's origin and authorship, presenting evidence that it was formulated by a small group of reformist physicians in ancient Greece and that for centuries it was not uniformly accepted by medical practitioners. This historical issue may be dismissed at the outset as irrelevant to the oath's current appropriateness. 
Regardless of the specific origin of its text—which, admittedly, is at best uncertain—those in each generation who critically appraise its content and judge it to express valid principles of medical ethics become, in a more meaningful sense, its authors. More importantly, even the more substantive, morally based arguments concerning contemporary values and newly relevant issues cannot negate the patients' need for assurance that physicians will pursue appropriate goals in treatment in accordance with generally acceptable standards of professionalism. To fulfill that need, the core value of beneficence—which does not actually conflict with most reformers' purposes—should be retained, with adaptations at the oath's periphery by some combination of revision, supplementation, and modern interpretation. In fact, there is already a tradition of peripheral reinterpretation of traditional wording; for example, the oath's vaguely and archaically worded proscription against "cutting for the stone" may once have served to forbid surgery, but with today's safer and more effective surgical techniques it is understood to function as a promise to practice within the confines of one's expertise, which remains a necessary safeguard for patients' safety and well-being.
200412_2-RC_2_13
[ "enthusiastic support", "bemused dismissal", "reasoned disagreement", "strict neutrality", "guarded agreement" ]
2
Which one of the following can most accurately be used to describe the author's attitude toward critics of the Hippocratic oath?
The moral precepts embodied in the Hippocratic oath, which physicians standardly affirm upon beginning medical practice, have long been considered the immutable bedrock of medical ethics, binding physicians in a moral community that reaches across temporal, cultural, and national barriers. Until very recently the promises expressed in that oath—for example to act primarily for the benefit and not the harm of patients and to conform to various standards of professional conduct including the preservation of patients' confidences—even seemed impervious to the powerful scientific and societal forces challenging it. Critics argue that the oath is outdated; its fixed moral rules, they say, are incompatible with more flexible modern ideas about ethics. It also encourages doctors to adopt an authoritarian stance that depreciates the privacy and autonomy of the patient. Furthermore, its emphasis on the individual patient without regard for the wider social context frustrates the physician's emerging role as gatekeeper in managed care plans and impedes competitive market forces, which, some critics believe, should determine the quality, price, and distribution of health care as they do those of other commodities. The oath is also faulted for its omissions: its failure to mention such vital contemporary issues as human experimentation and the relationships of physicians to other health professionals. Some respected opponents even cite historical doubts about the oath's origin and authorship, presenting evidence that it was formulated by a small group of reformist physicians in ancient Greece and that for centuries it was not uniformly accepted by medical practitioners. This historical issue may be dismissed at the outset as irrelevant to the oath's current appropriateness. 
Regardless of the specific origin of its text—which, admittedly, is at best uncertain—those in each generation who critically appraise its content and judge it to express valid principles of medical ethics become, in a more meaningful sense, its authors. More importantly, even the more substantive, morally based arguments concerning contemporary values and newly relevant issues cannot negate the patients' need for assurance that physicians will pursue appropriate goals in treatment in accordance with generally acceptable standards of professionalism. To fulfill that need, the core value of beneficence—which does not actually conflict with most reformers' purposes—should be retained, with adaptations at the oath's periphery by some combination of revision, supplementation, and modern interpretation. In fact, there is already a tradition of peripheral reinterpretation of traditional wording; for example, the oath's vaguely and archaically worded proscription against "cutting for the stone" may once have served to forbid surgery, but with today's safer and more effective surgical techniques it is understood to function as a promise to practice within the confines of one's expertise, which remains a necessary safeguard for patients' safety and well-being.
200412_2-RC_2_14
[ "\"The Ancients versus the Moderns: Conflicting Ideas About Medical Ethics\"", "\"Hypocritical Oafs: Why 'Managed Care' Proponents are Seeking to Repeal an Ancient Code\"", "\"Genetic Fallacy in the Age of Gene-Splicing: Why the Origins of the Hippocratic Oath Don't Matter\"", "\"The Dead Hand of Hippocrates: Breaking the Hold of Ancient Ideas on Modern Medicine\"", "\"Prescription for the Hippocratic Oath: Facelift or Major Surgery?\"" ]
4
Which one of the following would be most suitable as a title for this passage if it were to appear as an editorial piece?
A lichen consists of a fungus living in symbiosis (i.e., a mutually beneficial relationship) with an alga. Although most branches of the complex evolutionary family tree of fungi have been well established, the evolutionary origins of lichen-forming fungi have been a mystery. But a new DNA study has revealed the relationship of lichen-forming fungi to several previously known branches of the fungus family tree. The study reveals that, far from being oddities, lichen-forming fungi are close relatives of such common fungi as brewer's yeast, morel mushrooms, and the fungus that causes Dutch elm disease. This accounts for the visible similarity of certain lichens to more recognizable fungi such as mushrooms. In general, fungi present complications for the researcher. Fungi are usually parasitic or symbiotic, and researchers are often unsure whether they are examining fungal DNA or that of the associated organism. But lichen-forming fungi are especially difficult to study. They have few distinguishing characteristics of shape or structure, and they are unusually difficult to isolate from their partner algae, with which they have a particularly delicate symbiosis. In some cases the alga is wedged between layers of fungal tissue; in others, the fungus grows through the alga's cell walls in order to take nourishment, and the tissues of the two organisms are entirely enmeshed and inseparable. As a result, lichen-forming fungi have long been difficult to classify definitively within the fungus family. By default they were thus considered a separate grouping of fungi with an unknown evolutionary origin. But, using new analytical tools that allow them to isolate the DNA of fungi in parasitic or symbiotic relationships, researchers were able to establish the DNA sequence in a certain gene found in 75 species of fungi, including 10 species of lichen-forming fungi. 
Based on these analyses, the researchers found 5 branches on the fungus family tree to which varieties of lichen-forming fungi belong. Furthermore, the researchers stress that it is likely that as more types of lichen-forming fungi are analyzed, they will be found to belong to still more branches of the fungus family tree. One implication of the new research is that it provides evidence to help overturn the long-standing evolutionary assumption that parasitic interactions inevitably evolve over time to a greater benignity and eventually to symbiosis so that the parasites will not destroy their hosts. The addition of lichen-forming fungi to positions along branches of the fungus family tree indicates that this assumption does not hold for fungi. Fungi both harmful and benign can now be found both early and late in fungus evolutionary history. Given the new layout of the fungus family tree resulting from the lichen study, it appears that fungi can evolve toward mutualism and then just as easily turn back again toward parasitism.
200412_2-RC_3_15
[ "New research suggests that fungi are not only parasitic but also symbiotic organisms.", "New research has revealed that lichen-forming fungi constitute a distinct species of fungus.", "New research into the evolutionary origins of lichen-forming fungi reveals them to be closely related to various species of algae.", "New research has isolated the DNA of lichen-forming fungi and uncovered their relationship to the fungus family tree.", "New research into the fungal component of lichens explains the visible similarities between lichens and fungi by means of their common evolutionary origins." ]
3
Which one of the following most accurately states the main point of the passage?
A lichen consists of a fungus living in symbiosis (i.e., a mutually beneficial relationship) with an alga. Although most branches of the complex evolutionary family tree of fungi have been well established, the evolutionary origins of lichen-forming fungi have been a mystery. But a new DNA study has revealed the relationship of lichen-forming fungi to several previously known branches of the fungus family tree. The study reveals that, far from being oddities, lichen-forming fungi are close relatives of such common fungi as brewer's yeast, morel mushrooms, and the fungus that causes Dutch elm disease. This accounts for the visible similarity of certain lichens to more recognizable fungi such as mushrooms. In general, fungi present complications for the researcher. Fungi are usually parasitic or symbiotic, and researchers are often unsure whether they are examining fungal DNA or that of the associated organism. But lichen-forming fungi are especially difficult to study. They have few distinguishing characteristics of shape or structure, and they are unusually difficult to isolate from their partner algae, with which they have a particularly delicate symbiosis. In some cases the alga is wedged between layers of fungal tissue; in others, the fungus grows through the alga's cell walls in order to take nourishment, and the tissues of the two organisms are entirely enmeshed and inseparable. As a result, lichen-forming fungi have long been difficult to classify definitively within the fungus family. By default they were thus considered a separate grouping of fungi with an unknown evolutionary origin. But, using new analytical tools that allow them to isolate the DNA of fungi in parasitic or symbiotic relationships, researchers were able to establish the DNA sequence in a certain gene found in 75 species of fungi, including 10 species of lichen-forming fungi. 
Based on these analyses, the researchers found 5 branches on the fungus family tree to which varieties of lichen-forming fungi belong. Furthermore, the researchers stress that it is likely that as more types of lichen-forming fungi are analyzed, they will be found to belong to still more branches of the fungus family tree. One implication of the new research is that it provides evidence to help overturn the long-standing evolutionary assumption that parasitic interactions inevitably evolve over time to a greater benignity and eventually to symbiosis so that the parasites will not destroy their hosts. The addition of lichen-forming fungi to positions along branches of the fungus family tree indicates that this assumption does not hold for fungi. Fungi both harmful and benign can now be found both early and late in fungus evolutionary history. Given the new layout of the fungus family tree resulting from the lichen study, it appears that fungi can evolve toward mutualism and then just as easily turn back again toward parasitism.
200412_2-RC_3_16
[ "to suggest that new research overturns the assumption that lichen-forming fungi are primarily symbiotic, rather than parasitic, organisms", "to show that findings based on new research regarding fungus classification have implications that affect a long-standing assumption of evolutionary science", "to explain the fundamental purposes of fungus classification in order to position this classification within the broader field of evolutionary science", "to demonstrate that a fundamental assumption of evolutionary science is verified by new research regarding fungus classification", "to explain how symbiotic relationships can evolve into purely parasitic ones" ]
1
Which one of the following most accurately describes the author's purpose in the last paragraph of the passage?