Dataset schema (one entry per column):
- context: string (269 distinct values)
- id_string: string (15 to 16 characters)
- answers: sequence of 5 strings
- label: int64 (0 to 4)
- question: string (34 to 417 characters)
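As a minimal, hedged sketch of how rows with this schema could be loaded using the Hugging Face `datasets` library (the file name `lsat_rc.jsonl` is an assumption for illustration, not the dataset's actual distribution format):

```python
# Minimal sketch: load rows with this schema from a local JSON Lines file.
# The file name "lsat_rc.jsonl" is an assumption for illustration.
from datasets import load_dataset

ds = load_dataset("json", data_files="lsat_rc.jsonl", split="train")

for row in ds.select(range(3)):
    # Fields per the schema: context, id_string, answers (5 options),
    # label (0-based index of the credited option), question.
    print(row["id_string"], "->", row["answers"][row["label"]])
```

Each row below lists its fields in this column order.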
A proficiency in understanding, applying, and even formulating statutes—the actual texts of laws enacted by legislative bodies—is a vital aspect of the practice of law, but statutory law is often given too little attention by law schools. Much of legal education, with its focus on judicial decisions and analysis of cases, can give a law student the impression that the practice of law consists mainly in analyzing past cases to determine their relevance to a client's situation and arriving at a speculative interpretation of the law relevant to the client's legal problem. Lawyers discover fairly soon, however, that much of their practice does not depend on the kind of painstaking analysis of cases that is performed in law school. For example, a lawyer representing the owner of a business can often find an explicit answer as to what the client should do about a certain tax-related issue by consulting the relevant statutes. In such a case the facts are clear and the statutes' relation to them transparent, so that the client's question can be answered by direct reference to the wording of the statutes. But statutes' meanings and their applicability to relevant situations are not always so obvious, and that is one reason that the ability to interpret them accurately is an essential skill for law students to learn. Another skill that teaching statutory law would improve is synthesis. Law professors work hard at developing their students' ability to analyze individual cases, but in so doing they favor the ability to apply the law in particular cases over the ability to understand the interrelations among laws. In contrast, the study of all the statutes of a legal system in a certain small area of the law would enable the student to see how these laws form a coherent whole. Students would then be able to apply this ability to synthesize in other areas of statutory law that they encounter in their study or practice. This is especially important because most students intend to specialize in a chosen area, or areas, of the law. One possible argument against including training in statutory law as a standard part of law school curricula is that many statutes vary from region to region within a nation, so that the mastery of a set of statutes would usually not be generally applicable. There is some truth to this objection; law schools that currently provide some training in statutes generally intend it as a preparation for practice in their particular region, but for schools that are nationally oriented, this could seem to be an inappropriate investment of time and resources. But while the knowledge of a particular region's statutory law is not generally transferable to other regions, the skills acquired in mastering a particular set of statutes are, making the study of statutory law an important undertaking even for law schools with a national orientation.
200912_4-RC_2_13
[ "What are some ways in which synthetic skills are strengthened or encouraged through the analysis of cases and judicial decisions?", "In which areas of legal practice is a proficiency in case analysis more valuable than a proficiency in statutory law?", "What skills are common to the study of both statutory law and judicial decisions?", "What are some objections that have been raised against including the study of statutes in regionally oriented law schools?", "What is the primary focus of the curriculum currently offered in most law schools?" ]
4
Which one of the following questions can be most clearly and directly answered by reference to information in the passage?
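To make the label encoding concrete, here is a small self-contained sketch using the row above (id_string 200912_4-RC_2_13): label is a 0-based index into the answers list, so label 4 selects the fifth option. The option strings are abbreviated here for space.

```python
# The five options from the row above, abbreviated; label = 4 picks the
# fifth option, which is the credited answer for 200912_4-RC_2_13.
answers = [
    "What are some ways in which synthetic skills are strengthened ...",
    "In which areas of legal practice is a proficiency in case analysis ...",
    "What skills are common to the study of both statutory law and ...",
    "What are some objections that have been raised against including ...",
    "What is the primary focus of the curriculum currently offered in most law schools?",
]
label = 4
assert answers[label].startswith("What is the primary focus")
```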
(context identical to the statutory-law passage above)
200912_4-RC_2_14
[ "While nationally oriented law schools have been deficient in statutory law training, most regionally oriented law schools have been equally deficient in the teaching of case law.", "Training in statutory law would help lawyers resolve legal questions for which the answers are not immediately apparent in the relevant statutes.", "Lawyers who are trained in statutory law typically also develop a higher level of efficiency in manipulating details of past cases as compared with lawyers who are not trained in this way.", "Courses in statutory law are less effective if they focus specifically on the statutes of a particular region or in a particular area of the law.", "Lawyers who do not specialize probably have little need for training in statutory law beyond a brief introduction to the subject." ]
1
The information in the passage suggests that the author would most likely agree with which one of the following statements regarding training in statutory law?
(context identical to the statutory-law passage above)
200912_4-RC_2_15
[ "skill in locating references to court decisions on an issue involving a particular statute regarding taxation", "an understanding of the ways in which certain underlying purposes are served by an interrelated group of environmental laws", "a knowledge of how maritime statutes are formulated", "familiarity with the specific wordings of a group of laws applying to businesses in a particular region or locality", "an appreciation of the problems of wording involved in drafting antiterrorism laws" ]
0
Each of the following conforms to the kinds of educational results that the author would expect from the course of action proposed in the passage EXCEPT:
The Japanese American sculptor Isamu Noguchi (1904–1988) was an artist who intuitively asked—and responded to—deeply original questions. He might well have become a scientist within a standard scientific discipline, but he instead became an artist who repeatedly veered off at wide angles from the well-known courses followed by conventionally talented artists of both the traditional and modern schools. The story behind one particular sculpture typifies this aspect of his creativeness. By his early twenties, Noguchi's sculptures showed such exquisite comprehension of human anatomy and deft conceptual realization that he won a Guggenheim Fellowship for travel in Europe. After arriving in Paris in 1927, Noguchi asked the Romanian-born sculptor Constantin Brancusi if he might become his student. When Brancusi said no, that he never took students, Noguchi asked if he needed a stonecutter. Brancusi did. Noguchi cut and polished stone for Brancusi in his studio, frequently also polishing Brancusi's brass and bronze sculptures. Noguchi, with his scientist's mind, pondered the fact that sculptors through the ages had relied exclusively upon negative light—that is, shadows—for their conceptual communication, precisely because no metals, other than the expensive, nonoxidizing gold, could be relied upon to give off positive-light reflections. Noguchi wanted to create a sculpture that was purely reflective. In 1929, after returning to the United States, he met the architect and philosopher R. Buckminster Fuller, offering to sculpt a portrait of him. When Fuller heard of Noguchi's ideas regarding positive-light sculpture, he suggested using chrome-nickel steel, which Henry Ford, through automotive research and development, had just made commercially available for the first time in history. Here, finally, was a permanently reflective surface, economically available in massive quantities. In sculpting his portrait of Fuller, Noguchi did not think of it as merely a shiny alternate model of traditional, negative-light sculptures. What he saw was that completely reflective surfaces provided a fundamental invisibility of surface like that of utterly still waters, whose presence can be apprehended only when objects—a ship's mast, a tree, or sky—are reflected in them. Seaplane pilots making offshore landings in dead calm cannot tell where the water is and must glide in, waiting for the unpredictable touchdown. Noguchi conceived a similarly invisible sculpture, hidden in and communicating through the reflections of images surrounding it. Then only the distortion of familiar shapes in the surrounding environment could be seen by the viewer. The viewer's awareness of the "invisible" sculpture's presence and dimensional relationships would be derived only secondarily. Even after this stunning discovery, Noguchi remained faithful to his inquisitive nature. At the moment when his explorations had won critical recognition of the genius of his original and fundamental conception, Noguchi proceeded to the next phase of his evolution.
200912_4-RC_3_16
[ "a metal that can be made moderately reflective in any sculptural application and metals that can be made highly reflective but only in certain applications", "a naturally highly reflective metal that was technically suited for sculpture and other highly reflective metals that were not so suited", "metals that can be made highly reflective but lose their reflective properties over time and a metal that does not similarly lose its reflective properties", "a highly reflective sculptural material that, because it is a metal, is long lasting and nonmetallic materials that are highly reflective but impermanent", "a highly reflective metal that was acceptable to both traditional and modern sculptors and highly reflective metals whose use in sculpture was purely experimental" ]
2
In saying that "no metals, other than the expensive, nonoxidizing gold, could be relied upon to give off positive-light reflections" (lines 25–27), the author draws a distinction between
(context identical to the Noguchi passage above)
200912_4-RC_3_17
[ "In what way did Noguchi first begin to acquire experience in the cutting and polishing of stone for use in sculpture?", "In the course of his career, did Noguchi ever work in any art form other than sculpture?", "What are some materials other than metal that Noguchi used in his sculptures after ending his association with Brancusi?", "During Noguchi's lifetime, was there any favorable critical response to his creation of a positive-light sculpture?", "Did Noguchi at any time in his career consider creating a transparent or translucent sculpture lighted from within?" ]
3
The passage provides information sufficient to answer which one of the following questions?
(context identical to the Noguchi passage above)
200912_4-RC_3_18
[ "Noguchi's work in Paris contributed significantly to the art of sculpture in that it embodied solutions to problems that other sculptors, including Brancusi, had sought unsuccessfully to overcome.", "Noguchi's scientific approach to designing sculptures and to selecting materials for sculptures is especially remarkable in that he had no formal scientific training.", "Despite the fact that Brancusi was a sculptor and Fuller was not, Fuller played a more pivotal role than did Brancusi in Noguchi's realization of the importance of negative light to the work of previous sculptors.", "Noguchi was more interested in addressing fundamental aesthetic questions than in maintaining a consistent artistic style.", "Noguchi's work is of special interest for what it reveals not only about the value of scientific thinking in the arts but also about the value of aesthetic approaches to scientific inquiry." ]
3
The passage offers the strongest evidence that the author would agree with which one of the following statements?
(context identical to the Noguchi passage above)
200912_4-RC_3_19
[ "A building-materials dealer decides to market a new type of especially durable simulated-wood flooring material after learning that a famous architect has praised the material.", "An expert skier begins experimenting with the use of a new type of material in the soles of ski boots after a shoe manufacturer suggests that that material might be appropriate for that use.", "A producer of shipping containers begins using a new type of strapping material, which a rock-climbing expert soon finds useful as an especially strong and reliable component of safety ropes for climbing.", "A consultant to a book editor suggests the use of a new type of software for typesetting, and after researching the software the editor decides not to adopt it but finds a better alternative as a result of the research.", "A friend of a landscaping expert advises the use of a certain material for the creation of retaining walls and, as a result, the landscaper explores the use of several similar materials." ]
2
In which one of the following is the relation between the two people most analogous to the relation between Ford and Noguchi as indicated by the passage?
(context identical to the Noguchi passage above)
200912_4-RC_3_20
[ "Prior to suggesting the sculptural use of chrome-nickel steel to Noguchi, Fuller himself had made architectural designs that called for the use of this material.", "Noguchi believed that the use of industrial materials to create sculptures would make the sculptures more commercially viable.", "Noguchi's \"invisible\" sculpture appears to have no shape or dimensions of its own, but rather those of surrounding objects.", "If a positive-light sculpture depicting a person in a realistic manner were coated with a metal subject to oxidation, it would eventually cease to be recognizable as a realistic likeness.", "The perception of the shape and dimensions of a negative-light sculpture does not depend on its reflection of objects from the environment around it." ]
4
The passage most strongly supports which one of the following inferences?
(context identical to the Noguchi passage above)
200912_4-RC_3_21
[ "The material that Noguchi used in it had been tentatively investigated by other sculptors but not in direct connection with its reflective properties.", "It was similar to at least some of the sculptures that Noguchi produced prior to 1927 in that it represented a human form.", "Noguchi did not initially think of it as especially innovative or revolutionary and thus was surprised by Fuller's reaction to it.", "It was produced as a personal favor to Fuller and thus was not initially intended to be noticed and commented on by art critics.", "It was unlike the sculptures that Noguchi had helped Brancusi to produce in that the latter's aesthetic effects did not depend on contrasts of light and shadow." ]
1
Which one of the following inferences about the portrait of Fuller does the passage most strongly support?
(context identical to the Noguchi passage above)
200912_4-RC_3_22
[ "Between 1927 and 1929, Brancusi experimented with the use of highly reflective material for the creation of positive-light sculptures.", "After completing the portrait of Fuller, Noguchi produced only a few positive-light sculptures and in fact changed his style of sculpture repeatedly throughout his career.", "When Noguchi arrived in Paris, he was already well aware of the international acclaim that Brancusi's sculptures were receiving at the time.", "Many of Noguchi's sculptures were, unlike the portrait of Fuller, entirely abstract.", "Despite his inquisitive and scientific approach to the art of sculpture, Noguchi neither thought of himself as a scientist nor had extensive scientific training." ]
0
Which one of the following would, if true, most weaken the author's position in the passage?
In an experiment, two strangers are given the opportunity to share $100, subject to the following constraints: One person—the "proposer"—is to suggest how to divide the money and can make only one such proposal. The other person—the "responder"—must either accept or reject the offer without qualification. Both parties know that if the offer is accepted, the money will be split as agreed, but if the offer is rejected, neither will receive anything. This scenario is called the Ultimatum Game. Researchers have conducted it numerous times with a wide variety of volunteers. Many participants in the role of the proposer seem instinctively to feel that they should offer 50 percent to the responder, because such a division is "fair" and therefore likely to be accepted. Two-thirds of proposers offer responders between 40 and 50 percent. Only 4 in 100 offer less than 20 percent. Offering such a small amount is quite risky; most responders reject such offers. This is a puzzle: Why would anyone reject an offer as too small? Responders who reject an offer receive nothing, so if one assumes—as theoretical economics traditionally has—that people make economic decisions primarily out of rational self-interest, one would expect that an individual would accept any offer. Some theorists explain the insistence on fair divisions in the Ultimatum Game by citing our prehistoric ancestors' need for the support of a strong group. Small groups of hunter-gatherers depended for survival on their members' strengths. It is counterproductive to outcompete rivals within one's group to the point where one can no longer depend on them in contests with other groups. But this hypothesis at best explains why proposers offer large amounts, not why responders reject low offers. A more compelling explanation is that our emotional apparatus has been shaped by millions of years of living in small groups, where it is hard to keep secrets. Our emotions are therefore not finely tuned to one-time, strictly anonymous interactions. In real life we expect our friends and neighbors to notice our decisions. If people know that someone is content with a small share, they are likely to make that person low offers. But if someone is known to angrily reject low offers, others have an incentive to make that person high offers. Consequently, evolution should have favored angry responses to low offers; if one regularly receives fair offers when food is divided, one is more likely to survive. Because one-shot interactions were rare during human evolution, our emotions do not discriminate between one-shot and repeated interactions. Therefore, we respond emotionally to low offers in the Ultimatum Game because we instinctively feel the need to reject dismal offers in order to keep our self-esteem. This self-esteem helps us to acquire a reputation that is beneficial in future encounters.
200912_4-RC_4_23
[ "Contrary to a traditional assumption of theoretical economics, the behavior of participants in the Ultimatum Game demonstrates that people do not make economic decisions out of rational self-interest.", "Although the reactions most commonly displayed by participants in the Ultimatum Game appear to conflict with rational self-interest, they probably result from a predisposition that had evolutionary value.", "Because our emotional apparatus has been shaped by millions of years of living in small groups in which it is hard to keep secrets, our emotions are not finely tuned to one-shot, anonymous interactions.", "People respond emotionally to low offers in the Ultimatum Game because they instinctively feel the need to maintain the strength of the social group to which they belong.", "When certain social and ev olutionary factors are taken into account, it can be seen that the behavior of participants in the Ultimatum Game is motivated primarily by the need to outcompete rivals." ]
1
Which one of the following most accurately summarizes the main idea of the passage?
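The passage above describes the Ultimatum Game protocol precisely enough to sketch it in code. This toy model is only an illustration: the fixed rejection threshold for the responder is an assumption, not something the passage specifies.

```python
# Toy model of one round of the Ultimatum Game: the proposer offers a
# share of 100; rejection leaves both players with nothing. The threshold
# rule for the responder is an illustrative assumption.
def ultimatum_round(offer, min_acceptable):
    """Return (proposer_payoff, responder_payoff)."""
    if offer >= min_acceptable:
        return 100 - offer, offer  # accepted: money split as agreed
    return 0, 0                    # rejected: neither receives anything

print(ultimatum_round(50, 20))  # (50, 50): a "fair" offer is accepted
print(ultimatum_round(10, 20))  # (0, 0): a low offer is rejected
```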
(context identical to the Ultimatum Game passage above)
200912_4-RC_4_24
[ "one that requires two strangers to develop trust in each other", "responsible for overturning a basic assumption of theoretical economics", "a situation that elicits unpredictable results", "a type of one-shot, anonymous interaction", "proof that our emotional apparatus has been shaped by millions of years of living in small groups" ]
3
The passage implies that the Ultimatum Game is
(context identical to the Ultimatum Game passage above)
200912_4-RC_4_25
[ "survey existing interpretations of the puzzling results of an experiment", "show how two theories that attempt to explain the puzzling results of an experiment complement each other", "argue that the results of an experiment, while puzzling, are valid", "offer a plausible explanation for the puzzling results of an experiment", "defend an experiment against criticism that methodological flaws caused its puzzling results" ]
3
The author's primary purpose in the passage is to
(context identical to the Ultimatum Game passage above)
200912_4-RC_4_26
[ "Contrary to the assumptions of theoretical economics, human beings do not act primarily out of self-interest.", "Unfortunately, one-time, anonymous interactions are becoming increasingly common in contemporary society.", "The instinctive urge to acquire a favorable reputation may also help to explain the desire of many proposers in the Ultimatum Game to make \"fair\" offers.", "High self-esteem and a positive reputation offer individuals living in small groups many other benefits as well.", "The behavior of participants in the Ultimatum Game sheds light on the question of what constitutes a \"fair\" division." ]
2
Which one of the following sentences would most logically conclude the final paragraph of the passage?
(context identical to the Ultimatum Game passage above)
200912_4-RC_4_27
[ "our prehistoric ancestors often belonged to large groups of more than a hundred people", "in many prehistoric cultures, there were hierarchies within groups that dictated which allocations of goods were to be considered fair and which were not", "it is just as difficult to keep secrets in relatively large social groups as it is in small social groups", "it is just as counterproductive to a small social group to allow oneself to be outcompeted by one's rivals within the group as it is to outcompete those rivals", "in many social groups, there is a mutual understanding among the group's members that allocations of goods will be based on individual needs as opposed to equal shares" ]
3
In the context of the passage, the author would be most likely to consider the explanation in the third paragraph more favorably if it were shown that
Over the past 50 years, expansive, low-density communities have proliferated at the edges of many cities in the United States and Canada, creating a phenomenon known as suburban sprawl. Andres Duany, Elizabeth Plater-Zyberk, and Jeff Speck, a group of prominent town planners belonging to a movement called New Urbanism, contend that suburban sprawl contributes to the decline of civic life and civility. For reasons involving the flow of automobile traffic, they note, zoning laws usually dictate that suburban homes, stores, businesses, and schools be built in separate areas, and this separation robs people of communal space where they can interact and get to know one another. It is as difficult to imagine the concept of community without a town square or local pub, these town planners contend, as it is to imagine the concept of family independent of the home. Suburban housing subdivisions, Duany and his colleagues add, usually contain homes identical not only in appearance but also in price, resulting in a de facto economic segregation of residential neighborhoods. Children growing up in these neighborhoods, whatever their economic circumstances, are certain to be ill prepared for life in a diverse society. Moreover, because the widely separated suburban homes and businesses are connected only by "collector roads," residents are forced to drive, often in heavy traffic, in order to perform many daily tasks. Time that would in a town center involve social interaction within a physical public realm is now spent inside the automobile, where people cease to be community members and instead become motorists, competing for road space, often acting antisocially. Pedestrians rarely act in this manner toward each other. Duany and his colleagues advocate development based on early-twentieth-century urban neighborhoods that mix housing of different prices and offer residents a "gratifying public realm" that includes narrow, tree-lined streets, parks, corner grocery stores, cafes, small neighborhood schools, all within walking distance. This, they believe, would give people of diverse backgrounds and lifestyles an opportunity to interact and thus develop mutual respect. Opponents of New Urbanism claim that migration to sprawling suburbs is an expression of people's legitimate desire to secure the enjoyment and personal mobility provided by the automobile and the lifestyle that it makes possible. However, the New Urbanists do not question people's right to their own values; instead, they suggest that we should take a more critical view of these values and of the sprawl-conducive zoning and subdivision policies that reflect them. New Urbanists are fundamentally concerned with the long-term social costs of the now-prevailing attitude that individual mobility, consumption, and wealth should be valued absolutely, regardless of their impact on community life.
201006_4-RC_1_1
[ "In their critique of policies that promote suburban sprawl, the New Urbanists neglect to consider the interests and values of those who prefer suburban lifestyles.", "The New Urbanists hold that suburban sprawl inhibits social interaction among people of diverse economic circumstances, and they advocate specific reforms of zoning laws as a solution to this problem.", "The New Urbanists argue that most people find that life in small urban neighborhoods is generally more gratifying than life in a suburban environment.", "The New Urbanists hold that suburban sprawl has a corrosive effect on community life, and as an alternative they advocate development modeled on small urban neighborhoods.", "The New Urbanists analyze suburban sprawl as a phenomenon that results from short-sighted traffic policies and advocate changes to these traffic policies as a means of reducing the negative effects of sprawl." ]
3
Which one of the following most accurately expresses the main point of the passage?
Over the past 50 years, expansive, low-density communities have proliferated at the edges of many cities in the United States and Canada, creating a phenomenon known as suburban sprawl. Andres Duany, Elizabeth Plater-Zyberk, and Jeff Speck, a group of prominent town planners belonging to a movement called New Urbanism, contend that suburban sprawl contributes to the decline of civic life and civility. For reasons involving the flow of automobile traffic, they note, zoning laws usually dictate that suburban homes, stores, businesses, and schools be built in separate areas, and this separation robs people of communal space where they can interact and get to know one another. It is as difficult to imagine the concept of community without a town square or local pub, these town planners contend, as it is to imagine the concept of family independent of the home. Suburban housing subdivisions, Duany and his colleagues add, usually contain homes identical not only in appearance but also in price, resulting in a de facto economic segregation of residential neighborhoods. Children growing up in these neighborhoods, whatever their economic circumstances, are certain to be ill prepared for life in a diverse society. Moreover, because the widely separated suburban homes and businesses are connected only by "collector roads," residents are forced to drive, often in heavy traffic, in order to perform many daily tasks. Time that would in a town center involve social interaction within a physical public realm is now spent inside the automobile, where people cease to be community members and instead become motorists, competing for road space, often acting antisocially. Pedestrians rarely act in this manner toward each other. Duany and his colleagues advocate development based on early-twentieth-century urban neighborhoods that mix housing of different prices and offer residents a "gratifying public realm" that includes narrow, tree-lined streets, parks, corner grocery stores, cafes, small neighborhood schools, all within walking distance. This, they believe, would give people of diverse backgrounds and lifestyles an opportunity to interact and thus develop mutual respect. Opponents of New Urbanism claim that migration to sprawling suburbs is an expression of people's legitimate desire to secure the enjoyment and personal mobility provided by the automobile and the lifestyle that it makes possible. However, the New Urbanists do not question people's right to their own values; instead, they suggest that we should take a more critical view of these values and of the sprawl-conducive zoning and subdivision policies that reflect them. New Urbanists are fundamentally concerned with the long-term social costs of the now-prevailing attitude that individual mobility, consumption, and wealth should be valued absolutely, regardless of their impact on community life.
201006_4-RC_1_2
[ "It imposes an extra financial burden on the residents of sprawling suburbs, thus detracting from the advantages of suburban life.", "It detracts from the amount of time that people could otherwise devote to productive employment.", "It increases the amount of time people spend in situations in which antisocial behavior occurs.", "It produces significant amounts of air pollution and thus tends to harm the quality of people's lives.", "It decreases the amount of time that parents spend in enjoyable interactions with their children." ]
2
According to the passage, the New Urbanists cite which one of the following as a detrimental result of the need for people to travel extensively every day by automobile?
Over the past 50 years, expansive, low-density communities have proliferated at the edges of many cities in the United States and Canada, creating a phenomenon known as suburban sprawl. Andres Duany, Elizabeth Plater-Zyberk, and Jeff Speck, a group of prominent town planners belonging to a movement called New Urbanism, contend that suburban sprawl contributes to the decline of civic life and civility. For reasons involving the flow of automobile traffic, they note, zoning laws usually dictate that suburban homes, stores, businesses, and schools be built in separate areas, and this separation robs people of communal space where they can interact and get to know one another. It is as difficult to imagine the concept of community without a town square or local pub, these town planners contend, as it is to imagine the concept of family independent of the home. Suburban housing subdivisions, Duany and his colleagues add, usually contain homes identical not only in appearance but also in price, resulting in a de facto economic segregation of residential neighborhoods. Children growing up in these neighborhoods, whatever their economic circumstances, are certain to be ill prepared for life in a diverse society. Moreover, because the widely separated suburban homes and businesses are connected only by "collector roads," residents are forced to drive, often in heavy traffic, in order to perform many daily tasks. Time that would in a town center involve social interaction within a physical public realm is now spent inside the automobile, where people cease to be community members and instead become motorists, competing for road space, often acting antisocially. Pedestrians rarely act in this manner toward each other. Duany and his colleagues advocate development based on early-twentieth-century urban neighborhoods that mix housing of different prices and offer residents a "gratifying public realm" that includes narrow, tree-lined streets, parks, corner grocery stores, cafes, small neighborhood schools, all within walking distance. This, they believe, would give people of diverse backgrounds and lifestyles an opportunity to interact and thus develop mutual respect. Opponents of New Urbanism claim that migration to sprawling suburbs is an expression of people's legitimate desire to secure the enjoyment and personal mobility provided by the automobile and the lifestyle that it makes possible. However, the New Urbanists do not question people's right to their own values; instead, they suggest that we should take a more critical view of these values and of the sprawl-conducive zoning and subdivision policies that reflect them. New Urbanists are fundamentally concerned with the long-term social costs of the now-prevailing attitude that individual mobility, consumption, and wealth should be valued absolutely, regardless of their impact on community life.
201006_4-RC_1_3
[ "The primary factor affecting a neighborhood's conduciveness to the maintenance of civility is the amount of time required to get from one place to another.", "Private citizens in suburbs have little opportunity to influence the long-term effects of zoning policies enacted by public officials.", "People who live in suburban neighborhoods usually have little difficulty finding easily accessible jobs that do not require commuting to urban centers.", "The spatial configuration of suburban neighborhoods both influences and is influenced by the attitudes of those who live in them.", "Although people have a right to their own values, personal values should not affect the ways in which neighborhoods are designed." ]
3
The passage most strongly suggests that the New Urbanists would agree with which one of the following statements?
Over the past 50 years, expansive, low-density communities have proliferated at the edges of many cities in the United States and Canada, creating a phenomenon known as suburban sprawl. Andres Duany, Elizabeth Plater-Zyberk, and Jeff Speck, a group of prominent town planners belonging to a movement called New Urbanism, contend that suburban sprawl contributes to the decline of civic life and civility. For reasons involving the flow of automobile traffic, they note, zoning laws usually dictate that suburban homes, stores, businesses, and schools be built in separate areas, and this separation robs people of communal space where they can interact and get to know one another. It is as difficult to imagine the concept of community without a town square or local pub, these town planners contend, as it is to imagine the concept of family independent of the home. Suburban housing subdivisions, Duany and his colleagues add, usually contain homes identical not only in appearance but also in price, resulting in a de facto economic segregation of residential neighborhoods. Children growing up in these neighborhoods, whatever their economic circumstances, are certain to be ill prepared for life in a diverse society. Moreover, because the widely separated suburban homes and businesses are connected only by "collector roads," residents are forced to drive, often in heavy traffic, in order to perform many daily tasks. Time that would in a town center involve social interaction within a physical public realm is now spent inside the automobile, where people cease to be community members and instead become motorists, competing for road space, often acting antisocially. Pedestrians rarely act in this manner toward each other. Duany and his colleagues advocate development based on early-twentieth-century urban neighborhoods that mix housing of different prices and offer residents a "gratifying public realm" that includes narrow, tree-lined streets, parks, corner grocery stores, cafes, small neighborhood schools, all within walking distance. This, they believe, would give people of diverse backgrounds and lifestyles an opportunity to interact and thus develop mutual respect. Opponents of New Urbanism claim that migration to sprawling suburbs is an expression of people's legitimate desire to secure the enjoyment and personal mobility provided by the automobile and the lifestyle that it makes possible. However, the New Urbanists do not question people's right to their own values; instead, they suggest that we should take a more critical view of these values and of the sprawl-conducive zoning and subdivision policies that reflect them. New Urbanists are fundamentally concerned with the long-term social costs of the now-prevailing attitude that individual mobility, consumption, and wealth should be valued absolutely, regardless of their impact on community life.
201006_4-RC_1_4
[ "They are intended to be understood in almost identical ways, the only significant difference being that one is plural and the other is singular.", "The former is intended to refer to dwellings— and their inhabitants—that happen to be clustered together in particular areas; in the latter, the author means that a group of people have a sense of belonging together.", "In the former, the author means that the groups referred to are to be defined in terms of the interests of their members; the latter is intended to refer generically to a group of people who have something else in common.", "The former is intended to refer to groups of people whose members have professional or political ties to one another; the latter is intended to refer to a geographical area in which people live in close proximity to one another.", "In the former, the author means that there are informal personal ties among members of a group of people; the latter is intended to indicate that a group of people have similar backgrounds and lifestyles." ]
1
Which one of the following most accurately describes the author's use of the word "communities" in line 2 and "community" in line 15?
Over the past 50 years, expansive, low-density communities have proliferated at the edges of many cities in the United States and Canada, creating a phenomenon known as suburban sprawl. Andres Duany, Elizabeth Plater-Zyberk, and Jeff Speck, a group of prominent town planners belonging to a movement called New Urbanism, contend that suburban sprawl contributes to the decline of civic life and civility. For reasons involving the flow of automobile traffic, they note, zoning laws usually dictate that suburban homes, stores, businesses, and schools be built in separate areas, and this separation robs people of communal space where they can interact and get to know one another. It is as difficult to imagine the concept of community without a town square or local pub, these town planners contend, as it is to imagine the concept of family independent of the home. Suburban housing subdivisions, Duany and his colleagues add, usually contain homes identical not only in appearance but also in price, resulting in a de facto economic segregation of residential neighborhoods. Children growing up in these neighborhoods, whatever their economic circumstances, are certain to be ill prepared for life in a diverse society. Moreover, because the widely separated suburban homes and businesses are connected only by "collector roads," residents are forced to drive, often in heavy traffic, in order to perform many daily tasks. Time that would in a town center involve social interaction within a physical public realm is now spent inside the automobile, where people cease to be community members and instead become motorists, competing for road space, often acting antisocially. Pedestrians rarely act in this manner toward each other. Duany and his colleagues advocate development based on early-twentieth-century urban neighborhoods that mix housing of different prices and offer residents a "gratifying public realm" that includes narrow, tree-lined streets, parks, corner grocery stores, cafes, small neighborhood schools, all within walking distance. This, they believe, would give people of diverse backgrounds and lifestyles an opportunity to interact and thus develop mutual respect. Opponents of New Urbanism claim that migration to sprawling suburbs is an expression of people's legitimate desire to secure the enjoyment and personal mobility provided by the automobile and the lifestyle that it makes possible. However, the New Urbanists do not question people's right to their own values; instead, they suggest that we should take a more critical view of these values and of the sprawl-conducive zoning and subdivision policies that reflect them. New Urbanists are fundamentally concerned with the long-term social costs of the now-prevailing attitude that individual mobility, consumption, and wealth should be valued absolutely, regardless of their impact on community life.
201006_4-RC_1_5
[ "Most people who spend more time than they would like getting from one daily task to another live in central areas of large cities.", "Most people who often drive long distances for shopping and entertainment live in small towns rather than in suburban areas surrounding large cities.", "Most people who have easy access to shopping and entertainment do not live in suburban areas.", "Most people who choose to live in sprawling suburbs do so because comparable housing in neighborhoods that do not require extensive automobile travel is more expensive.", "Most people who vote in municipal elections do not cast their votes on the basis of candidates' positions on zoning policies." ]
3
Which one of the following, if true, would most weaken the position that the passage attributes to critics of the New Urbanists?
Over the past 50 years, expansive, low-density communities have proliferated at the edges of many cities in the United States and Canada, creating a phenomenon known as suburban sprawl. Andres Duany, Elizabeth Plater-Zyberk, and Jeff Speck, a group of prominent town planners belonging to a movement called New Urbanism, contend that suburban sprawl contributes to the decline of civic life and civility. For reasons involving the flow of automobile traffic, they note, zoning laws usually dictate that suburban homes, stores, businesses, and schools be built in separate areas, and this separation robs people of communal space where they can interact and get to know one another. It is as difficult to imagine the concept of community without a town square or local pub, these town planners contend, as it is to imagine the concept of family independent of the home. Suburban housing subdivisions, Duany and his colleagues add, usually contain homes identical not only in appearance but also in price, resulting in a de facto economic segregation of residential neighborhoods. Children growing up in these neighborhoods, whatever their economic circumstances, are certain to be ill prepared for life in a diverse society. Moreover, because the widely separated suburban homes and businesses are connected only by "collector roads," residents are forced to drive, often in heavy traffic, in order to perform many daily tasks. Time that would in a town center involve social interaction within a physical public realm is now spent inside the automobile, where people cease to be community members and instead become motorists, competing for road space, often acting antisocially. Pedestrians rarely act in this manner toward each other. Duany and his colleagues advocate development based on early-twentieth-century urban neighborhoods that mix housing of different prices and offer residents a "gratifying public realm" that includes narrow, tree-lined streets, parks, corner grocery stores, cafes, small neighborhood schools, all within walking distance. This, they believe, would give people of diverse backgrounds and lifestyles an opportunity to interact and thus develop mutual respect. Opponents of New Urbanism claim that migration to sprawling suburbs is an expression of people's legitimate desire to secure the enjoyment and personal mobility provided by the automobile and the lifestyle that it makes possible. However, the New Urbanists do not question people's right to their own values; instead, they suggest that we should take a more critical view of these values and of the sprawl-conducive zoning and subdivision policies that reflect them. New Urbanists are fundamentally concerned with the long-term social costs of the now-prevailing attitude that individual mobility, consumption, and wealth should be valued absolutely, regardless of their impact on community life.
201006_4-RC_1_6
[ "The need for zoning laws to help regulate traffic flow would eventually be eliminated.", "There would be a decrease in the percentage of suburban buildings that contain two or more apartments.", "The amount of time that residents of suburbs spend traveling to the central business districts of cities for work and shopping would increase.", "The need for coordination of zoning policies between large-city governments and governments of nearby suburban communities would be eliminated.", "There would be an increase in the per capita number of grocery stores and schools in those suburban communities." ]
4
The passage most strongly suggests that which one of the following would occur if new housing subdivisions in suburban communities were built in accordance with the recommendations of Duany and his colleagues?
Over the past 50 years, expansive, low-density communities have proliferated at the edges of many cities in the United States and Canada, creating a phenomenon known as suburban sprawl. Andres Duany, Elizabeth Plater-Zyberk, and Jeff Speck, a group of prominent town planners belonging to a movement called New Urbanism, contend that suburban sprawl contributes to the decline of civic life and civility. For reasons involving the flow of automobile traffic, they note, zoning laws usually dictate that suburban homes, stores, businesses, and schools be built in separate areas, and this separation robs people of communal space where they can interact and get to know one another. It is as difficult to imagine the concept of community without a town square or local pub, these town planners contend, as it is to imagine the concept of family independent of the home. Suburban housing subdivisions, Duany and his colleagues add, usually contain homes identical not only in appearance but also in price, resulting in a de facto economic segregation of residential neighborhoods. Children growing up in these neighborhoods, whatever their economic circumstances, are certain to be ill prepared for life in a diverse society. Moreover, because the widely separated suburban homes and businesses are connected only by "collector roads," residents are forced to drive, often in heavy traffic, in order to perform many daily tasks. Time that would in a town center involve social interaction within a physical public realm is now spent inside the automobile, where people cease to be community members and instead become motorists, competing for road space, often acting antisocially. Pedestrians rarely act in this manner toward each other. Duany and his colleagues advocate development based on early-twentieth-century urban neighborhoods that mix housing of different prices and offer residents a "gratifying public realm" that includes narrow, tree-lined streets, parks, corner grocery stores, cafes, small neighborhood schools, all within walking distance. This, they believe, would give people of diverse backgrounds and lifestyles an opportunity to interact and thus develop mutual respect. Opponents of New Urbanism claim that migration to sprawling suburbs is an expression of people's legitimate desire to secure the enjoyment and personal mobility provided by the automobile and the lifestyle that it makes possible. However, the New Urbanists do not question people's right to their own values; instead, they suggest that we should take a more critical view of these values and of the sprawl-conducive zoning and subdivision policies that reflect them. New Urbanists are fundamentally concerned with the long-term social costs of the now-prevailing attitude that individual mobility, consumption, and wealth should be valued absolutely, regardless of their impact on community life.
201006_4-RC_1_7
[ "Most of those who buy houses in sprawling suburbs do not pay drastically less than they can afford.", "Zoning regulations often cause economically uniform suburbs to become economically diverse.", "City dwellers who do not frequently travel in automobiles often have feelings of hostility toward motorists.", "Few residents of suburbs are aware of the potential health benefits of walking, instead of driving, to carry out daily tasks.", "People generally prefer to live in houses that look very similar to most of the other houses around them." ]
0
The second paragraph most strongly supports the inference that the New Urbanists make which one of the following assumptions?
Passage A In ancient Greece, Aristotle documented the ability of foraging honeybees to recruit nestmates to a good food source. He did not speculate on how the communication occurred, but he and naturalists since then have observed that a bee that finds a new food source returns to the nest and "dances" for its nestmates. In the 1940s, von Frisch and colleagues discovered a pattern in the dance. They observed a foraging honeybee's dance, deciphered it, and thereby deduced the location of the food source the bee had discovered. Yet questions still remained regarding the precise mechanism used to transmit that information. In the 1960s, Wenner and Esch each discovered independently that dancing honeybees emit low-frequency sounds, which we now know to come from wing vibrations. Both researchers reasoned that this might explain the bees' ability to communicate effectively even in completely dark nests. But at that time many scientists mistakenly believed that honeybees lack hearing, so the issue remained unresolved. Wenner subsequently proposed that smell rather than hearing was the key to honeybee communication. He hypothesized that honeybees derive information not from sound, but from odors the forager conveys from the food source. Yet Gould has shown that foragers can dispatch bees to sites they had not actually visited, something that would not be possible if odor were in fact necessary to bees' communication. Finally, using a honeybee robot to simulate the forager's dance, Kirchner and Michelsen showed that sounds emitted during the forager's dance do indeed play an essential role in conveying information about the food's location. Passage B All animals communicate in some sense. Bees dance, ants leave trails, some fish emit high-voltage signals. But some species—bees, birds, and primates, for example—communicate symbolically. In an experiment with vervet monkeys in the wild, Seyfarth, Cheney, and Marler found that prerecorded vervet alarm calls from a loudspeaker elicited the same response as did naturally produced vervet calls alerting the group to the presence of a predator of a particular type. Vervets looked upward upon hearing an eagle alarm call, and they scanned the ground below in response to a snake alarm call. These responses suggest that each alarm call represents, for vervets, a specific type of predator. Karl von Frisch was first to crack the code of the honeybee's dance, which he described as "language." The dance symbolically represents the distance, direction, and quality of newly discovered food. Adrian Wenner and others believed that bees rely on olfactory cues, as well as the dance, to find a food source, but this has turned out not to be so. While it is true that bees have a simple nervous system, they do not automatically follow just any information. Biologist James Gould trained foraging bees to find food in a boat placed in the middle of a lake and then allowed them to return to the hive to indicate this new location. He found that hive members ignored the foragers' instructions, presumably because no pollinating flowers grow in such a place.
201006_4-RC_2_8
[ "arguing that certain nonhuman animals possess human-like intelligence", "illustrating the sophistication with which certain primates communicate", "describing certain scientific studies concerned with animal communication", "airing a scientific controversy over the function of the honeybee's dance", "analyzing the conditions a symbolic system must meet in order to be considered a language" ]
2
The passages have which one of the following aims in common?
Passage A In ancient Greece, Aristotle documented the ability of foraging honeybees to recruit nestmates to a good food source. He did not speculate on how the communication occurred, but he and naturalists since then have observed that a bee that finds a new food source returns to the nest and "dances" for its nestmates. In the 1940s, von Frisch and colleagues discovered a pattern in the dance. They observed a foraging honeybee's dance, deciphered it, and thereby deduced the location of the food source the bee had discovered. Yet questions still remained regarding the precise mechanism used to transmit that information. In the 1960s, Wenner and Esch each discovered independently that dancing honeybees emit low-frequency sounds, which we now know to come from wing vibrations. Both researchers reasoned that this might explain the bees' ability to communicate effectively even in completely dark nests. But at that time many scientists mistakenly believed that honeybees lack hearing, so the issue remained unresolved. Wenner subsequently proposed that smell rather than hearing was the key to honeybee communication. He hypothesized that honeybees derive information not from sound, but from odors the forager conveys from the food source. Yet Gould has shown that foragers can dispatch bees to sites they had not actually visited, something that would not be possible if odor were in fact necessary to bees' communication. Finally, using a honeybee robot to simulate the forager's dance, Kirchner and Michelsen showed that sounds emitted during the forager's dance do indeed play an essential role in conveying information about the food's location. Passage B All animals communicate in some sense. Bees dance, ants leave trails, some fish emit high-voltage signals. But some species—bees, birds, and primates, for example—communicate symbolically. In an experiment with vervet monkeys in the wild, Seyfarth, Cheney, and Marler found that prerecorded vervet alarm calls from a loudspeaker elicited the same response as did naturally produced vervet calls alerting the group to the presence of a predator of a particular type. Vervets looked upward upon hearing an eagle alarm call, and they scanned the ground below in response to a snake alarm call. These responses suggest that each alarm call represents, for vervets, a specific type of predator. Karl von Frisch was first to crack the code of the honeybee's dance, which he described as "language." The dance symbolically represents the distance, direction, and quality of newly discovered food. Adrian Wenner and others believed that bees rely on olfactory cues, as well as the dance, to find a food source, but this has turned out not to be so. While it is true that bees have a simple nervous system, they do not automatically follow just any information. Biologist James Gould trained foraging bees to find food in a boat placed in the middle of a lake and then allowed them to return to the hive to indicate this new location. He found that hive members ignored the foragers' instructions, presumably because no pollinating flowers grow in such a place.
201006_4-RC_2_9
[ "Passage A is concerned solely with honeybee communication, whereas passage B is concerned with other forms of animal communication as well.", "Passage A discusses evidence adduced by scientists in support of certain claims, whereas passage B merely presents some of those claims without discussing the support that has been adduced for them.", "Passage B is entirely about recent theories of honeybee communication, whereas passage A outlines the historic development of theories of honeybee communication.", "Passage B is concerned with explaining the distinction between symbolic and nonsymbolic communication, whereas passage A, though making use of the distinction, does not explain it.", "Passage B is concerned with gaining insight into human communication by considering certain types of nonhuman communication, whereas passage A is concerned with these types of nonhuman communication in their own right." ]
0
Which one of the following statements most accurately characterizes a difference between the two passages?
Passage A In ancient Greece, Aristotle documented the ability of foraging honeybees to recruit nestmates to a good food source. He did not speculate on how the communication occurred, but he and naturalists since then have observed that a bee that finds a new food source returns to the nest and "dances" for its nestmates. In the 1940s, von Frisch and colleagues discovered a pattern in the dance. They observed a foraging honeybee's dance, deciphered it, and thereby deduced the location of the food source the bee had discovered. Yet questions still remained regarding the precise mechanism used to transmit that information. In the 1960s, Wenner and Esch each discovered independently that dancing honeybees emit low-frequency sounds, which we now know to come from wing vibrations. Both researchers reasoned that this might explain the bees' ability to communicate effectively even in completely dark nests. But at that time many scientists mistakenly believed that honeybees lack hearing, so the issue remained unresolved. Wenner subsequently proposed that smell rather than hearing was the key to honeybee communication. He hypothesized that honeybees derive information not from sound, but from odors the forager conveys from the food source. Yet Gould has shown that foragers can dispatch bees to sites they had not actually visited, something that would not be possible if odor were in fact necessary to bees' communication. Finally, using a honeybee robot to simulate the forager's dance, Kirchner and Michelsen showed that sounds emitted during the forager's dance do indeed play an essential role in conveying information about the food's location. Passage B All animals communicate in some sense. Bees dance, ants leave trails, some fish emit high-voltage signals. But some species—bees, birds, and primates, for example—communicate symbolically. In an experiment with vervet monkeys in the wild, Seyfarth, Cheney, and Marler found that prerecorded vervet alarm calls from a loudspeaker elicited the same response as did naturally produced vervet calls alerting the group to the presence of a predator of a particular type. Vervets looked upward upon hearing an eagle alarm call, and they scanned the ground below in response to a snake alarm call. These responses suggest that each alarm call represents, for vervets, a specific type of predator. Karl von Frisch was first to crack the code of the honeybee's dance, which he described as "language." The dance symbolically represents the distance, direction, and quality of newly discovered food. Adrian Wenner and others believed that bees rely on olfactory cues, as well as the dance, to find a food source, but this has turned out not to be so. While it is true that bees have a simple nervous system, they do not automatically follow just any information. Biologist James Gould trained foraging bees to find food in a boat placed in the middle of a lake and then allowed them to return to the hive to indicate this new location. He found that hive members ignored the foragers' instructions, presumably because no pollinating flowers grow in such a place.
201006_4-RC_2_10
[ "When a forager honeybee does not communicate olfactory information to its nestmates, they will often disregard the forager's directions and go to sites of their own choosing.", "Forager honeybees instinctively know where pollinating flowers usually grow and will not dispatch their nestmates to any other places.", "Only experienced forager honeybees are able to locate the best food sources.", "A forager's dances can draw other honeybees to sites that the forager has not visited and can fail to draw other honeybees to sites that the forager has visited.", "Forager honeybees can communicate with their nestmates about a newly discovered food source by leaving a trail from the food source to the honeybee nest." ]
3
Which one of the following statements is most strongly supported by Gould's research, as reported in the two passages?
Passage A In ancient Greece, Aristotle documented the ability of foraging honeybees to recruit nestmates to a good food source. He did not speculate on how the communication occurred, but he and naturalists since then have observed that a bee that finds a new food source returns to the nest and "dances" for its nestmates. In the 1940s, von Frisch and colleagues discovered a pattern in the dance. They observed a foraging honeybee's dance, deciphered it, and thereby deduced the location of the food source the bee had discovered. Yet questions still remained regarding the precise mechanism used to transmit that information. In the 1960s, Wenner and Esch each discovered independently that dancing honeybees emit low-frequency sounds, which we now know to come from wing vibrations. Both researchers reasoned that this might explain the bees' ability to communicate effectively even in completely dark nests. But at that time many scientists mistakenly believed that honeybees lack hearing, so the issue remained unresolved. Wenner subsequently proposed that smell rather than hearing was the key to honeybee communication. He hypothesized that honeybees derive information not from sound, but from odors the forager conveys from the food source. Yet Gould has shown that foragers can dispatch bees to sites they had not actually visited, something that would not be possible if odor were in fact necessary to bees' communication. Finally, using a honeybee robot to simulate the forager's dance, Kirchner and Michelsen showed that sounds emitted during the forager's dance do indeed play an essential role in conveying information about the food's location. Passage B All animals communicate in some sense. Bees dance, ants leave trails, some fish emit high-voltage signals. But some species—bees, birds, and primates, for example—communicate symbolically. In an experiment with vervet monkeys in the wild, Seyfarth, Cheney, and Marler found that prerecorded vervet alarm calls from a loudspeaker elicited the same response as did naturally produced vervet calls alerting the group to the presence of a predator of a particular type. Vervets looked upward upon hearing an eagle alarm call, and they scanned the ground below in response to a snake alarm call. These responses suggest that each alarm call represents, for vervets, a specific type of predator. Karl von Frisch was first to crack the code of the honeybee's dance, which he described as "language." The dance symbolically represents the distance, direction, and quality of newly discovered food. Adrian Wenner and others believed that bees rely on olfactory cues, as well as the dance, to find a food source, but this has turned out not to be so. While it is true that bees have a simple nervous system, they do not automatically follow just any information. Biologist James Gould trained foraging bees to find food in a boat placed in the middle of a lake and then allowed them to return to the hive to indicate this new location. He found that hive members ignored the foragers' instructions, presumably because no pollinating flowers grow in such a place.
201006_4-RC_2_11
[ "Honeybees will ignore the instructions conveyed in the forager's dance if they are unable to detect odors from the food source.", "Wenner and Esch established that both sound and odor play a vital role in most honeybee communication.", "Most animal species can communicate symbolically in some form or other.", "The work of von Frisch was instrumental in answering fundamental questions about how honeybees communicate.", "Inexperienced forager honeybees that dance to communicate with other bees in their nest learn the intricacies of the dance from more experienced foragers." ]
3
It can be inferred from the passages that the author of passage A and the author of passage B would accept which one of the following statements?
Passage A In ancient Greece, Aristotle documented the ability of foraging honeybees to recruit nestmates to a good food source. He did not speculate on how the communication occurred, but he and naturalists since then have observed that a bee that finds a new food source returns to the nest and "dances" for its nestmates. In the 1940s, von Frisch and colleagues discovered a pattern in the dance. They observed a foraging honeybee's dance, deciphered it, and thereby deduced the location of the food source the bee had discovered. Yet questions still remained regarding the precise mechanism used to transmit that information. In the 1960s, Wenner and Esch each discovered independently that dancing honeybees emit low-frequency sounds, which we now know to come from wing vibrations. Both researchers reasoned that this might explain the bees' ability to communicate effectively even in completely dark nests. But at that time many scientists mistakenly believed that honeybees lack hearing, so the issue remained unresolved. Wenner subsequently proposed that smell rather than hearing was the key to honeybee communication. He hypothesized that honeybees derive information not from sound, but from odors the forager conveys from the food source. Yet Gould has shown that foragers can dispatch bees to sites they had not actually visited, something that would not be possible if odor were in fact necessary to bees' communication. Finally, using a honeybee robot to simulate the forager's dance, Kirchner and Michelsen showed that sounds emitted during the forager's dance do indeed play an essential role in conveying information about the food's location. Passage B All animals communicate in some sense. Bees dance, ants leave trails, some fish emit high-voltage signals. But some species—bees, birds, and primates, for example—communicate symbolically. In an experiment with vervet monkeys in the wild, Seyfarth, Cheney, and Marler found that prerecorded vervet alarm calls from a loudspeaker elicited the same response as did naturally produced vervet calls alerting the group to the presence of a predator of a particular type. Vervets looked upward upon hearing an eagle alarm call, and they scanned the ground below in response to a snake alarm call. These responses suggest that each alarm call represents, for vervets, a specific type of predator. Karl von Frisch was first to crack the code of the honeybee's dance, which he described as "language." The dance symbolically represents the distance, direction, and quality of newly discovered food. Adrian Wenner and others believed that bees rely on olfactory cues, as well as the dance, to find a food source, but this has turned out not to be so. While it is true that bees have a simple nervous system, they do not automatically follow just any information. Biologist James Gould trained foraging bees to find food in a boat placed in the middle of a lake and then allowed them to return to the hive to indicate this new location. He found that hive members ignored the foragers' instructions, presumably because no pollinating flowers grow in such a place.
201006_4-RC_2_12
[ "Passage A discusses and rejects a position that is put forth in passage B.", "Passage A gives several examples of a phenomenon for which passage B gives only one example.", "Passage A is concerned in its entirety with a phenomenon that passage B discusses in support of a more general thesis.", "Passage A proposes a scientific explanation for a phenomenon that passage B argues cannot be plausibly explained.", "Passage A provides a historical account of the origins of a phenomenon that is the primary concern of passage B." ]
2
Which one of the following most accurately describes a relationship between the two passages?
Most scholars of Mexican American history mark César Chávez's unionizing efforts among Mexican and Mexican American farm laborers in California as the beginning of Chicano political activism in the 1960s. By 1965, Chávez's United Farm Workers Union gained international recognition by initiating a worldwide boycott of grapes in an effort to get growers in California to sign union contracts. The year 1965 also marks the birth of contemporary Chicano theater, for that is the year Luis Valdez approached Chávez about using theater to organize farm workers. Valdez and the members of the resulting Teatro Campesino are generally credited by scholars as having initiated the Chicano theater movement, a movement that would reach its apex in the 1970s. In the fall of 1965, Valdez gathered a group of striking farm workers and asked them to talk about their working conditions. A former farm worker himself, Valdez was no stranger to the players in the daily drama that was fieldwork. He asked people to illustrate what happened on the picket lines, and the less timid in the audience delighted in acting out their ridicule of the strikebreakers. Using the farm workers' basic improvisations, Valdez guided the group toward the creation of what he termed "actos," skits or sketches whose roots scholars have traced to various sources that had influenced Valdez as a student and as a member of the San Francisco Mime Troupe. Expanding beyond the initial setting of flatbed-truck stages at the fields' edges, the acto became the quintessential form of Chicano theater in the 1960s. According to Valdez, the acto should suggest a solution to the problems exposed in the brief comic statement, and, as with any good political theater, it should satirize the opposition and inspire the audience to social action. Because actos were based on participants' personal experiences, they had palpable immediacy. In her book El Teatro Campesino, Yolanda Broyles-González rightly criticizes theater historians for having tended to credit Valdez individually with inventing actos as a genre, as if the striking farm workers' improvisational talent had depended entirely on his vision and expertise for the form it took. She traces especially the actos' connections to a similar genre of informal, often satirical shows known as carpas that were performed in tents to mainly working-class audiences. Carpas had flourished earlier in the twentieth century in the border area of Mexico and the United States. Many participants in the formation of the Teatro no doubt had substantial cultural links to this tradition and likely adapted it to their improvisations. The early development of the Teatro Campesino was, in fact, a collective accomplishment; still, Valdez's artistic contribution was a crucial one, for the resulting actos were neither carpas nor theater in the European tradition of Valdez's academic training, but a distinctive genre with connections to both.
201006_4-RC_3_13
[ "Some theater historians have begun to challenge the once widely accepted view that in creating the Teatro Campesino, Luis Valdez was largely uninfluenced by earlier historical forms.", "In crediting Luis Valdez with founding the Chicano theater movement, theater historians have neglected the role of César Chávez in its early development.", "Although the creation of the early material of the Teatro Campesino was a collective accomplishment, Luis Valdez's efforts and expertise were essential factors in determining the form it took.", "The success of the early Teatro Campesino depended on the special insights and talents of the amateur performers who were recruited by Luis Valdez to participate in creating actos.", "Although, as Yolanda Broyles-González has pointed out, the Teatro Campesino was a collective endeavor, Luis Valdez's political and academic connections helped bring it recognition." ]
2
Which one of the following most accurately expresses the main point of the passage?
Most scholars of Mexican American history mark César Chávez's unionizing efforts among Mexican and Mexican American farm laborers in California as the beginning of Chicano political activism in the 1960s. By 1965, Chávez's United Farm Workers Union gained international recognition by initiating a worldwide boycott of grapes in an effort to get growers in California to sign union contracts. The year 1965 also marks the birth of contemporary Chicano theater, for that is the year Luis Valdez approached Chávez about using theater to organize farm workers. Valdez and the members of the resulting Teatro Campesino are generally credited by scholars as having initiated the Chicano theater movement, a movement that would reach its apex in the 1970s. In the fall of 1965, Valdez gathered a group of striking farm workers and asked them to talk about their working conditions. A former farm worker himself, Valdez was no stranger to the players in the daily drama that was fieldwork. He asked people to illustrate what happened on the picket lines, and the less timid in the audience delighted in acting out their ridicule of the strikebreakers. Using the farm workers' basic improvisations, Valdez guided the group toward the creation of what he termed "actos," skits or sketches whose roots scholars have traced to various sources that had influenced Valdez as a student and as a member of the San Francisco Mime Troupe. Expanding beyond the initial setting of flatbed-truck stages at the fields' edges, the acto became the quintessential form of Chicano theater in the 1960s. According to Valdez, the acto should suggest a solution to the problems exposed in the brief comic statement, and, as with any good political theater, it should satirize the opposition and inspire the audience to social action. Because actos were based on participants' personal experiences, they had palpable immediacy. In her book El Teatro Campesino, Yolanda Broyles-González rightly criticizes theater historians for having tended to credit Valdez individually with inventing actos as a genre, as if the striking farm workers' improvisational talent had depended entirely on his vision and expertise for the form it took. She traces especially the actos' connections to a similar genre of informal, often satirical shows known as carpas that were performed in tents to mainly working-class audiences. Carpas had flourished earlier in the twentieth century in the border area of Mexico and the United States. Many participants in the formation of the Teatro no doubt had substantial cultural links to this tradition and likely adapted it to their improvisations. The early development of the Teatro Campesino was, in fact, a collective accomplishment; still, Valdez's artistic contribution was a crucial one, for the resulting actos were neither carpas nor theater in the European tradition of Valdez's academic training, but a distinctive genre with connections to both.
201006_4-RC_3_14
[ "how little physical distance there was between the performers in the late 1960s actos and their audiences", "the sense of intimacy created by the performers' technique of addressing many of their lines directly to the audience", "the ease with which the Teatro Campesino members were able to develop actos based on their own experiences", "how closely the director and performers of the Teatro Campesino worked together to build a repertoire of actos", "how vividly the actos conveyed the performers' experiences to their audiences" ]
4
The author uses the word "immediacy" (line 39) most likely in order to express
Most scholars of Mexican American history mark César Chávez's unionizing efforts among Mexican and Mexican American farm laborers in California as the beginning of Chicano political activism in the 1960s. By 1965, Chávez's United Farm Workers Union gained international recognition by initiating a worldwide boycott of grapes in an effort to get growers in California to sign union contracts. The year 1965 also marks the birth of contemporary Chicano theater, for that is the year Luis Valdez approached Chávez about using theater to organize farm workers. Valdez and the members of the resulting Teatro Campesino are generally credited by scholars as having initiated the Chicano theater movement, a movement that would reach its apex in the 1970s. In the fall of 1965, Valdez gathered a group of striking farm workers and asked them to talk about their working conditions. A former farm worker himself, Valdez was no stranger to the players in the daily drama that was fieldwork. He asked people to illustrate what happened on the picket lines, and the less timid in the audience delighted in acting out their ridicule of the strikebreakers. Using the farm workers' basic improvisations, Valdez guided the group toward the creation of what he termed "actos," skits or sketches whose roots scholars have traced to various sources that had influenced Valdez as a student and as a member of the San Francisco Mime Troupe. Expanding beyond the initial setting of flatbed-truck stages at the fields' edges, the acto became the quintessential form of Chicano theater in the 1960s. According to Valdez, the acto should suggest a solution to the problems exposed in the brief comic statement, and, as with any good political theater, it should satirize the opposition and inspire the audience to social action. Because actos were based on participants' personal experiences, they had palpable immediacy. In her book El Teatro Campesino, Yolanda Broyles-González rightly criticizes theater historians for having tended to credit Valdez individually with inventing actos as a genre, as if the striking farm workers' improvisational talent had depended entirely on his vision and expertise for the form it took. She traces especially the actos' connections to a similar genre of informal, often satirical shows known as carpas that were performed in tents to mainly working-class audiences. Carpas had flourished earlier in the twentieth century in the border area of Mexico and the United States. Many participants in the formation of the Teatro no doubt had substantial cultural links to this tradition and likely adapted it to their improvisations. The early development of the Teatro Campesino was, in fact, a collective accomplishment; still, Valdez's artistic contribution was a crucial one, for the resulting actos were neither carpas nor theater in the European tradition of Valdez's academic training, but a distinctive genre with connections to both.
201006_4-RC_3_15
[ "It helps explain both a motivation of those who developed the first actos and an important aspect of their subject matter.", "It introduces a major obstacle that Valdez had to overcome in gaining public acceptance of the work of the Teatro Campesino.", "It anticipates and counters a possible objection to the author's view that the actos developed by Teatro Campesino were effective as political theater.", "It provides an example of the type of topic on which scholars of Mexican American history have typically focused to the exclusion of theater history.", "It helps explain why theater historians, in their discussions of Valdez, have often treated him as though he were individually responsible for inventing actos as a genre." ]
0
The second sentence of the passage functions primarily in which one of the following ways?
Most scholars of Mexican American history mark César Chávez's unionizing efforts among Mexican and Mexican American farm laborers in California as the beginning of Chicano political activism in the 1960s. By 1965, Chávez's United Farm Workers Union gained international recognition by initiating a worldwide boycott of grapes in an effort to get growers in California to sign union contracts. The year 1965 also marks the birth of contemporary Chicano theater, for that is the year Luis Valdez approached Chávez about using theater to organize farm workers. Valdez and the members of the resulting Teatro Campesino are generally credited by scholars as having initiated the Chicano theater movement, a movement that would reach its apex in the 1970s. In the fall of 1965, Valdez gathered a group of striking farm workers and asked them to talk about their working conditions. A former farm worker himself, Valdez was no stranger to the players in the daily drama that was fieldwork. He asked people to illustrate what happened on the picket lines, and the less timid in the audience delighted in acting out their ridicule of the strikebreakers. Using the farm workers' basic improvisations, Valdez guided the group toward the creation of what he termed "actos," skits or sketches whose roots scholars have traced to various sources that had influenced Valdez as a student and as a member of the San Francisco Mime Troupe. Expanding beyond the initial setting of flatbed-truck stages at the fields' edges, the acto became the quintessential form of Chicano theater in the 1960s. According to Valdez, the acto should suggest a solution to the problems exposed in the brief comic statement, and, as with any good political theater, it should satirize the opposition and inspire the audience to social action. Because actos were based on participants' personal experiences, they had palpable immediacy. In her book El Teatro Campesino, Yolanda Broyles-González rightly criticizes theater historians for having tended to credit Valdez individually with inventing actos as a genre, as if the striking farm workers' improvisational talent had depended entirely on his vision and expertise for the form it took. She traces especially the actos' connections to a similar genre of informal, often satirical shows known as carpas that were performed in tents to mainly working-class audiences. Carpas had flourished earlier in the twentieth century in the border area of Mexico and the United States. Many participants in the formation of the Teatro no doubt had substantial cultural links to this tradition and likely adapted it to their improvisations. The early development of the Teatro Campesino was, in fact, a collective accomplishment; still, Valdez's artistic contribution was a crucial one, for the resulting actos were neither carpas nor theater in the European tradition of Valdez's academic training, but a distinctive genre with connections to both.
201006_4-RC_3_16
[ "both had roots in theater in the European tradition", "both were studied by the San Francisco Mime Troupe", "both were initially performed on farms", "both often involved satire", "both were part of union organizing drives" ]
3
The passage indicates that the early actos of the Teatro Campesino and the carpas were similar in that
Most scholars of Mexican American history mark César Chávez's unionizing efforts among Mexican and Mexican American farm laborers in California as the beginning of Chicano political activism in the 1960s. By 1965, Chávez's United Farm Workers Union gained international recognition by initiating a worldwide boycott of grapes in an effort to get growers in California to sign union contracts. The year 1965 also marks the birth of contemporary Chicano theater, for that is the year Luis Valdez approached Chávez about using theater to organize farm workers. Valdez and the members of the resulting Teatro Campesino are generally credited by scholars as having initiated the Chicano theater movement, a movement that would reach its apex in the 1970s. In the fall of 1965, Valdez gathered a group of striking farm workers and asked them to talk about their working conditions. A former farm worker himself, Valdez was no stranger to the players in the daily drama that was fieldwork. He asked people to illustrate what happened on the picket lines, and the less timid in the audience delighted in acting out their ridicule of the strikebreakers. Using the farm workers' basic improvisations, Valdez guided the group toward the creation of what he termed "actos," skits or sketches whose roots scholars have traced to various sources that had influenced Valdez as a student and as a member of the San Francisco Mime Troupe. Expanding beyond the initial setting of flatbed-truck stages at the fields' edges, the acto became the quintessential form of Chicano theater in the 1960s. According to Valdez, the acto should suggest a solution to the problems exposed in the brief comic statement, and, as with any good political theater, it should satirize the opposition and inspire the audience to social action. Because actos were based on participants' personal experiences, they had palpable immediacy. In her book El Teatro Campesino, Yolanda Broyles-González rightly criticizes theater historians for having tended to credit Valdez individually with inventing actos as a genre, as if the striking farm workers' improvisational talent had depended entirely on his vision and expertise for the form it took. She traces especially the actos' connections to a similar genre of informal, often satirical shows known as carpas that were performed in tents to mainly working-class audiences. Carpas had flourished earlier in the twentieth century in the border area of Mexico and the United States. Many participants in the formation of the Teatro no doubt had substantial cultural links to this tradition and likely adapted it to their improvisations. The early development of the Teatro Campesino was, in fact, a collective accomplishment; still, Valdez's artistic contribution was a crucial one, for the resulting actos were neither carpas nor theater in the European tradition of Valdez's academic training, but a distinctive genre with connections to both.
201006_4-RC_3_17
[ "As a theatrical model, the carpas of the early twentieth century were ill-suited to the type of theater that he and the Teatro Campesino were trying to create.", "César Chávez should have done more to support the efforts of the Teatro Campesino to use theater to organize striking farm workers.", "Avant-garde theater in the European tradition is largely irrelevant to the theatrical expression of the concerns of a mainly working-class audience.", "Actors do not require formal training in order to achieve effective and artistically successful theatrical performances.", "The aesthetic aspects of a theatrical work should be evaluated independently of its political ramifications." ]
3
It can be inferred from the passage that Valdez most likely held which one of the following views?
Most scholars of Mexican American history mark César Chávez's unionizing efforts among Mexican and Mexican American farm laborers in California as the beginning of Chicano political activism in the 1960s. By 1965, Chávez's United Farm Workers Union gained international recognition by initiating a worldwide boycott of grapes in an effort to get growers in California to sign union contracts. The year 1965 also marks the birth of contemporary Chicano theater, for that is the year Luis Valdez approached Chávez about using theater to organize farm workers. Valdez and the members of the resulting Teatro Campesino are generally credited by scholars as having initiated the Chicano theater movement, a movement that would reach its apex in the 1970s. In the fall of 1965, Valdez gathered a group of striking farm workers and asked them to talk about their working conditions. A former farm worker himself, Valdez was no stranger to the players in the daily drama that was fieldwork. He asked people to illustrate what happened on the picket lines, and the less timid in the audience delighted in acting out their ridicule of the strikebreakers. Using the farm workers' basic improvisations, Valdez guided the group toward the creation of what he termed "actos," skits or sketches whose roots scholars have traced to various sources that had influenced Valdez as a student and as a member of the San Francisco Mime Troupe. Expanding beyond the initial setting of flatbed-truck stages at the fields' edges, the acto became the quintessential form of Chicano theater in the 1960s. According to Valdez, the acto should suggest a solution to the problems exposed in the brief comic statement, and, as with any good political theater, it should satirize the opposition and inspire the audience to social action. Because actos were based on participants' personal experiences, they had palpable immediacy. In her book El Teatro Campesino, Yolanda Broyles-González rightly criticizes theater historians for having tended to credit Valdez individually with inventing actos as a genre, as if the striking farm workers' improvisational talent had depended entirely on his vision and expertise for the form it took. She traces especially the actos' connections to a similar genre of informal, often satirical shows known as carpas that were performed in tents to mainly working-class audiences. Carpas had flourished earlier in the twentieth century in the border area of Mexico and the United States. Many participants in the formation of the Teatro no doubt had substantial cultural links to this tradition and likely adapted it to their improvisations. The early development of the Teatro Campesino was, in fact, a collective accomplishment; still, Valdez's artistic contribution was a crucial one, for the resulting actos were neither carpas nor theater in the European tradition of Valdez's academic training, but a distinctive genre with connections to both.
201006_4-RC_3_18
[ "the influences that shaped carpas as a dramatic genre", "the motives of theater historians in exaggerating the originality of Valdez", "the significance of carpas for the development of the genre of the acto", "the extent of Valdez's acquaintance with carpas as a dramatic form", "the role of the European tradition in shaping Valdez's contribution to the development of actos" ]
2
Based on the passage, it can be concluded that the author and Broyles-González hold essentially the same attitude toward
Most scholars of Mexican American history mark César Chávez's unionizing efforts among Mexican and Mexican American farm laborers in California as the beginning of Chicano political activism in the 1960s. By 1965, Chávez's United Farm Workers Union gained international recognition by initiating a worldwide boycott of grapes in an effort to get growers in California to sign union contracts. The year 1965 also marks the birth of contemporary Chicano theater, for that is the year Luis Valdez approached Chávez about using theater to organize farm workers. Valdez and the members of the resulting Teatro Campesino are generally credited by scholars as having initiated the Chicano theater movement, a movement that would reach its apex in the 1970s. In the fall of 1965, Valdez gathered a group of striking farm workers and asked them to talk about their working conditions. A former farm worker himself, Valdez was no stranger to the players in the daily drama that was fieldwork. He asked people to illustrate what happened on the picket lines, and the less timid in the audience delighted in acting out their ridicule of the strikebreakers. Using the farm workers' basic improvisations, Valdez guided the group toward the creation of what he termed "actos," skits or sketches whose roots scholars have traced to various sources that had influenced Valdez as a student and as a member of the San Francisco Mime Troupe. Expanding beyond the initial setting of flatbed-truck stages at the fields' edges, the acto became the quintessential form of Chicano theater in the 1960s. According to Valdez, the acto should suggest a solution to the problems exposed in the brief comic statement, and, as with any good political theater, it should satirize the opposition and inspire the audience to social action. Because actos were based on participants' personal experiences, they had palpable immediacy. In her book El Teatro Campesino, Yolanda Broyles-González rightly criticizes theater historians for having tended to credit Valdez individually with inventing actos as a genre, as if the striking farm workers' improvisational talent had depended entirely on his vision and expertise for the form it took. She traces especially the actos' connections to a similar genre of informal, often satirical shows known as carpas that were performed in tents to mainly working-class audiences. Carpas had flourished earlier in the twentieth century in the border area of Mexico and the United States. Many participants in the formation of the Teatro no doubt had substantial cultural links to this tradition and likely adapted it to their improvisations. The early development of the Teatro Campesino was, in fact, a collective accomplishment; still, Valdez's artistic contribution was a crucial one, for the resulting actos were neither carpas nor theater in the European tradition of Valdez's academic training, but a distinctive genre with connections to both.
201006_4-RC_3_19
[ "Its efforts to organize farm workers eventually won the acceptance of a few farm owners in California.", "It included among its members a number of individuals who, like Valdez, had previously belonged to the San Francisco Mime Troupe.", "It did not play a major role in the earliest efforts of the United Farm Workers Union to achieve international recognition.", "Although its first performances were entirely in Spanish, it eventually gave some performances partially in English, for the benefit of non-Spanish-speaking audiences.", "Its work drew praise not only from critics in the United States but from critics in Mexico as well." ]
2
The information in the passage most strongly supports which one of the following statements regarding the Teatro Campesino?
Most scholars of Mexican American history mark César Chávez's unionizing efforts among Mexican and Mexican American farm laborers in California as the beginning of Chicano political activism in the 1960s. By 1965, Chávez's United Farm Workers Union gained international recognition by initiating a worldwide boycott of grapes in an effort to get growers in California to sign union contracts. The year 1965 also marks the birth of contemporary Chicano theater, for that is the year Luis Valdez approached Chávez about using theater to organize farm workers. Valdez and the members of the resulting Teatro Campesino are generally credited by scholars as having initiated the Chicano theater movement, a movement that would reach its apex in the 1970s. In the fall of 1965, Valdez gathered a group of striking farm workers and asked them to talk about their working conditions. A former farm worker himself, Valdez was no stranger to the players in the daily drama that was fieldwork. He asked people to illustrate what happened on the picket lines, and the less timid in the audience delighted in acting out their ridicule of the strikebreakers. Using the farm workers' basic improvisations, Valdez guided the group toward the creation of what he termed "actos," skits or sketches whose roots scholars have traced to various sources that had influenced Valdez as a student and as a member of the San Francisco Mime Troupe. Expanding beyond the initial setting of flatbed-truck stages at the fields' edges, the acto became the quintessential form of Chicano theater in the 1960s. According to Valdez, the acto should suggest a solution to the problems exposed in the brief comic statement, and, as with any good political theater, it should satirize the opposition and inspire the audience to social action. Because actos were based on participants' personal experiences, they had palpable immediacy. In her book El Teatro Campesino, Yolanda Broyles-González rightly criticizes theater historians for having tended to credit Valdez individually with inventing actos as a genre, as if the striking farm workers' improvisational talent had depended entirely on his vision and expertise for the form it took. She traces especially the actos' connections to a similar genre of informal, often satirical shows known as carpas that were performed in tents to mainly working-class audiences. Carpas had flourished earlier in the twentieth century in the border area of Mexico and the United States. Many participants in the formation of the Teatro no doubt had substantial cultural links to this tradition and likely adapted it to their improvisations. The early development of the Teatro Campesino was, in fact, a collective accomplishment; still, Valdez's artistic contribution was a crucial one, for the resulting actos were neither carpas nor theater in the European tradition of Valdez's academic training, but a distinctive genre with connections to both.
201006_4-RC_3_20
[ "The carpas tradition has been widely discussed and analyzed by both U.S. and Mexican theater historians concerned with theatrical performance styles and methods.", "Comedy was a prominent feature of Chicano theater in the 1960s.", "In directing the actos of the Teatro Campesino, Valdez went to great lengths to simulate or recreate certain aspects of what audiences had experienced in the carpas.", "Many of the earliest actos were based on scripts composed by Valdez, which the farm-worker actors modified to suit their own diverse aesthetic and pragmatic interests.", "By the early 1970s, Valdez was using actos as the basis for other theatrical endeavors and was no longer directly associated with the Teatro Campesino." ]
1
The passage most strongly supports which one of the following?
In October 1999, the Law Reform Commission of Western Australia (LRCWA) issued its report, "Review of the Civil and Criminal Justice System." Buried within its 400 pages are several important recommendations for introducing contingency fees for lawyers' services into the state of Western Australia. Contingency-fee agreements call for payment only if the lawyer is successful in the case. Because of the lawyer's risk of financial loss, such charges generally exceed regular fees. Although there are various types of contingency-fee arrangements, the LRCWA has recommended that only one type be introduced: "uplift" fee arrangements, which in the case of a successful outcome require the client to pay the lawyer's normal fee plus an agreed-upon additional percentage of that fee. This restriction is intended to prevent lawyers from gaining disproportionately from awards of damages and thus to ensure that just compensation to plaintiffs is not eroded. A further measure toward this end is found in the recommendation that contingency-fee agreements should be permitted only in cases where two conditions are satisfied: first, the contingency-fee arrangement must be used only as a last resort when all means of avoiding such an arrangement have been exhausted; and second, the lawyer must be satisfied that the client is financially unable to pay the fee in the event that sufficient damages are not awarded. Unfortunately, under this recommendation, lawyers wishing to enter into an uplift fee arrangement would be forced to investigate not only the legal issues affecting any proposed litigation, but also the financial circumstances of the potential client and the probable cost of the litigation. This process would likely be onerous for a number of reasons, not least of which is the fact that the final cost of litigation depends in large part on factors that may change as the case unfolds, such as strategies adopted by the opposing side. In addition to being burdensome for lawyers, the proposal to make contingency-fee agreements available only to the least well-off clients would be unfair to other clients. This restriction would unjustly limit freedom of contract and would, in effect, make certain types of litigation inaccessible to middle-income people or even wealthy people who might not be able to liquidate assets to pay the costs of a trial. More importantly, the primary reasons for entering into contingency-fee agreements hold for all clients. First, they provide financing for the costs of pursuing a legal action. Second, they shift the risk of not recovering those costs, and of not obtaining a damages award that will pay their lawyer's fees, from the client to the lawyer. Finally, given the convergence of the lawyer's interest and the client's interest under a contingency-fee arrangement, it is reasonable to assume that such arrangements increase lawyers' diligence and commitment to their cases.
201006_4-RC_4_21
[ "People who join together to share the costs of purchasing lottery tickets on a regular basis agree to share any eventual proceeds from a lottery drawing in proportion to the amounts they contributed to tickets purchased for that drawing.", "A consulting firm reviews a company's operations. The consulting firm will receive payment only if it can substantially reduce the company's operating expenses, in which case it will be paid double its usual fee.", "The returns that accrue from the assumption of a large financial risk by members of a business partnership formed to develop and market a new invention are divided among them in proportion to the amount of financial risk each assumed.", "The cost of an insurance policy is determined by reference to the likelihood and magnitude of an eventual loss covered by the insurance policy and the administrative and marketing costs involved in marketing and servicing the insurance policy.", "A person purchasing a property receives a loan for the purchase from the seller. In order to reduce risk, the seller requires the buyer to pay for an insurance policy that will pay off the loan if the buyer is unable to do so." ]
1
As described in the passage, the uplift fee agreements that the LRCWA's report recommends are most closely analogous to which one of the following arrangements?
In October 1999, the Law Reform Commission of Western Australia (LRCWA) issued its report, "Review of the Civil and Criminal Justice System." Buried within its 400 pages are several important recommendations for introducing contingency fees for lawyers' services into the state of Western Australia. Contingency-fee agreements call for payment only if the lawyer is successful in the case. Because of the lawyer's risk of financial loss, such charges generally exceed regular fees. Although there are various types of contingency-fee arrangements, the LRCWA has recommended that only one type be introduced: "uplift" fee arrangements, which in the case of a successful outcome require the client to pay the lawyer's normal fee plus an agreed-upon additional percentage of that fee. This restriction is intended to prevent lawyers from gaining disproportionately from awards of damages and thus to ensure that just compensation to plaintiffs is not eroded. A further measure toward this end is found in the recommendation that contingency-fee agreements should be permitted only in cases where two conditions are satisfied: first, the contingency-fee arrangement must be used only as a last resort when all means of avoiding such an arrangement have been exhausted; and second, the lawyer must be satisfied that the client is financially unable to pay the fee in the event that sufficient damages are not awarded. Unfortunately, under this recommendation, lawyers wishing to enter into an uplift fee arrangement would be forced to investigate not only the legal issues affecting any proposed litigation, but also the financial circumstances of the potential client and the probable cost of the litigation. This process would likely be onerous for a number of reasons, not least of which is the fact that the final cost of litigation depends in large part on factors that may change as the case unfolds, such as strategies adopted by the opposing side. In addition to being burdensome for lawyers, the proposal to make contingency-fee agreements available only to the least well-off clients would be unfair to other clients. This restriction would unjustly limit freedom of contract and would, in effect, make certain types of litigation inaccessible to middle-income people or even wealthy people who might not be able to liquidate assets to pay the costs of a trial. More importantly, the primary reasons for entering into contingency-fee agreements hold for all clients. First, they provide financing for the costs of pursuing a legal action. Second, they shift the risk of not recovering those costs, and of not obtaining a damages award that will pay their lawyer's fees, from the client to the lawyer. Finally, given the convergence of the lawyer's interest and the client's interest under a contingency-fee arrangement, it is reasonable to assume that such arrangements increase lawyers' diligence and commitment to their cases.
201006_4-RC_4_22
[ "Contingency-fee agreements serve the purpose of transferring the risk of pursuing a legal action from the client to the lawyer.", "Contingency-fee agreements of the kind the LRCWA's report recommends would normally not result in lawyers being paid larger fees than they deserve.", "At least some of the recommendations in the LRCWA's report are likely to be incorporated into the legal system in the state of Western Australia.", "Allowing contingency-fee agreements of the sort recommended in the LRCWA's report would not affect lawyers' diligence and commitment to their cases.", "Usually contingency-fee agreements involve an agreement that the fee the lawyer receives will be an agreed-upon percentage of the client's damages." ]
0
The passage states which one of the following?
In October 1999, the Law Reform Commission of Western Australia (LRCWA) issued its report, "Review of the Civil and Criminal Justice System." Buried within its 400 pages are several important recommendations for introducing contingency fees for lawyers' services into the state of Western Australia. Contingency-fee agreements call for payment only if the lawyer is successful in the case. Because of the lawyer's risk of financial loss, such charges generally exceed regular fees. Although there are various types of contingency-fee arrangements, the LRCWA has recommended that only one type be introduced: "uplift" fee arrangements, which in the case of a successful outcome require the client to pay the lawyer's normal fee plus an agreed-upon additional percentage of that fee. This restriction is intended to prevent lawyers from gaining disproportionately from awards of damages and thus to ensure that just compensation to plaintiffs is not eroded. A further measure toward this end is found in the recommendation that contingency-fee agreements should be permitted only in cases where two conditions are satisfied: first, the contingency-fee arrangement must be used only as a last resort when all means of avoiding such an arrangement have been exhausted; and second, the lawyer must be satisfied that the client is financially unable to pay the fee in the event that sufficient damages are not awarded. Unfortunately, under this recommendation, lawyers wishing to enter into an uplift fee arrangement would be forced to investigate not only the legal issues affecting any proposed litigation, but also the financial circumstances of the potential client and the probable cost of the litigation. This process would likely be onerous for a number of reasons, not least of which is the fact that the final cost of litigation depends in large part on factors that may change as the case unfolds, such as strategies adopted by the opposing side. In addition to being burdensome for lawyers, the proposal to make contingency-fee agreements available only to the least well-off clients would be unfair to other clients. This restriction would unjustly limit freedom of contract and would, in effect, make certain types of litigation inaccessible to middle-income people or even wealthy people who might not be able to liquidate assets to pay the costs of a trial. More importantly, the primary reasons for entering into contingency-fee agreements hold for all clients. First, they provide financing for the costs of pursuing a legal action. Second, they shift the risk of not recovering those costs, and of not obtaining a damages award that will pay their lawyer's fees, from the client to the lawyer. Finally, given the convergence of the lawyer's interest and the client's interest under a contingency-fee arrangement, it is reasonable to assume that such arrangements increase lawyers' diligence and commitment to their cases.
201006_4-RC_4_23
[ "defend a proposed reform against criticism", "identify the current shortcomings of a legal system and suggest how these should be remedied", "support the view that a recommended change would actually worsen the situation it was intended to improve", "show that a legal system would not be significantly changed if certain proposed reforms were enacted", "explain a suggested reform and critically evaluate it" ]
4
The author's main purpose in the passage is to
In October 1999, the Law Reform Commission of Western Australia (LRCWA) issued its report, "Review of the Civil and Criminal Justice System." Buried within its 400 pages are several important recommendations for introducing contingency fees for lawyers' services into the state of Western Australia. Contingency-fee agreements call for payment only if the lawyer is successful in the case. Because of the lawyer's risk of financial loss, such charges generally exceed regular fees. Although there are various types of contingency-fee arrangements, the LRCWA has recommended that only one type be introduced: "uplift" fee arrangements, which in the case of a successful outcome require the client to pay the lawyer's normal fee plus an agreed-upon additional percentage of that fee. This restriction is intended to prevent lawyers from gaining disproportionately from awards of damages and thus to ensure that just compensation to plaintiffs is not eroded. A further measure toward this end is found in the recommendation that contingency-fee agreements should be permitted only in cases where two conditions are satisfied: first, the contingency-fee arrangement must be used only as a last resort when all means of avoiding such an arrangement have been exhausted; and second, the lawyer must be satisfied that the client is financially unable to pay the fee in the event that sufficient damages are not awarded. Unfortunately, under this recommendation, lawyers wishing to enter into an uplift fee arrangement would be forced to investigate not only the legal issues affecting any proposed litigation, but also the financial circumstances of the potential client and the probable cost of the litigation. This process would likely be onerous for a number of reasons, not least of which is the fact that the final cost of litigation depends in large part on factors that may change as the case unfolds, such as strategies adopted by the opposing side. In addition to being burdensome for lawyers, the proposal to make contingency-fee agreements available only to the least well-off clients would be unfair to other clients. This restriction would unjustly limit freedom of contract and would, in effect, make certain types of litigation inaccessible to middle-income people or even wealthy people who might not be able to liquidate assets to pay the costs of a trial. More importantly, the primary reasons for entering into contingency-fee agreements hold for all clients. First, they provide financing for the costs of pursuing a legal action. Second, they shift the risk of not recovering those costs, and of not obtaining a damages award that will pay their lawyer's fees, from the client to the lawyer. Finally, given the convergence of the lawyer's interest and the client's interest under a contingency-fee arrangement, it is reasonable to assume that such arrangements increase lawyers' diligence and commitment to their cases.
201006_4-RC_4_24
[ "The length of time that a trial may last is difficult to predict in advance.", "Not all prospective clients would wish to reveal detailed information about their financial circumstances.", "Some factors that may affect the cost of litigation can change after the litigation begins.", "Uplift agreements should only be used as a last resort.", "Investigating whether a client is qualified to enter into an uplift agreement would take time away from investigating the legal issues of the case." ]
2
Which one of the following is given by the passage as a reason for the difficulty a lawyer would have in determining whether—according to the LRCWA's recommendations—a prospective client was qualified to enter into an uplift agreement?
In October 1999, the Law Reform Commission of Western Australia (LRCWA) issued its report, "Review of the Civil and Criminal Justice System." Buried within its 400 pages are several important recommendations for introducing contingency fees for lawyers' services into the state of Western Australia. Contingency-fee agreements call for payment only if the lawyer is successful in the case. Because of the lawyer's risk of financial loss, such charges generally exceed regular fees. Although there are various types of contingency-fee arrangements, the LRCWA has recommended that only one type be introduced: "uplift" fee arrangements, which in the case of a successful outcome require the client to pay the lawyer's normal fee plus an agreed-upon additional percentage of that fee. This restriction is intended to prevent lawyers from gaining disproportionately from awards of damages and thus to ensure that just compensation to plaintiffs is not eroded. A further measure toward this end is found in the recommendation that contingency-fee agreements should be permitted only in cases where two conditions are satisfied: first, the contingency-fee arrangement must be used only as a last resort when all means of avoiding such an arrangement have been exhausted; and second, the lawyer must be satisfied that the client is financially unable to pay the fee in the event that sufficient damages are not awarded. Unfortunately, under this recommendation, lawyers wishing to enter into an uplift fee arrangement would be forced to investigate not only the legal issues affecting any proposed litigation, but also the financial circumstances of the potential client and the probable cost of the litigation. This process would likely be onerous for a number of reasons, not least of which is the fact that the final cost of litigation depends in large part on factors that may change as the case unfolds, such as strategies adopted by the opposing side. In addition to being burdensome for lawyers, the proposal to make contingency-fee agreements available only to the least well-off clients would be unfair to other clients. This restriction would unjustly limit freedom of contract and would, in effect, make certain types of litigation inaccessible to middle-income people or even wealthy people who might not be able to liquidate assets to pay the costs of a trial. More importantly, the primary reasons for entering into contingency-fee agreements hold for all clients. First, they provide financing for the costs of pursuing a legal action. Second, they shift the risk of not recovering those costs, and of not obtaining a damages award that will pay their lawyer's fees, from the client to the lawyer. Finally, given the convergence of the lawyer's interest and the client's interest under a contingency-fee arrangement, it is reasonable to assume that such arrangements increase lawyers' diligence and commitment to their cases.
201006_4-RC_4_25
[ "receiving a payment that is of greater monetary value than the legal services rendered by the lawyer", "receiving a higher portion of the total amount awarded in damages than is reasonable compensation for the professional services rendered and the amount of risk assumed", "receiving a higher proportion of the damages awarded to the client than the client considers fair", "receiving a payment that is higher than the lawyer would have received had the client's case been unsuccessful", "receiving a higher proportion of the damages awarded to the client than the judge or the jury that awarded the damages intended the lawyer to receive" ]
1
The phrase "gaining disproportionately from awards of damages" (lines 18–19) is most likely intended by the author to mean
In October 1999, the Law Reform Commission of Western Australia (LRCWA) issued its report, "Review of the Civil and Criminal Justice System." Buried within its 400 pages are several important recommendations for introducing contingency fees for lawyers' services into the state of Western Australia. Contingency-fee agreements call for payment only if the lawyer is successful in the case. Because of the lawyer's risk of financial loss, such charges generally exceed regular fees. Although there are various types of contingency-fee arrangements, the LRCWA has recommended that only one type be introduced: "uplift" fee arrangements, which in the case of a successful outcome require the client to pay the lawyer's normal fee plus an agreed-upon additional percentage of that fee. This restriction is intended to prevent lawyers from gaining disproportionately from awards of damages and thus to ensure that just compensation to plaintiffs is not eroded. A further measure toward this end is found in the recommendation that contingency-fee agreements should be permitted only in cases where two conditions are satisfied: first, the contingency-fee arrangement must be used only as a last resort when all means of avoiding such an arrangement have been exhausted; and second, the lawyer must be satisfied that the client is financially unable to pay the fee in the event that sufficient damages are not awarded. Unfortunately, under this recommendation, lawyers wishing to enter into an uplift fee arrangement would be forced to investigate not only the legal issues affecting any proposed litigation, but also the financial circumstances of the potential client and the probable cost of the litigation. This process would likely be onerous for a number of reasons, not least of which is the fact that the final cost of litigation depends in large part on factors that may change as the case unfolds, such as strategies adopted by the opposing side. In addition to being burdensome for lawyers, the proposal to make contingency-fee agreements available only to the least well-off clients would be unfair to other clients. This restriction would unjustly limit freedom of contract and would, in effect, make certain types of litigation inaccessible to middle-income people or even wealthy people who might not be able to liquidate assets to pay the costs of a trial. More importantly, the primary reasons for entering into contingency-fee agreements hold for all clients. First, they provide financing for the costs of pursuing a legal action. Second, they shift the risk of not recovering those costs, and of not obtaining a damages award that will pay their lawyer's fees, from the client to the lawyer. Finally, given the convergence of the lawyer's interest and the client's interest under a contingency-fee arrangement, it is reasonable to assume that such arrangements increase lawyers' diligence and commitment to their cases.
201006_4-RC_4_26
[ "be used only when it is reasonable to think that such arrangements will increase lawyers' diligence and commitment to their cases", "be used only in cases in which clients are unlikely to be awarded enormous damages", "be used if the lawyer is not certain that the client seeking to file a lawsuit could pay the lawyer's regular fee if the suit were to be unsuccessful", "not be used in cases in which another type of arrangement is practicable", "not be used except in cases where the lawyer is reasonably sure that the client will win damages sufficiently large to cover the lawyer's fees" ]
3
According to the passage, the LRCWA's report recommended that contingency-fee agreements
In October 1999, the Law Reform Commission of Western Australia (LRCWA) issued its report, "Review of the Civil and Criminal Justice System." Buried within its 400 pages are several important recommendations for introducing contingency fees for lawyers' services into the state of Western Australia. Contingency-fee agreements call for payment only if the lawyer is successful in the case. Because of the lawyer's risk of financial loss, such charges generally exceed regular fees. Although there are various types of contingency-fee arrangements, the LRCWA has recommended that only one type be introduced: "uplift" fee arrangements, which in the case of a successful outcome require the client to pay the lawyer's normal fee plus an agreed-upon additional percentage of that fee. This restriction is intended to prevent lawyers from gaining disproportionately from awards of damages and thus to ensure that just compensation to plaintiffs is not eroded. A further measure toward this end is found in the recommendation that contingency-fee agreements should be permitted only in cases where two conditions are satisfied: first, the contingency-fee arrangement must be used only as a last resort when all means of avoiding such an arrangement have been exhausted; and second, the lawyer must be satisfied that the client is financially unable to pay the fee in the event that sufficient damages are not awarded. Unfortunately, under this recommendation, lawyers wishing to enter into an uplift fee arrangement would be forced to investigate not only the legal issues affecting any proposed litigation, but also the financial circumstances of the potential client and the probable cost of the litigation. This process would likely be onerous for a number of reasons, not least of which is the fact that the final cost of litigation depends in large part on factors that may change as the case unfolds, such as strategies adopted by the opposing side. In addition to being burdensome for lawyers, the proposal to make contingency-fee agreements available only to the least well-off clients would be unfair to other clients. This restriction would unjustly limit freedom of contract and would, in effect, make certain types of litigation inaccessible to middle-income people or even wealthy people who might not be able to liquidate assets to pay the costs of a trial. More importantly, the primary reasons for entering into contingency-fee agreements hold for all clients. First, they provide financing for the costs of pursuing a legal action. Second, they shift the risk of not recovering those costs, and of not obtaining a damages award that will pay their lawyer's fees, from the client to the lawyer. Finally, given the convergence of the lawyer's interest and the client's interest under a contingency-fee arrangement, it is reasonable to assume that such arrangements increase lawyers' diligence and commitment to their cases.
201006_4-RC_4_27
[ "The proportion of lawsuits filed by the least well-off litigants tends to be higher in areas where uplift fee arrangements have been widely used than in areas in which uplift agreements have not been used.", "Before the LRCWA's recommendations, lawyers in Western Australia generally made a careful evaluation of prospective clients' financial circumstances before accepting cases that might involve complex or protracted litigation.", "There is strong opposition in Western Australia to any legal reform perceived as favoring lawyers, so it is highly unlikely that the LRCWA's recommendations concerning contingency-fee agreements will be implemented.", "The total fees charged by lawyers who successfully litigate cases under uplift fee arrangements are, on average, only marginally higher than the total fees charged by lawyers who litigate cases without contingency agreements.", "In most jurisdictions in which contingency-fee agreements are allowed, those of the uplift variety are used much less often than are other types of contingency-fee agreements." ]
1
Which one of the following, if true, most seriously undermines the author's criticism of the LRCWA's recommendations concerning contingency-fee agreements?
The Universal Declaration of Human Rights (UDHR), approved by the United Nations General Assembly in 1948, was the first international treaty to expressly affirm universal respect for human rights. Prior to 1948 no truly international standard of humanitarian beliefs existed. Although Article 1 of the 1945 UN Charter had been written with the express purpose of obligating the UN to "encourage respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion," there were members of delegations from various small countries and representatives of several nongovernmental organizations who felt that the language of Article 1 was not strong enough, and that the Charter as a whole did not go far enough in its efforts to guarantee basic human rights. This group lobbied vigorously to strengthen the Charter's human rights provisions and proposed that member states be required "to take separate and joint action and to co-operate with the organization for the promotion of human rights." This would have implied an obligation for member states to act on human rights issues. Ultimately, this proposal and others like it were not adopted; instead, the UDHR was commissioned and drafted. The original mandate for producing the document was given to the UN Commission on Human Rights in February 1946. Between that time and the General Assembly's final approval of the document, the UDHR passed through an elaborate eight-stage drafting process in which it made its way through almost every level of the UN hierarchy. The articles were debated at each stage, and all 30 articles were argued passionately by delegates representing diverse ideologies, traditions, and cultures. The document as it was finally approved set forth the essential principles of freedom and equality for everyone—regardless of sex, race, color, language, religion, political or other opinion, national or social origin, property, birth or other status. It also asserted a number of fundamental human rights, including among others the right to work, the right to rest and leisure, and the right to education. While the UDHR is in many ways a progressive document, it also has weaknesses, the most regrettable of which is its nonbinding legal status. For all its strong language and high ideals, the UDHR remains a resolution of a purely programmatic nature. Nevertheless, the document has led, even if belatedly, to the creation of legally binding human rights conventions, and it clearly deserves recognition as an international standard-setting piece of work, as a set of aspirations to which UN member states are intended to strive, and as a call to arms in the name of humanity, justice, and freedom.
201010_1-RC_1_1
[ "the likelihood that the document will inspire innovative government programs designed to safeguard human rights", "the ability of the document's drafters to translate abstract ideals into concrete standards", "the compromises that went into producing a version of the document that would garner the approval of all relevant parties", "the fact that the guidelines established by the document are ultimately unenforceable", "the frustration experienced by the document's drafters at stubborn resistance from within the UN hierarchy" ]
3
By referring to the Universal Declaration of Human Rights as "purely programmatic" (line 49) in nature, the author most likely intends to emphasize
The Universal Declaration of Human Rights (UDHR), approved by the United Nations General Assembly in 1948, was the first international treaty to expressly affirm universal respect for human rights. Prior to 1948 no truly international standard of humanitarian beliefs existed. Although Article 1 of the 1945 UN Charter had been written with the express purpose of obligating the UN to "encourage respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion," there were members of delegations from various small countries and representatives of several nongovernmental organizations who felt that the language of Article 1 was not strong enough, and that the Charter as a whole did not go far enough in its efforts to guarantee basic human rights. This group lobbied vigorously to strengthen the Charter's human rights provisions and proposed that member states be required "to take separate and joint action and to co-operate with the organization for the promotion of human rights." This would have implied an obligation for member states to act on human rights issues. Ultimately, this proposal and others like it were not adopted; instead, the UDHR was commissioned and drafted. The original mandate for producing the document was given to the UN Commission on Human Rights in February 1946. Between that time and the General Assembly's final approval of the document, the UDHR passed through an elaborate eight-stage drafting process in which it made its way through almost every level of the UN hierarchy. The articles were debated at each stage, and all 30 articles were argued passionately by delegates representing diverse ideologies, traditions, and cultures. The document as it was finally approved set forth the essential principles of freedom and equality for everyone—regardless of sex, race, color, language, religion, political or other opinion, national or social origin, property, birth or other status. It also asserted a number of fundamental human rights, including among others the right to work, the right to rest and leisure, and the right to education. While the UDHR is in many ways a progressive document, it also has weaknesses, the most regrettable of which is its nonbinding legal status. For all its strong language and high ideals, the UDHR remains a resolution of a purely programmatic nature. Nevertheless, the document has led, even if belatedly, to the creation of legally binding human rights conventions, and it clearly deserves recognition as an international standard-setting piece of work, as a set of aspirations to which UN member states are intended to strive, and as a call to arms in the name of humanity, justice, and freedom.
201010_1-RC_1_2
[ "to contrast the different definitions of human rights in the two documents", "to compare the strength of the human rights language in the two documents", "to identify a bureaucratic vocabulary that is common to the two documents", "to highlight what the author believes to be the most important point in each document", "to call attention to a significant difference in the prose styles of the two documents" ]
1
The author most probably quotes directly from both the UN Charter (lines 8–11) and the proposal mentioned in lines 20–22 for which one of the following reasons?
The Universal Declaration of Human Rights (UDHR), approved by the United Nations General Assembly in 1948, was the first international treaty to expressly affirm universal respect for human rights. Prior to 1948 no truly international standard of humanitarian beliefs existed. Although Article 1 of the 1945 UN Charter had been written with the express purpose of obligating the UN to "encourage respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion," there were members of delegations from various small countries and representatives of several nongovernmental organizations who felt that the language of Article 1 was not strong enough, and that the Charter as a whole did not go far enough in its efforts to guarantee basic human rights. This group lobbied vigorously to strengthen the Charter's human rights provisions and proposed that member states be required "to take separate and joint action and to co-operate with the organization for the promotion of human rights." This would have implied an obligation for member states to act on human rights issues. Ultimately, this proposal and others like it were not adopted; instead, the UDHR was commissioned and drafted. The original mandate for producing the document was given to the UN Commission on Human Rights in February 1946. Between that time and the General Assembly's final approval of the document, the UDHR passed through an elaborate eight-stage drafting process in which it made its way through almost every level of the UN hierarchy. The articles were debated at each stage, and all 30 articles were argued passionately by delegates representing diverse ideologies, traditions, and cultures. The document as it was finally approved set forth the essential principles of freedom and equality for everyone—regardless of sex, race, color, language, religion, political or other opinion, national or social origin, property, birth or other status. It also asserted a number of fundamental human rights, including among others the right to work, the right to rest and leisure, and the right to education. While the UDHR is in many ways a progressive document, it also has weaknesses, the most regrettable of which is its nonbinding legal status. For all its strong language and high ideals, the UDHR remains a resolution of a purely programmatic nature. Nevertheless, the document has led, even if belatedly, to the creation of legally binding human rights conventions, and it clearly deserves recognition as an international standard-setting piece of work, as a set of aspirations to which UN member states are intended to strive, and as a call to arms in the name of humanity, justice, and freedom.
201010_1-RC_1_3
[ "unbridled enthusiasm", "qualified approval", "absolute neutrality", "reluctant rejection", "strong hostility" ]
1
The author's stance toward the Universal Declaration of Human Rights can best be described as
The Universal Declaration of Human Rights (UDHR), approved by the United Nations General Assembly in 1948, was the first international treaty to expressly affirm universal respect for human rights. Prior to 1948 no truly international standard of humanitarian beliefs existed. Although Article 1 of the 1945 UN Charter had been written with the express purpose of obligating the UN to "encourage respect for human rights and for fundamental freedoms for all without distinction as to race, sex, language, or religion," there were members of delegations from various small countries and representatives of several nongovernmental organizations who felt that the language of Article 1 was not strong enough, and that the Charter as a whole did not go far enough in its efforts to guarantee basic human rights. This group lobbied vigorously to strengthen the Charter's human rights provisions and proposed that member states be required "to take separate and joint action and to co-operate with the organization for the promotion of human rights." This would have implied an obligation for member states to act on human rights issues. Ultimately, this proposal and others like it were not adopted; instead, the UDHR was commissioned and drafted. The original mandate for producing the document was given to the UN Commission on Human Rights in February 1946. Between that time and the General Assembly's final approval of the document, the UDHR passed through an elaborate eight-stage drafting process in which it made its way through almost every level of the UN hierarchy. The articles were debated at each stage, and all 30 articles were argued passionately by delegates representing diverse ideologies, traditions, and cultures. The document as it was finally approved set forth the essential principles of freedom and equality for everyone—regardless of sex, race, color, language, religion, political or other opinion, national or social origin, property, birth or other status. It also asserted a number of fundamental human rights, including among others the right to work, the right to rest and leisure, and the right to education. While the UDHR is in many ways a progressive document, it also has weaknesses, the most regrettable of which is its nonbinding legal status. For all its strong language and high ideals, the UDHR remains a resolution of a purely programmatic nature. Nevertheless, the document has led, even if belatedly, to the creation of legally binding human rights conventions, and it clearly deserves recognition as an international standard-setting piece of work, as a set of aspirations to which UN member states are intended to strive, and as a call to arms in the name of humanity, justice, and freedom.
201010_1-RC_1_4
[ "It asserts a right to rest and leisure.", "It was drafted after the UN Charter was drafted.", "The UN Commission on Human Rights was charged with producing it.", "It has had no practical consequences.", "It was the first international treaty to explicitly affirm universal respect for human rights." ]
3
According to the passage, each of the following is true of the Universal Declaration of Human Rights EXCEPT:
201010_1-RC_1_5
[ "The human rights language contained in Article 1 of the UN Charter is so ambiguous as to be almost wholly ineffectual.", "The weaknesses of the Universal Declaration of Human Rights generally outweigh the strengths of the document.", "It was relatively easy for the drafters of the Universal Declaration of Human Rights to reach a consensus concerning the contents of the document.", "The drafters of the Universal Declaration of Human Rights omitted important rights that should be included in a truly comprehensive list of basic human rights.", "The Universal Declaration of Human Rights would be truer to the intentions of its staunchest proponents if UN member countries were required by law to abide by its provisions." ]
4
The author would be most likely to agree with which one of the following statements?
201010_1-RC_1_6
[ "The UN General Assembly authenticates the evidence and then insists upon prompt remedial action on the part of the government of the member state.", "The UN General Assembly stipulates that any proposed response must be unanimously accepted by member states before it can be implemented.", "The UN issues a report critical of the actions of the member state in question and calls for a censure vote in the General Assembly.", "The situation is regarded by the UN as an internal matter that is best left to the discretion of the government of the member state.", "The situation is investigated further by nongovernmental humanitarian organizations that promise to disclose their findings to the public via the international media." ]
0
Suppose that a group of independent journalists has uncovered evidence of human rights abuses being perpetrated by a security agency of a UN member state upon a group of political dissidents. Which one of the following approaches to the situation would most likely be advocated by present-day delegates who share the views of the delegates and representatives mentioned in lines 11–14?
It is commonly assumed that even if some forgeries have aesthetic merit, no forgery has as much as an original by the imitated artist would. Yet even the most prominent art specialists can be duped by a talented artist turned forger into mistaking an almost perfect forgery for an original. For instance, artist Han van Meegeren's The Disciples at Emmaus (1937)—painted under the forged signature of the acclaimed Dutch master Jan Vermeer (1632–1675)—attracted lavish praise from experts as one of Vermeer's finest works. The painting hung in a Rotterdam museum until 1945, when, to the great embarrassment of the critics, van Meegeren revealed its origin. Astonishingly, there was at least one highly reputed critic who persisted in believing it to be a Vermeer even after van Meegeren's confession. Given the experts' initial enthusiasm, some philosophers argue that van Meegeren's painting must have possessed aesthetic characteristics that, in a Vermeer original, would have justified the critics' plaudits. Van Meegeren's Emmaus thus raises difficult questions regarding the status of superbly executed forgeries. Is a forgery inherently inferior as art? How are we justified, if indeed we are, in revising downwards our critical assessment of a work unmasked as a forgery? Philosopher of art Alfred Lessing proposes convincing answers to these questions. A forged work is indeed inferior as art, Lessing argues, but not because of a shortfall in aesthetic qualities strictly defined, that is to say, in the qualities perceptible on the picture's surface. For example, in its composition, its technique, and its brilliant use of color, van Meegeren's work is flawless, even beautiful. Lessing argues instead that the deficiency lies in what might be called the painting's intangible qualities. All art, explains Lessing, involves technique, but not all art involves origination of a new vision, and originality of vision is one of the fundamental qualities by which artistic, as opposed to purely aesthetic, accomplishment is measured. Thus Vermeer is acclaimed for having inaugurated, in the seventeenth century, a new way of seeing, and for pioneering techniques for embodying this new way of seeing through distinctive treatment of light, color, and form. Even if we grant that van Meegeren, with his undoubted mastery of Vermeer's innovative techniques, produced an aesthetically superior painting, he did so about three centuries after Vermeer developed the techniques in question. Whereas Vermeer's origination of these techniques in the seventeenth century represents a truly impressive and historic achievement, van Meegeren's production of The Disciples at Emmaus in the twentieth century presents nothing new or creative to the history of art. Van Meegeren's forgery therefore, for all its aesthetic merits, lacks the historical significance that makes Vermeer's work artistically great.
201010_1-RC_2_7
[ "The Disciples at Emmaus, van Meegeren's forgery of a Vermeer, was a failure in both aesthetic and artistic terms.", "The aesthetic value of a work of art is less dependent on the work's visible characteristics than on certain intangible characteristics.", "Forged artworks are artistically inferior to originals because artistic value depends in large part on originality of vision.", "The most skilled forgers can deceive even highly qualified art experts into accepting their work as original.", "Art critics tend to be unreliable judges of the aesthetic and artistic quality of works of art." ]
2
Which one of the following most accurately expresses the main point of the passage?
201010_1-RC_2_8
[ "The judgments of critics who pronounced The Disciples at Emmaus to be aesthetically superb were not invalidated by the revelation that the painting is a forgery.", "The financial value of a work of art depends more on its purely aesthetic qualities than on its originality.", "Museum curators would be better off not taking art critics' opinions into account when attempting to determine whether a work of art is authentic.", "Because it is such a skilled imitation of Vermeer, The Disciples at Emmaus is as artistically successful as are original paintings by artists who are less significant than Vermeer.", "Works of art that have little or no aesthetic value can still be said to be great achievements in artistic terms." ]
0
The passage provides the strongest support for inferring that Lessing holds which one of the following views?
201010_1-RC_2_9
[ "argue that many art critics are inflexible in their judgments", "indicate that the critics who initially praised The Disciples at Emmaus were not as knowledgeable as they appeared", "suggest that the painting may yet turn out to be a genuine Vermeer", "emphasize that the concept of forgery itself is internally incoherent", "illustrate the difficulties that skillfully executed forgeries can pose for art critics" ]
4
In the first paragraph, the author refers to a highly reputed critic's persistence in believing van Meegeren's forgery to be a genuine Vermeer primarily in order to
201010_1-RC_2_10
[ "lovers of a musical group contemptuously reject a tribute album recorded by various other musicians as a second-rate imitation", "art historians extol the work of a little-known painter as innovative until it is discovered that the painter lived much more recently than was originally thought", "diners at a famous restaurant effusively praise the food as delicious until they learn that the master chef is away for the night", "literary critics enthusiastically applaud a new novel until its author reveals that its central symbols are intended to represent political views that the critics dislike", "movie fans evaluate a particular movie more favorably than they otherwise might have because their favorite actor plays the lead role" ]
2
The reaction described in which one of the following scenarios is most analogous to the reaction of the art critics mentioned in line 13?
201010_1-RC_2_11
[ "It is probable that many paintings currently hanging in important museums are actually forgeries.", "The historical circumstances surrounding the creation of a work are important in assessing the artistic value of that work.", "The greatness of an innovative artist depends on how much influence he or she has on other artists.", "The standards according to which a work is judged to be a forgery tend to vary from one historical period to another.", "An artist who makes use of techniques developed by others cannot be said to be innovative." ]
1
The passage provides the strongest support for inferring that Lessing holds which one of the following views?
201010_1-RC_2_12
[ "In any historical period, the criteria by which a work is classified as a forgery can be a matter of considerable debate.", "An artist who uses techniques that others have developed is most likely a forger.", "A successful forger must originate a new artistic vision.", "Works of art created early in the career of a great artist are more likely than those created later to embody historic innovations.", "A painting can be a forgery even if it is not a copy of a particular original work of art." ]
4
The passage most strongly supports which one of the following statements?
201010_1-RC_2_13
[ "Many of the most accomplished art forgers have had moderately successful careers as painters of original works.", "Reproductions painted by talented young artists whose traditional training consisted in the copying of masterpieces were often seen as beautiful, but never regarded as great art.", "While experts can detect most forgeries, they can be duped by a talented forger who knows exactly what characteristics experts expect to find in the work of a particular painter.", "Most attempts at art forgery are ultimately unsuccessful because the forger has not mastered the necessary techniques.", "The criteria by which aesthetic excellence is judged change significantly from one century to another and from one culture to another." ]
1
Which one of the following, if true, would most strengthen Lessing's contention that a painting can display aesthetic excellence without possessing an equally high degree of artistic value?
Passage A One function of language is to influence others' behavior by changing what they know, believe, or desire. For humans engaged in conversation, the perception of another's mental state is perhaps the most common vocalization stimulus. While animal vocalizations may have evolved because they can potentially alter listeners' behavior to the signaler's benefit, such communication is—in contrast to human language—inadvertent, because most animals, with the possible exception of chimpanzees, cannot attribute mental states to others. The male Physalaemus frog calls because calling causes females to approach and other males to retreat, but there is no evidence that he does so because he attributes knowledge or desire to other frogs, or because he knows his calls will affect their knowledge and that this knowledge will, in turn, affect their behavior. Research also suggests that, in marked contrast to humans, nonhuman primates do not produce vocalizations in response to perception of another's need for information. Macaques, for example, give alarm calls when predators approach and coo calls upon finding food, yet experiments reveal no evidence that individuals were more likely to call about these events when they were aware of them but their offspring were clearly ignorant; similarly, chimpanzees do not appear to adjust their calling to inform ignorant individuals of their own location or that of food. Many animal vocalizations whose production initially seems goal-directed are not as purposeful as they first appear. Passage B Many scientists distinguish animal communication systems from human language on the grounds that the former are rigid responses to stimuli, whereas human language is spontaneous and creative. In this connection, it is commonly stated that no animal can use its communication system to lie. Obviously, a lie requires intention to deceive: to judge whether a particular instance of animal communication is truly prevarication requires knowledge of the animal's intentions. Language philosopher H. P. Grice explains that for an individual to mean something by uttering x, the individual must intend, in expressing x, to induce an audience to believe something and must also intend the utterance to be recognized as so intended. But conscious intention is a category of mental experience widely believed to be uniquely human. Philosopher Jacques Maritain's discussion of the honeybee's elaborate "waggle-dance" exemplifies this view. Although bees returning to the hive communicate to other bees the distance and direction of food sources, such communication is, Maritain asserts, merely a conditioned reflex: animals may use communicative signs but lack conscious intention regarding their use. But these arguments are circular: conscious intention is ruled out a priori and then its absence taken as evidence that animal communication is fundamentally different from human language. In fact, the narrowing of the perceived gap between animal communication and human language revealed by recent research with chimpanzees and other animals calls into question not only the assumption that the difference between animal and human communication is qualitative rather than merely quantitative, but also the accompanying assumption that animals respond mechanically to stimuli, whereas humans speak with conscious understanding and intent.
201010_1-RC_3_14
[ "Are animals capable of deliberately prevaricating in order to achieve specific goals?", "Are the communications of animals characterized by conscious intention?", "What kinds of stimuli are most likely to elicit animal vocalizations?", "Are the communication systems of nonhuman primates qualitatively different from those of all other animals?", "Is there a scientific consensus about the differences between animal communication systems and human language?" ]
1
Both passages are primarily concerned with addressing which one of the following questions?
201010_1-RC_3_15
[ "describe an interpretation of animal communication that the author believes rests on a logical error", "suggest by illustration that there is conscious intention underlying the communicative signs employed by certain animals", "present an argument in support of the view that animal communication systems are spontaneous and creative", "furnish specific evidence against the theory that most animal communication is merely a conditioned reflex", "point to a noted authority on animal communication whose views the author regards with respect" ]
0
In discussing the philosopher Maritain, the author of passage B seeks primarily to
201010_1-RC_3_16
[ "They fail to recognize that humans often communicate without any clear idea of their listeners' mental states.", "Most of them lack the credentials needed to assess the relevant experimental evidence correctly.", "They ignore well-known evidence that animals do in fact practice deception.", "They make assumptions about matters that should be determined empirically.", "They falsely believe that all communication systems can be explained in terms of their evolutionary benefits." ]
3
The author of passage B would be most likely to agree with which one of the following statements regarding researchers who subscribe to the position articulated in passage A?
201010_1-RC_3_17
[ "One function of language is to influence the behavior of others by changing what they think.", "Animal vocalizations may have evolved because they have the potential to alter listeners' behavior to the signaler's benefit.", "It is possible that chimpanzees may have the capacity to attribute mental states to others.", "There is no evidence that the male Physalaemus frog calls because he knows that his calls will affect the knowledge of other frogs.", "Macaques give alarm calls when predators approach and coo calls upon finding food." ]
3
Which one of the following assertions from passage A provides support for the view attributed to Maritain in passage B (lines 50–52)?
201010_1-RC_3_18
[ "the extent to which communication among humans involves the ability to perceive the mental states of others", "the importance of determining to what extent animal communication systems differ from human language", "whether human language and animal communication differ from one another qualitatively or merely in a matter of degree", "whether chimpanzees' vocalizations suggest that they may possess the capacity to attribute mental states to others", "whether animals' vocalizations evolved to alter the behavior of other animals in a way that benefits the signaler" ]
2
The authors would be most likely to disagree over
201010_1-RC_3_19
[ "optimistic regarding the ability of science to answer certain fundamental questions", "disapproving of the approach taken by others writing on the same general topic", "open-minded in its willingness to accept the validity of apparently conflicting positions", "supportive of ongoing research related to the question at hand", "circumspect in its refusal to commit itself to any positions with respect to still-unsettled research questions" ]
1
Passage B differs from passage A in that passage B is more
In contrast to the mainstream of U.S. historiography during the late nineteenth and early twentieth centuries, African American historians of the period, such as George Washington Williams and W. E. B. DuBois, adopted a transnational perspective. This was true for several reasons, not the least of which was the necessity of doing so if certain aspects of the history of African Americans in the United States were to be treated honestly. First, there was the problem of citizenship. Even after the adoption in 1868 of the Fourteenth Amendment to the U.S. Constitution, which defined citizenship, the question of citizenship for African Americans had not been genuinely resolved. Because of this, emigrationist sentiment was a central issue in black political discourse, and both issues were critical topics for investigation. The implications for historical scholarship and national identity were enormous. While some black leaders insisted on their right to U.S. citizenship, others called on black people to emigrate and find a homeland of their own. Most African Americans were certainly not willing to relinquish their claims to the benefits of U.S. citizenship, but many had reached a point of profound pessimism and had begun to question their allegiance to the United States. Mainstream U.S. historiography was firmly rooted in a nationalist approach during this period; the glorification of the nation and a focus on the nation-state as a historical force were dominant. The expanding spheres of influence of Europe and the United States prompted the creation of new genealogies of nations, new myths about the inevitability of nations, their "temperaments," their destinies. African American intellectuals who confronted the nationalist approach to historiography were troubled by its implications. Some argued that imperialism was a natural outgrowth of nationalism and its view that a state's strength is measured by the extension of its political power over colonial territory; the scramble for colonial empires was a distinct aspect of nationalism in the latter part of the nineteenth century. Yet, for all their distrust of U.S. nationalism, most early black historians were themselves engaged in a sort of nation building. Deliberately or not, they contributed to the formation of a collective identity, reconstructing a glorious African past for the purposes of overturning degrading representations of blackness and establishing a firm cultural basis for a shared identity. Thus, one might argue that black historians' internationalism was a manifestation of a kind of nationalism that posits a diasporic community, which, while lacking a sovereign territory or official language, possesses a single culture, however mythical, with singular historical roots. Many members of this diaspora saw themselves as an oppressed "nation" without a homeland, or they imagined Africa as home. Hence, these historians understood their task to be the writing of the history of a people scattered by force and circumstance, a history that began in Africa.
201010_1-RC_4_20
[ "Historians are now recognizing that the major challenge faced by African Americans in the late nineteenth and early twentieth centuries was the struggle for citizenship.", "Early African American historians who practiced a transnational approach to history were primarily interested in advancing an emigrationist project.", "U.S. historiography in the late nineteenth and early twentieth centuries was characterized by a conflict between African American historians who viewed history from a transnational perspective and mainstream historians who took a nationalist perspective.", "The transnational perspective of early African American historians countered mainstream nationalist historiography, but it was arguably nationalist itself to the extent that it posited a culturally unified diasporic community.", "Mainstream U.S. historians in the late nineteenth and early twentieth centuries could no longer justify their nationalist approach to history once they were confronted with the transnational perspective taken by African American historians." ]
3
Which one of the following most accurately expresses the main idea of the passage?
In contrast to the mainstream of U.S. historiography during the late nineteenth and early twentieth centuries, African American historians of the period, such as George Washington Williams and W. E. B. DuBois, adopted a transnational perspective. This was true for several reasons, not the least of which was the necessity of doing so if certain aspects of the history of African Americans in the United States were to be treated honestly. First, there was the problem of citizenship. Even after the adoption in 1868 of the Fourteenth Amendment to the U.S. Constitution, which defined citizenship, the question of citizenship for African Americans had not been genuinely resolved. Because of this, emigrationist sentiment was a central issue in black political discourse, and both issues were critical topics for investigation. The implications for historical scholarship and national identity were enormous. While some black leaders insisted on their right to U.S. citizenship, others called on black people to emigrate and find a homeland of their own. Most African Americans were certainly not willing to relinquish their claims to the benefits of U.S. citizenship, but many had reached a point of profound pessimism and had begun to question their allegiance to the United States. Mainstream U.S. historiography was firmly rooted in a nationalist approach during this period; the glorification of the nation and a focus on the nation-state as a historical force were dominant. The expanding spheres of influence of Europe and the United States prompted the creation of new genealogies of nations, new myths about the inevitability of nations, their "temperaments," their destinies. African American intellectuals who confronted the nationalist approach to historiography were troubled by its implications. Some argued that imperialism was a natural outgrowth of nationalism and its view that a state's strength is measured by the extension of its political power over colonial territory; the scramble for colonial empires was a distinct aspect of nationalism in the latter part of the nineteenth century. Yet, for all their distrust of U.S. nationalism, most early black historians were themselves engaged in a sort of nation building. Deliberately or not, they contributed to the formation of a collective identity, reconstructing a glorious African past for the purposes of overturning degrading representations of blackness and establishing a firm cultural basis for a shared identity. Thus, one might argue that black historians' internationalism was a manifestation of a kind of nationalism that posits a diasporic community, which, while lacking a sovereign territory or official language, possesses a single culture, however mythical, with singular historical roots. Many members of this diaspora saw themselves as an oppressed "nation" without a homeland, or they imagined Africa as home. Hence, these historians understood their task to be the writing of the history of a people scattered by force and circumstance, a history that began in Africa.
201010_1-RC_4_21
[ "correcting a misconception about", "determining the sequence of events in", "investigating the implications of", "rewarding the promoters of", "shaping a conception of" ]
4
Which one of the following phrases most accurately conveys the sense of the word "reconstructing" as it is used in line 47?
In contrast to the mainstream of U.S. historiography during the late nineteenth and early twentieth centuries, African American historians of the period, such as George Washington Williams and W. E. B. DuBois, adopted a transnational perspective. This was true for several reasons, not the least of which was the necessity of doing so if certain aspects of the history of African Americans in the United States were to be treated honestly. First, there was the problem of citizenship. Even after the adoption in 1868 of the Fourteenth Amendment to the U.S. Constitution, which defined citizenship, the question of citizenship for African Americans had not been genuinely resolved. Because of this, emigrationist sentiment was a central issue in black political discourse, and both issues were critical topics for investigation. The implications for historical scholarship and national identity were enormous. While some black leaders insisted on their right to U.S. citizenship, others called on black people to emigrate and find a homeland of their own. Most African Americans were certainly not willing to relinquish their claims to the benefits of U.S. citizenship, but many had reached a point of profound pessimism and had begun to question their allegiance to the United States. Mainstream U.S. historiography was firmly rooted in a nationalist approach during this period; the glorification of the nation and a focus on the nation-state as a historical force were dominant. The expanding spheres of influence of Europe and the United States prompted the creation of new genealogies of nations, new myths about the inevitability of nations, their "temperaments," their destinies. African American intellectuals who confronted the nationalist approach to historiography were troubled by its implications. Some argued that imperialism was a natural outgrowth of nationalism and its view that a state's strength is measured by the extension of its political power over colonial territory; the scramble for colonial empires was a distinct aspect of nationalism in the latter part of the nineteenth century. Yet, for all their distrust of U.S. nationalism, most early black historians were themselves engaged in a sort of nation building. Deliberately or not, they contributed to the formation of a collective identity, reconstructing a glorious African past for the purposes of overturning degrading representations of blackness and establishing a firm cultural basis for a shared identity. Thus, one might argue that black historians' internationalism was a manifestation of a kind of nationalism that posits a diasporic community, which, while lacking a sovereign territory or official language, possesses a single culture, however mythical, with singular historical roots. Many members of this diaspora saw themselves as an oppressed "nation" without a homeland, or they imagined Africa as home. Hence, these historians understood their task to be the writing of the history of a people scattered by force and circumstance, a history that began in Africa.
201010_1-RC_4_22
[ "Emigrationist sentiment would not have been as strong among African Americans in the late nineteenth century had the promise of U.S. citizenship been fully realized for African Americans at that time.", "Scholars writing the history of diasporic communities generally do not discuss the forces that initially caused the scattering of the members of those communities.", "Most historians of the late nineteenth and early twentieth centuries endeavored to make the histories of the nations about which they wrote seem more glorious than they actually were.", "To be properly considered nationalist, a historical work must ignore the ways in which one nation's foreign policy decisions affected other nations.", "A considerable number of early African American historians embraced nationalism and the inevitability of the dominance of the nation-state." ]
0
Which one of the following is most strongly supported by the passage?
In contrast to the mainstream of U.S. historiography during the late nineteenth and early twentieth centuries, African American historians of the period, such as George Washington Williams and W. E. B. DuBois, adopted a transnational perspective. This was true for several reasons, not the least of which was the necessity of doing so if certain aspects of the history of African Americans in the United States were to be treated honestly. First, there was the problem of citizenship. Even after the adoption in 1868 of the Fourteenth Amendment to the U.S. Constitution, which defined citizenship, the question of citizenship for African Americans had not been genuinely resolved. Because of this, emigrationist sentiment was a central issue in black political discourse, and both issues were critical topics for investigation. The implications for historical scholarship and national identity were enormous. While some black leaders insisted on their right to U.S. citizenship, others called on black people to emigrate and find a homeland of their own. Most African Americans were certainly not willing to relinquish their claims to the benefits of U.S. citizenship, but many had reached a point of profound pessimism and had begun to question their allegiance to the United States. Mainstream U.S. historiography was firmly rooted in a nationalist approach during this period; the glorification of the nation and a focus on the nation-state as a historical force were dominant. The expanding spheres of influence of Europe and the United States prompted the creation of new genealogies of nations, new myths about the inevitability of nations, their "temperaments," their destinies. African American intellectuals who confronted the nationalist approach to historiography were troubled by its implications. Some argued that imperialism was a natural outgrowth of nationalism and its view that a state's strength is measured by the extension of its political power over colonial territory; the scramble for colonial empires was a distinct aspect of nationalism in the latter part of the nineteenth century. Yet, for all their distrust of U.S. nationalism, most early black historians were themselves engaged in a sort of nation building. Deliberately or not, they contributed to the formation of a collective identity, reconstructing a glorious African past for the purposes of overturning degrading representations of blackness and establishing a firm cultural basis for a shared identity. Thus, one might argue that black historians' internationalism was a manifestation of a kind of nationalism that posits a diasporic community, which, while lacking a sovereign territory or official language, possesses a single culture, however mythical, with singular historical roots. Many members of this diaspora saw themselves as an oppressed "nation" without a homeland, or they imagined Africa as home. Hence, these historians understood their task to be the writing of the history of a people scattered by force and circumstance, a history that began in Africa.
201010_1-RC_4_23
[ "investigated the extent to which European and U.S. nationalist mythologies contradicted one another", "defined the national characters of the United States and several European nations by focusing on their treatment of minority populations rather than on their territorial ambitions", "recounted the attempts by the United States to gain control over new territories during the late nineteenth and early twentieth centuries", "considered the impact of emigrationist sentiment among African Americans on U.S. foreign policy in Africa during the late nineteenth century", "examined the extent to which African American culture at the turn of the century incorporated traditions that were common to a number of African cultures" ]
4
As it is described in the passage, the transnational approach employed by African American historians working in the late nineteenth and early twentieth centuries would be best exemplified by a historical study that
In contrast to the mainstream of U.S. historiography during the late nineteenth and early twentieth centuries, African American historians of the period, such as George Washington Williams and W. E. B. DuBois, adopted a transnational perspective. This was true for several reasons, not the least of which was the necessity of doing so if certain aspects of the history of African Americans in the United States were to be treated honestly. First, there was the problem of citizenship. Even after the adoption in 1868 of the Fourteenth Amendment to the U.S. Constitution, which defined citizenship, the question of citizenship for African Americans had not been genuinely resolved. Because of this, emigrationist sentiment was a central issue in black political discourse, and both issues were critical topics for investigation. The implications for historical scholarship and national identity were enormous. While some black leaders insisted on their right to U.S. citizenship, others called on black people to emigrate and find a homeland of their own. Most African Americans were certainly not willing to relinquish their claims to the benefits of U.S. citizenship, but many had reached a point of profound pessimism and had begun to question their allegiance to the United States. Mainstream U.S. historiography was firmly rooted in a nationalist approach during this period; the glorification of the nation and a focus on the nation-state as a historical force were dominant. The expanding spheres of influence of Europe and the United States prompted the creation of new genealogies of nations, new myths about the inevitability of nations, their "temperaments," their destinies. African American intellectuals who confronted the nationalist approach to historiography were troubled by its implications. Some argued that imperialism was a natural outgrowth of nationalism and its view that a state's strength is measured by the extension of its political power over colonial territory; the scramble for colonial empires was a distinct aspect of nationalism in the latter part of the nineteenth century. Yet, for all their distrust of U.S. nationalism, most early black historians were themselves engaged in a sort of nation building. Deliberately or not, they contributed to the formation of a collective identity, reconstructing a glorious African past for the purposes of overturning degrading representations of blackness and establishing a firm cultural basis for a shared identity. Thus, one might argue that black historians' internationalism was a manifestation of a kind of nationalism that posits a diasporic community, which, while lacking a sovereign territory or official language, possesses a single culture, however mythical, with singular historical roots. Many members of this diaspora saw themselves as an oppressed "nation" without a homeland, or they imagined Africa as home. Hence, these historians understood their task to be the writing of the history of a people scattered by force and circumstance, a history that began in Africa.
201010_1-RC_4_24
[ "Which African nations did early African American historians research in writing their histories of the African diaspora?", "What were some of the African languages spoken by the ancestors of the members of the African diasporic community who were living in the United States in the late nineteenth century?", "Over which territories abroad did the United States attempt to extend its political power in the latter part of the nineteenth century?", "Are there textual ambiguities in the Fourteenth Amendment that spurred the conflict over U.S. citizenship for African Americans?", "In what ways did African American leaders respond to the question of citizenship for African Americans in the latter part of the nineteenth century?" ]
4
The passage provides information sufficient to answer which one of the following questions?
In contrast to the mainstream of U.S. historiography during the late nineteenth and early twentieth centuries, African American historians of the period, such as George Washington Williams and W. E. B. DuBois, adopted a transnational perspective. This was true for several reasons, not the least of which was the necessity of doing so if certain aspects of the history of African Americans in the United States were to be treated honestly. First, there was the problem of citizenship. Even after the adoption in 1868 of the Fourteenth Amendment to the U.S. Constitution, which defined citizenship, the question of citizenship for African Americans had not been genuinely resolved. Because of this, emigrationist sentiment was a central issue in black political discourse, and both issues were critical topics for investigation. The implications for historical scholarship and national identity were enormous. While some black leaders insisted on their right to U.S. citizenship, others called on black people to emigrate and find a homeland of their own. Most African Americans were certainly not willing to relinquish their claims to the benefits of U.S. citizenship, but many had reached a point of profound pessimism and had begun to question their allegiance to the United States. Mainstream U.S. historiography was firmly rooted in a nationalist approach during this period; the glorification of the nation and a focus on the nation-state as a historical force were dominant. The expanding spheres of influence of Europe and the United States prompted the creation of new genealogies of nations, new myths about the inevitability of nations, their "temperaments," their destinies. African American intellectuals who confronted the nationalist approach to historiography were troubled by its implications. Some argued that imperialism was a natural outgrowth of nationalism and its view that a state's strength is measured by the extension of its political power over colonial territory; the scramble for colonial empires was a distinct aspect of nationalism in the latter part of the nineteenth century. Yet, for all their distrust of U.S. nationalism, most early black historians were themselves engaged in a sort of nation building. Deliberately or not, they contributed to the formation of a collective identity, reconstructing a glorious African past for the purposes of overturning degrading representations of blackness and establishing a firm cultural basis for a shared identity. Thus, one might argue that black historians' internationalism was a manifestation of a kind of nationalism that posits a diasporic community, which, while lacking a sovereign territory or official language, possesses a single culture, however mythical, with singular historical roots. Many members of this diaspora saw themselves as an oppressed "nation" without a homeland, or they imagined Africa as home. Hence, these historians understood their task to be the writing of the history of a people scattered by force and circumstance, a history that began in Africa.
201010_1-RC_4_25
[ "Members of a particular diasporic community have a common country of origin.", "Territorial sovereignty is not a prerequisite for the project of nation building.", "Early African American historians who rejected nationalist historiography declined to engage in historical myth-making of any kind.", "The most prominent African American historians in the late nineteenth and early twentieth centuries advocated emigration for African Americans.", "Historians who employed a nationalist approach focused on entirely different events from those studied and written about by early African American historians." ]
1
The author of the passage would be most likely to agree with which one of the following statements?
In contrast to the mainstream of U.S. historiography during the late nineteenth and early twentieth centuries, African American historians of the period, such as George Washington Williams and W. E. B. DuBois, adopted a transnational perspective. This was true for several reasons, not the least of which was the necessity of doing so if certain aspects of the history of African Americans in the United States were to be treated honestly. First, there was the problem of citizenship. Even after the adoption in 1868 of the Fourteenth Amendment to the U.S. Constitution, which defined citizenship, the question of citizenship for African Americans had not been genuinely resolved. Because of this, emigrationist sentiment was a central issue in black political discourse, and both issues were critical topics for investigation. The implications for historical scholarship and national identity were enormous. While some black leaders insisted on their right to U.S. citizenship, others called on black people to emigrate and find a homeland of their own. Most African Americans were certainly not willing to relinquish their claims to the benefits of U.S. citizenship, but many had reached a point of profound pessimism and had begun to question their allegiance to the United States. Mainstream U.S. historiography was firmly rooted in a nationalist approach during this period; the glorification of the nation and a focus on the nation-state as a historical force were dominant. The expanding spheres of influence of Europe and the United States prompted the creation of new genealogies of nations, new myths about the inevitability of nations, their "temperaments," their destinies. African American intellectuals who confronted the nationalist approach to historiography were troubled by its implications. Some argued that imperialism was a natural outgrowth of nationalism and its view that a state's strength is measured by the extension of its political power over colonial territory; the scramble for colonial empires was a distinct aspect of nationalism in the latter part of the nineteenth century. Yet, for all their distrust of U.S. nationalism, most early black historians were themselves engaged in a sort of nation building. Deliberately or not, they contributed to the formation of a collective identity, reconstructing a glorious African past for the purposes of overturning degrading representations of blackness and establishing a firm cultural basis for a shared identity. Thus, one might argue that black historians' internationalism was a manifestation of a kind of nationalism that posits a diasporic community, which, while lacking a sovereign territory or official language, possesses a single culture, however mythical, with singular historical roots. Many members of this diaspora saw themselves as an oppressed "nation" without a homeland, or they imagined Africa as home. Hence, these historians understood their task to be the writing of the history of a people scattered by force and circumstance, a history that began in Africa.
201010_1-RC_4_26
[ "explain why early African American historians felt compelled to approach historiography in the way that they did", "show that governmental actions such as constitutional amendments do not always have the desired effect", "support the contention that African American intellectuals in the late nineteenth century were critical of U.S. imperialism", "establish that some African American political leaders in the late nineteenth century advocated emigration as an alternative to fighting for the benefits of U.S. citizenship", "argue that the definition of citizenship contained in the Fourteenth Amendment to the U.S. Constitution is too limited" ]
0
The main purpose of the second paragraph of the passage is to
In contrast to the mainstream of U.S. historiography during the late nineteenth and early twentieth centuries, African American historians of the period, such as George Washington Williams and W. E. B. DuBois, adopted a transnational perspective. This was true for several reasons, not the least of which was the necessity of doing so if certain aspects of the history of African Americans in the United States were to be treated honestly. First, there was the problem of citizenship. Even after the adoption in 1868 of the Fourteenth Amendment to the U.S. Constitution, which defined citizenship, the question of citizenship for African Americans had not been genuinely resolved. Because of this, emigrationist sentiment was a central issue in black political discourse, and both issues were critical topics for investigation. The implications for historical scholarship and national identity were enormous. While some black leaders insisted on their right to U.S. citizenship, others called on black people to emigrate and find a homeland of their own. Most African Americans were certainly not willing to relinquish their claims to the benefits of U.S. citizenship, but many had reached a point of profound pessimism and had begun to question their allegiance to the United States. Mainstream U.S. historiography was firmly rooted in a nationalist approach during this period; the glorification of the nation and a focus on the nation-state as a historical force were dominant. The expanding spheres of influence of Europe and the United States prompted the creation of new genealogies of nations, new myths about the inevitability of nations, their "temperaments," their destinies. African American intellectuals who confronted the nationalist approach to historiography were troubled by its implications. Some argued that imperialism was a natural outgrowth of nationalism and its view that a state's strength is measured by the extension of its political power over colonial territory; the scramble for colonial empires was a distinct aspect of nationalism in the latter part of the nineteenth century. Yet, for all their distrust of U.S. nationalism, most early black historians were themselves engaged in a sort of nation building. Deliberately or not, they contributed to the formation of a collective identity, reconstructing a glorious African past for the purposes of overturning degrading representations of blackness and establishing a firm cultural basis for a shared identity. Thus, one might argue that black historians' internationalism was a manifestation of a kind of nationalism that posits a diasporic community, which, while lacking a sovereign territory or official language, possesses a single culture, however mythical, with singular historical roots. Many members of this diaspora saw themselves as an oppressed "nation" without a homeland, or they imagined Africa as home. Hence, these historians understood their task to be the writing of the history of a people scattered by force and circumstance, a history that began in Africa.
201010_1-RC_4_27
[ "An elected official writes a memo suggesting that because a particular course of action has been successful in the past, the government should continue to pursue that course of action.", "A biographer of a famous novelist argues that the precocity apparent in certain of the novelist's early achievements confirms that her success was attributable to innate talent.", "A doctor maintains that because a certain medication was developed expressly for the treatment of an illness, it is the best treatment for that illness.", "A newspaper runs a series of articles in order to inform the public about the environmentally hazardous practices of a large corporation.", "A scientist gets the same result from an experiment several times and therefore concludes that its chemical reactions always proceed in the observed fashion." ]
1
As it is presented in the passage, the approach to history taken by mainstream U.S. historians of the late nineteenth and early twentieth centuries is most similar to the approach exemplified in which one of the following?
To study centuries-old earthquakes and the geologic faults that caused them, seismologists usually dig trenches along visible fault lines, looking for sediments that show evidence of having shifted. Using radiocarbon dating, they measure the quantity of the radioactive isotope carbon 14 present in wood or other organic material trapped in the sediments when they shifted. Since carbon 14 occurs naturally in organic materials and decays at a constant rate, the age of organic materials can be reconstructed from the amount of the isotope remaining in them. These data can show the location and frequency of past earthquakes and provide hints about the likelihood and location of future earthquakes. Geologists William Bull and Mark Brandon have recently developed a new method, called lichenometry, for detecting and dating past earthquakes. Bull and Brandon developed the method based on the fact that large earthquakes generate numerous simultaneous rockfalls in mountain ranges that are sensitive to seismic shaking. Instead of dating fault-line sediments, lichenometry involves measuring the size of lichens growing on the rocks exposed by these rockfalls. Lichens—symbiotic organisms consisting of a fungus and an alga—quickly colonize newly exposed rock surfaces in the wake of rockfalls, and once established they grow radially, flat against the rocks, at a slow but constant rate for as long as 1,000 years if left undisturbed. One species of North American lichen, for example, spreads outward by about 9.5 millimeters each century. Hence, the diameter of the largest lichen on a boulder provides direct evidence of when the boulder was dislodged and repositioned. If many rockfalls over a large geographic area occurred simultaneously, that pattern would imply that there had been a strong earthquake. The location of the earthquake's epicenter can then be determined by mapping these rockfalls, since they decrease in abundance as the distance from the epicenter increases. Lichenometry has distinct advantages over radiocarbon dating. Radiocarbon dating is accurate only to within plus or minus 40 years, because the amount of the carbon 14 isotope varies naturally in the environment depending on the intensity of the radiation striking Earth's upper atmosphere. Additionally, this intensity has fluctuated greatly during the past 300 years, causing many radiocarbon datings of events during this period to be of little value. Lichenometry, Bull and Brandon claim, can accurately date an earthquake to within ten years. They note, however, that using lichenometry requires careful site selection and accurate calibration of lichen growth rates, adding that the method is best used for earthquakes that occurred within the last 500 years. Sites must be selected to minimize the influence of snow avalanches and other disturbances that would affect normal lichen growth, and conditions like shade and wind that promote faster lichen growth must be factored in.
201012_1-RC_1_1
[ "Lichenometry is a new method for dating past earthquakes that has advantages over radiocarbon dating.", "Despite its limitations, lichenometry has been proven to be more accurate than any other method of discerning the dates of past earthquakes.", "Most seismologists today have rejected radiocarbon dating and are embracing lichenometry as the most reliable method for studying past earthquakes.", "Two geologists have revolutionized the study of past earthquakes by developing lichenometry, an easily applied method of earthquake detection and dating.", "Radiocarbon dating, an unreliable test used in dating past earthquakes, can finally be abandoned now that lichenometry has been developed." ]
0
Which one of the following most accurately expresses the main idea of the passage?
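The lichenometry passage above reduces to simple arithmetic: years since exposure equal the largest lichen's size divided by a calibrated growth rate. The following is a minimal Python sketch of that calculation, assuming the quoted 9.5 mm-per-century figure describes growth in diameter (the passage does not say whether the rate refers to diameter or radius) and that the largest lichen colonized the rock soon after the fall; the function name and the 38 mm measurement are illustrative, not from the passage.

GROWTH_MM_PER_CENTURY = 9.5  # rate quoted for one North American species

def rockfall_age_years(largest_lichen_diameter_mm, rate_mm_per_century=GROWTH_MM_PER_CENTURY):
    """Years since the rock surface was exposed, assuming the lichen
    established quickly and grew undisturbed at a constant rate."""
    return largest_lichen_diameter_mm / rate_mm_per_century * 100.0

# A 38 mm lichen implies exposure roughly 400 years ago, inside the
# ~500-year window the passage says best suits the method.
print(round(rockfall_age_years(38.0)))  # -> 400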
To study centuries-old earthquakes and the geologic faults that caused them, seismologists usually dig trenches along visible fault lines, looking for sediments that show evidence of having shifted. Using radiocarbon dating, they measure the quantity of the radioactive isotope carbon 14 present in wood or other organic material trapped in the sediments when they shifted. Since carbon 14 occurs naturally in organic materials and decays at a constant rate, the age of organic materials can be reconstructed from the amount of the isotope remaining in them. These data can show the location and frequency of past earthquakes and provide hints about the likelihood and location of future earthquakes. Geologists William Bull and Mark Brandon have recently developed a new method, called lichenometry, for detecting and dating past earthquakes. Bull and Brandon developed the method based on the fact that large earthquakes generate numerous simultaneous rockfalls in mountain ranges that are sensitive to seismic shaking. Instead of dating fault-line sediments, lichenometry involves measuring the size of lichens growing on the rocks exposed by these rockfalls. Lichens—symbiotic organisms consisting of a fungus and an alga—quickly colonize newly exposed rock surfaces in the wake of rockfalls, and once established they grow radially, flat against the rocks, at a slow but constant rate for as long as 1,000 years if left undisturbed. One species of North American lichen, for example, spreads outward by about 9.5 millimeters each century. Hence, the diameter of the largest lichen on a boulder provides direct evidence of when the boulder was dislodged and repositioned. If many rockfalls over a large geographic area occurred simultaneously, that pattern would imply that there had been a strong earthquake. The location of the earthquake's epicenter can then be determined by mapping these rockfalls, since they decrease in abundance as the distance from the epicenter increases. Lichenometry has distinct advantages over radiocarbon dating. Radiocarbon dating is accurate only to within plus or minus 40 years, because the amount of the carbon 14 isotope varies naturally in the environment depending on the intensity of the radiation striking Earth's upper atmosphere. Additionally, this intensity has fluctuated greatly during the past 300 years, causing many radiocarbon datings of events during this period to be of little value. Lichenometry, Bull and Brandon claim, can accurately date an earthquake to within ten years. They note, however, that using lichenometry requires careful site selection and accurate calibration of lichen growth rates, adding that the method is best used for earthquakes that occurred within the last 500 years. Sites must be selected to minimize the influence of snow avalanches and other disturbances that would affect normal lichen growth, and conditions like shade and wind that promote faster lichen growth must be factored in.
201012_1-RC_1_2
[ "How do scientists measure lichen growth rates under the varying conditions that lichens may encounter?", "How do scientists determine the intensity of the radiation striking Earth's upper atmosphere?", "What are some of the conditions that encourage lichens to grow at a more rapid rate than usual?", "What is the approximate date of the earliest earthquake that lichenometry has been used to identify?", "What are some applications of the techniques involved in radiocarbon dating other than their use in studying past earthquakes?" ]
2
The passage provides information that most helps to answer which one of the following questions?
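The same passage's statement that carbon 14 "decays at a constant rate" is the standard exponential-decay law, under which an age follows from the fraction of the isotope still present. This is a minimal sketch of that relationship, assuming the textbook half-life of about 5,730 years, a figure the passage itself never gives; the 97 percent example value is likewise invented for illustration.

import math

C14_HALF_LIFE_YEARS = 5730.0  # standard value; not stated in the passage

def radiocarbon_age_years(remaining_fraction):
    """Solve N(t) = N0 * (1/2)**(t / half_life) for t, given N(t)/N0."""
    return -C14_HALF_LIFE_YEARS * math.log2(remaining_fraction)

# Material retaining about 97% of its original carbon 14 dates to
# roughly 250 years old.
print(round(radiocarbon_age_years(0.97)))  # -> 252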
To study centuries-old earthquakes and the geologic faults that caused them, seismologists usually dig trenches along visible fault lines, looking for sediments that show evidence of having shifted. Using radiocarbon dating, they measure the quantity of the radioactive isotope carbon 14 present in wood or other organic material trapped in the sediments when they shifted. Since carbon 14 occurs naturally in organic materials and decays at a constant rate, the age of organic materials can be reconstructed from the amount of the isotope remaining in them. These data can show the location and frequency of past earthquakes and provide hints about the likelihood and location of future earthquakes. Geologists William Bull and Mark Brandon have recently developed a new method, called lichenometry, for detecting and dating past earthquakes. Bull and Brandon developed the method based on the fact that large earthquakes generate numerous simultaneous rockfalls in mountain ranges that are sensitive to seismic shaking. Instead of dating fault-line sediments, lichenometry involves measuring the size of lichens growing on the rocks exposed by these rockfalls. Lichens—symbiotic organisms consisting of a fungus and an alga—quickly colonize newly exposed rock surfaces in the wake of rockfalls, and once established they grow radially, flat against the rocks, at a slow but constant rate for as long as 1,000 years if left undisturbed. One species of North American lichen, for example, spreads outward by about 9.5 millimeters each century. Hence, the diameter of the largest lichen on a boulder provides direct evidence of when the boulder was dislodged and repositioned. If many rockfalls over a large geographic area occurred simultaneously, that pattern would imply that there had been a strong earthquake. The location of the earthquake's epicenter can then be determined by mapping these rockfalls, since they decrease in abundance as the distance from the epicenter increases. Lichenometry has distinct advantages over radiocarbon dating. Radiocarbon dating is accurate only to within plus or minus 40 years, because the amount of the carbon 14 isotope varies naturally in the environment depending on the intensity of the radiation striking Earth's upper atmosphere. Additionally, this intensity has fluctuated greatly during the past 300 years, causing many radiocarbon datings of events during this period to be of little value. Lichenometry, Bull and Brandon claim, can accurately date an earthquake to within ten years. They note, however, that using lichenometry requires careful site selection and accurate calibration of lichen growth rates, adding that the method is best used for earthquakes that occurred within the last 500 years. Sites must be selected to minimize the influence of snow avalanches and other disturbances that would affect normal lichen growth, and conditions like shade and wind that promote faster lichen growth must be factored in.
201012_1-RC_1_3
[ "to emphasize the rapidity with which lichen colonies can establish themselves on newly exposed rock surfaces", "to offer an example of a lichen species with one of the slowest known rates of growth", "to present additional evidence supporting the claim that environmental conditions can alter lichens' rate of growth", "to explain why lichenometry works best for dating earthquakes that occurred in the last 500 years", "to provide a sense of the sort of timescale on which lichen growth occurs" ]
4
What is the author's primary purpose in referring to the rate of growth of a North American lichen species (lines 29–30)?
To study centuries-old earthquakes and the geologic faults that caused them, seismologists usually dig trenches along visible fault lines, looking for sediments that show evidence of having shifted. Using radiocarbon dating, they measure the quantity of the radioactive isotope carbon 14 present in wood or other organic material trapped in the sediments when they shifted. Since carbon 14 occurs naturally in organic materials and decays at a constant rate, the age of organic materials can be reconstructed from the amount of the isotope remaining in them. These data can show the location and frequency of past earthquakes and provide hints about the likelihood and location of future earthquakes. Geologists William Bull and Mark Brandon have recently developed a new method, called lichenometry, for detecting and dating past earthquakes. Bull and Brandon developed the method based on the fact that large earthquakes generate numerous simultaneous rockfalls in mountain ranges that are sensitive to seismic shaking. Instead of dating fault-line sediments, lichenometry involves measuring the size of lichens growing on the rocks exposed by these rockfalls. Lichens—symbiotic organisms consisting of a fungus and an alga—quickly colonize newly exposed rock surfaces in the wake of rockfalls, and once established they grow radially, flat against the rocks, at a slow but constant rate for as long as 1,000 years if left undisturbed. One species of North American lichen, for example, spreads outward by about 9.5 millimeters each century. Hence, the diameter of the largest lichen on a boulder provides direct evidence of when the boulder was dislodged and repositioned. If many rockfalls over a large geographic area occurred simultaneously, that pattern would imply that there had been a strong earthquake. The location of the earthquake's epicenter can then be determined by mapping these rockfalls, since they decrease in abundance as the distance from the epicenter increases. Lichenometry has distinct advantages over radiocarbon dating. Radiocarbon dating is accurate only to within plus or minus 40 years, because the amount of the carbon 14 isotope varies naturally in the environment depending on the intensity of the radiation striking Earth's upper atmosphere. Additionally, this intensity has fluctuated greatly during the past 300 years, causing many radiocarbon datings of events during this period to be of little value. Lichenometry, Bull and Brandon claim, can accurately date an earthquake to within ten years. They note, however, that using lichenometry requires careful site selection and accurate calibration of lichen growth rates, adding that the method is best used for earthquakes that occurred within the last 500 years. Sites must be selected to minimize the influence of snow avalanches and other disturbances that would affect normal lichen growth, and conditions like shade and wind that promote faster lichen growth must be factored in.
201012_1-RC_1_4
[ "Lichenometry is less accurate than radiocarbon dating in predicting the likelihood and location of future earthquakes.", "Radiocarbon dating is unlikely to be helpful in dating past earthquakes that have no identifiable fault lines associated with them.", "Radiocarbon dating and lichenometry are currently the only viable methods of detecting and dating past earthquakes.", "Radiocarbon dating is more accurate than lichenometry in dating earthquakes that occurred approximately 400 years ago.", "The usefulness of lichenometry for dating earthquakes is limited to geographic regions where factors that disturb or accelerate lichen growth generally do not occur." ]
1
Which one of the following statements is most strongly supported by the passage?
To study centuries-old earthquakes and the geologic faults that caused them, seismologists usually dig trenches along visible fault lines, looking for sediments that show evidence of having shifted. Using radiocarbon dating, they measure the quantity of the radioactive isotope carbon 14 present in wood or other organic material trapped in the sediments when they shifted. Since carbon 14 occurs naturally in organic materials and decays at a constant rate, the age of organic materials can be reconstructed from the amount of the isotope remaining in them. These data can show the location and frequency of past earthquakes and provide hints about the likelihood and location of future earthquakes. Geologists William Bull and Mark Brandon have recently developed a new method, called lichenometry, for detecting and dating past earthquakes. Bull and Brandon developed the method based on the fact that large earthquakes generate numerous simultaneous rockfalls in mountain ranges that are sensitive to seismic shaking. Instead of dating fault-line sediments, lichenometry involves measuring the size of lichens growing on the rocks exposed by these rockfalls. Lichens—symbiotic organisms consisting of a fungus and an alga—quickly colonize newly exposed rock surfaces in the wake of rockfalls, and once established they grow radially, flat against the rocks, at a slow but constant rate for as long as 1,000 years if left undisturbed. One species of North American lichen, for example, spreads outward by about 9.5 millimeters each century. Hence, the diameter of the largest lichen on a boulder provides direct evidence of when the boulder was dislodged and repositioned. If many rockfalls over a large geographic area occurred simultaneously, that pattern would imply that there had been a strong earthquake. The location of the earthquake's epicenter can then be determined by mapping these rockfalls, since they decrease in abundance as the distance from the epicenter increases. Lichenometry has distinct advantages over radiocarbon dating. Radiocarbon dating is accurate only to within plus or minus 40 years, because the amount of the carbon 14 isotope varies naturally in the environment depending on the intensity of the radiation striking Earth's upper atmosphere. Additionally, this intensity has fluctuated greatly during the past 300 years, causing many radiocarbon datings of events during this period to be of little value. Lichenometry, Bull and Brandon claim, can accurately date an earthquake to within ten years. They note, however, that using lichenometry requires careful site selection and accurate calibration of lichen growth rates, adding that the method is best used for earthquakes that occurred within the last 500 years. Sites must be selected to minimize the influence of snow avalanches and other disturbances that would affect normal lichen growth, and conditions like shade and wind that promote faster lichen growth must be factored in.
201012_1-RC_1_5
[ "a well-known procedure that will then be examined on a step-by-step basis", "an established procedure to which a new procedure will then be compared", "an outdated procedure that will then be shown to be nonetheless useful in some situations", "a traditional procedure that will then be contrasted with other traditional procedures", "a popular procedure that will then be shown to have resulted in erroneous conclusions about a phenomenon" ]
1
The primary purpose of the first paragraph in relation to the rest of the passage is to describe
To study centuries-old earthquakes and the geologic faults that caused them, seismologists usually dig trenches along visible fault lines, looking for sediments that show evidence of having shifted. Using radiocarbon dating, they measure the quantity of the radioactive isotope carbon 14 present in wood or other organic material trapped in the sediments when they shifted. Since carbon 14 occurs naturally in organic materials and decays at a constant rate, the age of organic materials can be reconstructed from the amount of the isotope remaining in them. These data can show the location and frequency of past earthquakes and provide hints about the likelihood and location of future earthquakes. Geologists William Bull and Mark Brandon have recently developed a new method, called lichenometry, for detecting and dating past earthquakes. Bull and Brandon developed the method based on the fact that large earthquakes generate numerous simultaneous rockfalls in mountain ranges that are sensitive to seismic shaking. Instead of dating fault-line sediments, lichenometry involves measuring the size of lichens growing on the rocks exposed by these rockfalls. Lichens—symbiotic organisms consisting of a fungus and an alga—quickly colonize newly exposed rock surfaces in the wake of rockfalls, and once established they grow radially, flat against the rocks, at a slow but constant rate for as long as 1,000 years if left undisturbed. One species of North American lichen, for example, spreads outward by about 9.5 millimeters each century. Hence, the diameter of the largest lichen on a boulder provides direct evidence of when the boulder was dislodged and repositioned. If many rockfalls over a large geographic area occurred simultaneously, that pattern would imply that there had been a strong earthquake. The location of the earthquake's epicenter can then be determined by mapping these rockfalls, since they decrease in abundance as the distance from the epicenter increases. Lichenometry has distinct advantages over radiocarbon dating. Radiocarbon dating is accurate only to within plus or minus 40 years, because the amount of the carbon 14 isotope varies naturally in the environment depending on the intensity of the radiation striking Earth's upper atmosphere. Additionally, this intensity has fluctuated greatly during the past 300 years, causing many radiocarbon datings of events during this period to be of little value. Lichenometry, Bull and Brandon claim, can accurately date an earthquake to within ten years. They note, however, that using lichenometry requires careful site selection and accurate calibration of lichen growth rates, adding that the method is best used for earthquakes that occurred within the last 500 years. Sites must be selected to minimize the influence of snow avalanches and other disturbances that would affect normal lichen growth, and conditions like shade and wind that promote faster lichen growth must be factored in.
201012_1-RC_1_6
[ "While lichenometry is less accurate when it is used to date earthquakes that occurred more than 500 years ago, it is still more accurate than other methods for dating such earthquakes.", "There is no reliable method for determining the intensity of the radiation now hitting Earth's upper atmosphere.", "Lichens are able to grow only on the types of rocks that are common in mountainous regions.", "The mountain ranges that produce the kinds of rockfalls studied in lichenometry are also subject to more frequent snowfalls and avalanches than other mountain ranges are.", "The extent to which conditions like shade and wind have affected the growth of existing lichen colonies can be determined." ]
4
It can be inferred that the statements made by Bull and Brandon and reported in lines 50–58 rely on which one of the following assumptions?
To study centuries-old earthquakes and the geologic faults that caused them, seismologists usually dig trenches along visible fault lines, looking for sediments that show evidence of having shifted. Using radiocarbon dating, they measure the quantity of the radioactive isotope carbon 14 present in wood or other organic material trapped in the sediments when they shifted. Since carbon 14 occurs naturally in organic materials and decays at a constant rate, the age of organic materials can be reconstructed from the amount of the isotope remaining in them. These data can show the location and frequency of past earthquakes and provide hints about the likelihood and location of future earthquakes. Geologists William Bull and Mark Brandon have recently developed a new method, called lichenometry, for detecting and dating past earthquakes. Bull and Brandon developed the method based on the fact that large earthquakes generate numerous simultaneous rockfalls in mountain ranges that are sensitive to seismic shaking. Instead of dating fault-line sediments, lichenometry involves measuring the size of lichens growing on the rocks exposed by these rockfalls. Lichens—symbiotic organisms consisting of a fungus and an alga—quickly colonize newly exposed rock surfaces in the wake of rockfalls, and once established they grow radially, flat against the rocks, at a slow but constant rate for as long as 1,000 years if left undisturbed. One species of North American lichen, for example, spreads outward by about 9.5 millimeters each century. Hence, the diameter of the largest lichen on a boulder provides direct evidence of when the boulder was dislodged and repositioned. If many rockfalls over a large geographic area occurred simultaneously, that pattern would imply that there had been a strong earthquake. The location of the earthquake's epicenter can then be determined by mapping these rockfalls, since they decrease in abundance as the distance from the epicenter increases. Lichenometry has distinct advantages over radiocarbon dating. Radiocarbon dating is accurate only to within plus or minus 40 years, because the amount of the carbon 14 isotope varies naturally in the environment depending on the intensity of the radiation striking Earth's upper atmosphere. Additionally, this intensity has fluctuated greatly during the past 300 years, causing many radiocarbon datings of events during this period to be of little value. Lichenometry, Bull and Brandon claim, can accurately date an earthquake to within ten years. They note, however, that using lichenometry requires careful site selection and accurate calibration of lichen growth rates, adding that the method is best used for earthquakes that occurred within the last 500 years. Sites must be selected to minimize the influence of snow avalanches and other disturbances that would affect normal lichen growth, and conditions like shade and wind that promote faster lichen growth must be factored in.
201012_1-RC_1_7
[ "the multiplicity of the types of organic matter that require analysis", "the variable amount of organic materials caught in shifted sediments", "the fact that fault lines related to past earthquakes are not always visible", "the fluctuations in the amount of the carbon 14 isotope in the environment over time", "the possibility that radiation has not always struck the upper atmosphere" ]
3
The passage indicates that using radiocarbon dating to date past earthquakes may be unreliable due to
To study centuries-old earthquakes and the geologic faults that caused them, seismologists usually dig trenches along visible fault lines, looking for sediments that show evidence of having shifted. Using radiocarbon dating, they measure the quantity of the radioactive isotope carbon 14 present in wood or other organic material trapped in the sediments when they shifted. Since carbon 14 occurs naturally in organic materials and decays at a constant rate, the age of organic materials can be reconstructed from the amount of the isotope remaining in them. These data can show the location and frequency of past earthquakes and provide hints about the likelihood and location of future earthquakes. Geologists William Bull and Mark Brandon have recently developed a new method, called lichenometry, for detecting and dating past earthquakes. Bull and Brandon developed the method based on the fact that large earthquakes generate numerous simultaneous rockfalls in mountain ranges that are sensitive to seismic shaking. Instead of dating fault-line sediments, lichenometry involves measuring the size of lichens growing on the rocks exposed by these rockfalls. Lichens—symbiotic organisms consisting of a fungus and an alga—quickly colonize newly exposed rock surfaces in the wake of rockfalls, and once established they grow radially, flat against the rocks, at a slow but constant rate for as long as 1,000 years if left undisturbed. One species of North American lichen, for example, spreads outward by about 9.5 millimeters each century. Hence, the diameter of the largest lichen on a boulder provides direct evidence of when the boulder was dislodged and repositioned. If many rockfalls over a large geographic area occurred simultaneously, that pattern would imply that there had been a strong earthquake. The location of the earthquake's epicenter can then be determined by mapping these rockfalls, since they decrease in abundance as the distance from the epicenter increases. Lichenometry has distinct advantages over radiocarbon dating. Radiocarbon dating is accurate only to within plus or minus 40 years, because the amount of the carbon 14 isotope varies naturally in the environment depending on the intensity of the radiation striking Earth's upper atmosphere. Additionally, this intensity has fluctuated greatly during the past 300 years, causing many radiocarbon datings of events during this period to be of little value. Lichenometry, Bull and Brandon claim, can accurately date an earthquake to within ten years. They note, however, that using lichenometry requires careful site selection and accurate calibration of lichen growth rates, adding that the method is best used for earthquakes that occurred within the last 500 years. Sites must be selected to minimize the influence of snow avalanches and other disturbances that would affect normal lichen growth, and conditions like shade and wind that promote faster lichen growth must be factored in.
201012_1-RC_1_8
[ "identifying the number of times a particular river has flooded in the past 1,000 years", "identifying the age of a fossilized skeleton of a mammal that lived many thousands of years ago", "identifying the age of an ancient beach now underwater approximately 30 kilometers off the present shore", "identifying the rate, in kilometers per century, at which a glacier has been receding up a mountain valley", "identifying local trends in annual rainfall rates in a particular valley over the past five centuries" ]
3
Given the information in the passage, to which one of the following would lichenometry likely be most applicable?
While courts have long allowed custom-made medical illustrations depicting personal injury to be presented as evidence in legal cases, the issue of whether they have a legitimate place in the courtroom is surrounded by ongoing debate and misinformation. Some opponents of their general use argue that while illustrations are sometimes invaluable in presenting the physical details of a personal injury, in all cases except those involving the most unusual injuries, illustrations from medical textbooks can be adequate. Most injuries, such as fractures and whiplash, they say, are rather generic in nature—certain commonly encountered forces act on particular areas of the body in standard ways—so they can be represented by generic illustrations. Another line of complaint stems from the belief that custom-made illustrations often misrepresent the facts in order to comply with the partisan interests of litigants. Even some lawyers appear to share a version of this view, believing that such illustrations can be used to bolster a weak case. Illustrators are sometimes approached by lawyers who, unable to find medical experts to support their clients' claims, think that they can replace expert testimony with such deceptive professional illustrations. But this is mistaken. Even if an unscrupulous illustrator could be found, such illustrations would be inadmissible as evidence in the courtroom unless a medical expert were present to testify to their accuracy. It has also been maintained that custom-made illustrations may subtly distort the issues through the use of emphasis, coloration, and other means, even if they are technically accurate. But professional medical illustrators strive for objective accuracy and avoid devices that have inflammatory potential, sometimes even eschewing the use of color. Unlike illustrations in medical textbooks, which are designed to include the extensive detail required by medical students, custom-made medical illustrations are designed to include only the information that is relevant for those deciding a case. The end user is typically a jury or a judge, for whose benefit the depiction is reduced to the details that are crucial to determining the legally relevant facts. The more complex details often found in textbooks can be deleted so as not to confuse the issue. For example, illustrations of such things as veins and arteries would only get in the way when an illustration is supposed to be used to explain the nature of a bone fracture. Custom-made medical illustrations, which are based on a plaintiff's X rays, computerized tomography scans, and medical records and reports, are especially valuable in that they provide visual representations of data whose verbal description would be very complex. Expert testimony by medical professionals often relies heavily on the use of technical terminology, which those who are not specially trained in the field find difficult to translate mentally into visual imagery. Since, for most people, adequate understanding of physical data depends on thinking at least partly in visual terms, the clearly presented visual stimulation provided by custom-made illustrations can be quite instructive.
201012_1-RC_2_9
[ "schematic drawings accompanying an engineer's oral presentation", "road maps used by people unfamiliar with an area so that they will not have to get verbal instructions from strangers", "children's drawings that psychologists use to detect wishes and anxieties not apparent in the children's behavior", "a reproduction of a famous painting in an art history textbook", "an artist's preliminary sketches for a painting" ]
0
Which one of the following is most analogous to the role that, according to the author, custom-made medical illustrations play in personal injury cases?
While courts have long allowed custom-made medical illustrations depicting personal injury to be presented as evidence in legal cases, the issue of whether they have a legitimate place in the courtroom is surrounded by ongoing debate and misinformation. Some opponents of their general use argue that while illustrations are sometimes invaluable in presenting the physical details of a personal injury, in all cases except those involving the most unusual injuries, illustrations from medical textbooks can be adequate. Most injuries, such as fractures and whiplash, they say, are rather generic in nature—certain commonly encountered forces act on particular areas of the body in standard ways—so they can be represented by generic illustrations. Another line of complaint stems from the belief that custom-made illustrations often misrepresent the facts in order to comply with the partisan interests of litigants. Even some lawyers appear to share a version of this view, believing that such illustrations can be used to bolster a weak case. Illustrators are sometimes approached by lawyers who, unable to find medical experts to support their clients' claims, think that they can replace expert testimony with such deceptive professional illustrations. But this is mistaken. Even if an unscrupulous illustrator could be found, such illustrations would be inadmissible as evidence in the courtroom unless a medical expert were present to testify to their accuracy. It has also been maintained that custom-made illustrations may subtly distort the issues through the use of emphasis, coloration, and other means, even if they are technically accurate. But professional medical illustrators strive for objective accuracy and avoid devices that have inflammatory potential, sometimes even eschewing the use of color. Unlike illustrations in medical textbooks, which are designed to include the extensive detail required by medical students, custom-made medical illustrations are designed to include only the information that is relevant for those deciding a case. The end user is typically a jury or a judge, for whose benefit the depiction is reduced to the details that are crucial to determining the legally relevant facts. The more complex details often found in textbooks can be deleted so as not to confuse the issue. For example, illustrations of such things as veins and arteries would only get in the way when an illustration is supposed to be used to explain the nature of a bone fracture. Custom-made medical illustrations, which are based on a plaintiff's X rays, computerized tomography scans, and medical records and reports, are especially valuable in that they provide visual representations of data whose verbal description would be very complex. Expert testimony by medical professionals often relies heavily on the use of technical terminology, which those who are not specially trained in the field find difficult to translate mentally into visual imagery. Since, for most people, adequate understanding of physical data depends on thinking at least partly in visual terms, the clearly presented visual stimulation provided by custom-made illustrations can be quite instructive.
201012_1-RC_2_10
[ "They tend to rely less on the use of color than do custom-made medical illustrations.", "They are inadmissible in a courtroom unless a medical expert is present to testify to their accuracy.", "They are in many cases drawn by the same individuals who draw custom-made medical illustrations for courtroom use.", "They are believed by most lawyers to be less prone than custom-made medical illustrations to misrepresent the nature of a personal injury.", "In many cases they are more apt to confuse jurors than are custom-made medical illustrations." ]
4
Based on the passage, which one of the following is the author most likely to believe about illustrations in medical textbooks?
While courts have long allowed custom-made medical illustrations depicting personal injury to be presented as evidence in legal cases, the issue of whether they have a legitimate place in the courtroom is surrounded by ongoing debate and misinformation. Some opponents of their general use argue that while illustrations are sometimes invaluable in presenting the physical details of a personal injury, in all cases except those involving the most unusual injuries, illustrations from medical textbooks can be adequate. Most injuries, such as fractures and whiplash, they say, are rather generic in nature—certain commonly encountered forces act on particular areas of the body in standard ways—so they can be represented by generic illustrations. Another line of complaint stems from the belief that custom-made illustrations often misrepresent the facts in order to comply with the partisan interests of litigants. Even some lawyers appear to share a version of this view, believing that such illustrations can be used to bolster a weak case. Illustrators are sometimes approached by lawyers who, unable to find medical experts to support their clients' claims, think that they can replace expert testimony with such deceptive professional illustrations. But this is mistaken. Even if an unscrupulous illustrator could be found, such illustrations would be inadmissible as evidence in the courtroom unless a medical expert were present to testify to their accuracy. It has also been maintained that custom-made illustrations may subtly distort the issues through the use of emphasis, coloration, and other means, even if they are technically accurate. But professional medical illustrators strive for objective accuracy and avoid devices that have inflammatory potential, sometimes even eschewing the use of color. Unlike illustrations in medical textbooks, which are designed to include the extensive detail required by medical students, custom-made medical illustrations are designed to include only the information that is relevant for those deciding a case. The end user is typically a jury or a judge, for whose benefit the depiction is reduced to the details that are crucial to determining the legally relevant facts. The more complex details often found in textbooks can be deleted so as not to confuse the issue. For example, illustrations of such things as veins and arteries would only get in the way when an illustration is supposed to be used to explain the nature of a bone fracture. Custom-made medical illustrations, which are based on a plaintiff's X rays, computerized tomography scans, and medical records and reports, are especially valuable in that they provide visual representations of data whose verbal description would be very complex. Expert testimony by medical professionals often relies heavily on the use of technical terminology, which those who are not specially trained in the field find difficult to translate mentally into visual imagery. Since, for most people, adequate understanding of physical data depends on thinking at least partly in visual terms, the clearly presented visual stimulation provided by custom-made illustrations can be quite instructive.
201012_1-RC_2_11
[ "decide which custom-made medical illustrations should be admissible", "temper the impact of the illustrations on judges and jurors who are not medical professionals", "make medical illustrations understandable to judges and jurors", "provide opinions to attorneys as to which illustrations, if any, would be useful", "provide their opinions as to the accuracy of the illustrations" ]
4
The passage states that a role of medical experts in relation to custom-made medical illustrations in the courtroom is to
While courts have long allowed custom-made medical illustrations depicting personal injury to be presented as evidence in legal cases, the issue of whether they have a legitimate place in the courtroom is surrounded by ongoing debate and misinformation. Some opponents of their general use argue that while illustrations are sometimes invaluable in presenting the physical details of a personal injury, in all cases except those involving the most unusual injuries, illustrations from medical textbooks can be adequate. Most injuries, such as fractures and whiplash, they say, are rather generic in nature—certain commonly encountered forces act on particular areas of the body in standard ways—so they can be represented by generic illustrations. Another line of complaint stems from the belief that custom-made illustrations often misrepresent the facts in order to comply with the partisan interests of litigants. Even some lawyers appear to share a version of this view, believing that such illustrations can be used to bolster a weak case. Illustrators are sometimes approached by lawyers who, unable to find medical experts to support their clients' claims, think that they can replace expert testimony with such deceptive professional illustrations. But this is mistaken. Even if an unscrupulous illustrator could be found, such illustrations would be inadmissible as evidence in the courtroom unless a medical expert were present to testify to their accuracy. It has also been maintained that custom-made illustrations may subtly distort the issues through the use of emphasis, coloration, and other means, even if they are technically accurate. But professional medical illustrators strive for objective accuracy and avoid devices that have inflammatory potential, sometimes even eschewing the use of color. Unlike illustrations in medical textbooks, which are designed to include the extensive detail required by medical students, custom-made medical illustrations are designed to include only the information that is relevant for those deciding a case. The end user is typically a jury or a judge, for whose benefit the depiction is reduced to the details that are crucial to determining the legally relevant facts. The more complex details often found in textbooks can be deleted so as not to confuse the issue. For example, illustrations of such things as veins and arteries would only get in the way when an illustration is supposed to be used to explain the nature of a bone fracture. Custom-made medical illustrations, which are based on a plaintiff's X rays, computerized tomography scans, and medical records and reports, are especially valuable in that they provide visual representations of data whose verbal description would be very complex. Expert testimony by medical professionals often relies heavily on the use of technical terminology, which those who are not specially trained in the field find difficult to translate mentally into visual imagery. Since, for most people, adequate understanding of physical data depends on thinking at least partly in visual terms, the clearly presented visual stimulation provided by custom-made illustrations can be quite instructive.
201012_1-RC_2_12
[ "custom-made medical illustrations accurately represent human anatomy, whereas medical textbook illustrations do not", "medical textbook illustrations employ color freely, whereas custom-made medical illustrations must avoid color", "medical textbook illustrations are objective, while custom-made medical illustrations are subjective", "medical textbook illustrations are very detailed, whereas custom-made medical illustrations include only details that are relevant to the case", "medical textbook illustrations are readily comprehended by nonmedical audiences, whereas custom-made medical illustrations are not" ]
3
According to the passage, one of the ways that medical textbook illustrations differ from custom-made medical illustrations is that
While courts have long allowed custom-made medical illustrations depicting personal injury to be presented as evidence in legal cases, the issue of whether they have a legitimate place in the courtroom is surrounded by ongoing debate and misinformation. Some opponents of their general use argue that while illustrations are sometimes invaluable in presenting the physical details of a personal injury, in all cases except those involving the most unusual injuries, illustrations from medical textbooks can be adequate. Most injuries, such as fractures and whiplash, they say, are rather generic in nature—certain commonly encountered forces act on particular areas of the body in standard ways—so they can be represented by generic illustrations. Another line of complaint stems from the belief that custom-made illustrations often misrepresent the facts in order to comply with the partisan interests of litigants. Even some lawyers appear to share a version of this view, believing that such illustrations can be used to bolster a weak case. Illustrators are sometimes approached by lawyers who, unable to find medical experts to support their clients' claims, think that they can replace expert testimony with such deceptive professional illustrations. But this is mistaken. Even if an unscrupulous illustrator could be found, such illustrations would be inadmissible as evidence in the courtroom unless a medical expert were present to testify to their accuracy. It has also been maintained that custom-made illustrations may subtly distort the issues through the use of emphasis, coloration, and other means, even if they are technically accurate. But professional medical illustrators strive for objective accuracy and avoid devices that have inflammatory potential, sometimes even eschewing the use of color. Unlike illustrations in medical textbooks, which are designed to include the extensive detail required by medical students, custom-made medical illustrations are designed to include only the information that is relevant for those deciding a case. The end user is typically a jury or a judge, for whose benefit the depiction is reduced to the details that are crucial to determining the legally relevant facts. The more complex details often found in textbooks can be deleted so as not to confuse the issue. For example, illustrations of such things as veins and arteries would only get in the way when an illustration is supposed to be used to explain the nature of a bone fracture. Custom-made medical illustrations, which are based on a plaintiff's X rays, computerized tomography scans, and medical records and reports, are especially valuable in that they provide visual representations of data whose verbal description would be very complex. Expert testimony by medical professionals often relies heavily on the use of technical terminology, which those who are not specially trained in the field find difficult to translate mentally into visual imagery. Since, for most people, adequate understanding of physical data depends on thinking at least partly in visual terms, the clearly presented visual stimulation provided by custom-made illustrations can be quite instructive.
201012_1-RC_2_13
[ "appreciation of the difficulty involved in explaining medical data to judges and jurors together with skepticism concerning the effectiveness of such testimony", "admiration for the experts' technical knowledge coupled with disdain for the communications skills of medical professionals", "acceptance of the accuracy of such testimony accompanied with awareness of the limitations of a presentation that is entirely verbal", "respect for the medical profession tempered by apprehension concerning the tendency of medical professionals to try to overwhelm judges and jurors with technical details", "respect for expert witnesses combined with intolerance of the use of technical terminology" ]
2
The author's attitude toward the testimony of medical experts in personal injury cases is most accurately described as
While courts have long allowed custom-made medical illustrations depicting personal injury to be presented as evidence in legal cases, the issue of whether they have a legitimate place in the courtroom is surrounded by ongoing debate and misinformation. Some opponents of their general use argue that while illustrations are sometimes invaluable in presenting the physical details of a personal injury, in all cases except those involving the most unusual injuries, illustrations from medical textbooks can be adequate. Most injuries, such as fractures and whiplash, they say, are rather generic in nature—certain commonly encountered forces act on particular areas of the body in standard ways—so they can be represented by generic illustrations. Another line of complaint stems from the belief that custom-made illustrations often misrepresent the facts in order to comply with the partisan interests of litigants. Even some lawyers appear to share a version of this view, believing that such illustrations can be used to bolster a weak case. Illustrators are sometimes approached by lawyers who, unable to find medical experts to support their clients' claims, think that they can replace expert testimony with such deceptive professional illustrations. But this is mistaken. Even if an unscrupulous illustrator could be found, such illustrations would be inadmissible as evidence in the courtroom unless a medical expert were present to testify to their accuracy. It has also been maintained that custom-made illustrations may subtly distort the issues through the use of emphasis, coloration, and other means, even if they are technically accurate. But professional medical illustrators strive for objective accuracy and avoid devices that have inflammatory potential, sometimes even eschewing the use of color. Unlike illustrations in medical textbooks, which are designed to include the extensive detail required by medical students, custom-made medical illustrations are designed to include only the information that is relevant for those deciding a case. The end user is typically a jury or a judge, for whose benefit the depiction is reduced to the details that are crucial to determining the legally relevant facts. The more complex details often found in textbooks can be deleted so as not to confuse the issue. For example, illustrations of such things as veins and arteries would only get in the way when an illustration is supposed to be used to explain the nature of a bone fracture. Custom-made medical illustrations, which are based on a plaintiff's X rays, computerized tomography scans, and medical records and reports, are especially valuable in that they provide visual representations of data whose verbal description would be very complex. Expert testimony by medical professionals often relies heavily on the use of technical terminology, which those who are not specially trained in the field find difficult to translate mentally into visual imagery. Since, for most people, adequate understanding of physical data depends on thinking at least partly in visual terms, the clearly presented visual stimulation provided by custom-made illustrations can be quite instructive.
201012_1-RC_2_14
[ "argue for a greater use of custom-made medical illustrations in court cases involving personal injury", "reply to a variant of the objection to custom-made medical illustrations raised in the second paragraph", "argue against the position that illustrations from medical textbooks are well suited for use in the courtroom", "discuss in greater detail why custom-made medical illustrations are controversial", "describe the differences between custom-made medical illustrations and illustrations from medical textbooks" ]
1
The author's primary purpose in the third paragraph is to
Passage A Because dental caries (decay) is strongly linked to the sticky, carbohydrate-rich staples of agricultural diets, prehistoric human teeth can provide clues about when a population made the transition from a hunter-gatherer diet to an agricultural one. Caries formation is influenced by several factors, including tooth structure, bacteria in the mouth, and diet. In particular, caries formation is affected by carbohydrates' texture and composition, since carbohydrates more readily stick to teeth. Many researchers have demonstrated the link between carbohydrate consumption and caries. In North America, Leigh studied caries in archaeologically derived teeth, noting that caries rates differed between indigenous populations that primarily consumed meat (a Sioux sample showed almost no caries) and those heavily dependent on cultivated maize (a Zuni sample had 75 percent carious teeth). Leigh's findings have been frequently confirmed by other researchers, who have shown that, in general, the greater a population's dependence on agriculture is, the higher its rate of caries formation will be. Under some circumstances, however, nonagricultural populations may exhibit relatively high caries rates. For example, early nonagricultural populations in western North America who consumed large amounts of highly processed stone-ground flour made from gathered acorns show relatively high caries frequencies. And wild plants collected by the Hopi included several species with high cariogenic potential, notably pinyon nuts and wild tubers. Passage B Archaeologists recovered human skeletal remains interred over a 2,000-year period in prehistoric Ban Chiang, Thailand. The site's early inhabitants appear to have had a hunter-gatherer-cultivator economy. Evidence indicates that, over time, the population became increasingly dependent on agriculture. Research suggests that agricultural intensification results in declining human health, including dental health. Studies show that dental caries is uncommon in pre-agricultural populations. Increased caries frequency may result from increased consumption of starchy-sticky foodstuffs or from alterations in tooth wear. The wearing down of tooth crown surfaces reduces caries formation by removing fissures that can trap food particles. A reduction of fiber or grit in a diet may diminish tooth wear, thus increasing caries frequency. However, severe wear that exposes a tooth's pulp cavity may also result in caries. The diet of Ban Chiang's inhabitants included some cultivated rice and yams from the beginning of the period represented by the recovered remains. These were part of a varied diet that also included wild plant and animal foods. Since both rice and yams are carbohydrates, increased reliance on either or both should theoretically result in increased caries frequency. Yet comparisons of caries frequency in the Early and Late Ban Chiang Groups indicate that overall caries frequency is slightly greater in the Early Group. Tooth wear patterns do not indicate tooth wear changes between Early and Late Groups that would explain this unexpected finding. It is more likely that, although dependence on agriculture increased, the diet in the Late period remained varied enough that no single food dominated. Furthermore, there may have been a shift from sweeter carbohydrates (yams) toward rice, a less cariogenic carbohydrate.
201012_1-RC_3_15
[ "evidence of the development of agriculture in the archaeological record", "the impact of agriculture on the overall health of human populations", "the effects of carbohydrate-rich foods on caries formation in strictly agricultural societies", "the archaeological evidence regarding when the first agricultural society arose", "the extent to which pre-agricultural populations were able to obtain carbohydrate-rich foods" ]
0
Both passages are primarily concerned with examining which one of the following topics?
Passage A Because dental caries (decay) is strongly linked to the sticky, carbohydrate-rich staples of agricultural diets, prehistoric human teeth can provide clues about when a population made the transition from a hunter-gatherer diet to an agricultural one. Caries formation is influenced by several factors, including tooth structure, bacteria in the mouth, and diet. In particular, caries formation is affected by carbohydrates' texture and composition, since carbohydrates more readily stick to teeth. Many researchers have demonstrated the link between carbohydrate consumption and caries. In North America, Leigh studied caries in archaeologically derived teeth, noting that caries rates differed between indigenous populations that primarily consumed meat (a Sioux sample showed almost no caries) and those heavily dependent on cultivated maize (a Zuni sample had 75 percent carious teeth). Leigh's findings have been frequently confirmed by other researchers, who have shown that, in general, the greater a population's dependence on agriculture is, the higher its rate of caries formation will be. Under some circumstances, however, nonagricultural populations may exhibit relatively high caries rates. For example, early nonagricultural populations in western North America who consumed large amounts of highly processed stone-ground flour made from gathered acorns show relatively high caries frequencies. And wild plants collected by the Hopi included several species with high cariogenic potential, notably pinyon nuts and wild tubers. Passage B Archaeologists recovered human skeletal remains interred over a 2,000-year period in prehistoric Ban Chiang, Thailand. The site's early inhabitants appear to have had a hunter-gatherer-cultivator economy. Evidence indicates that, over time, the population became increasingly dependent on agriculture. Research suggests that agricultural intensification results in declining human health, including dental health. Studies show that dental caries is uncommon in pre-agricultural populations. Increased caries frequency may result from increased consumption of starchy-sticky foodstuffs or from alterations in tooth wear. The wearing down of tooth crown surfaces reduces caries formation by removing fissures that can trap food particles. A reduction of fiber or grit in a diet may diminish tooth wear, thus increasing caries frequency. However, severe wear that exposes a tooth's pulp cavity may also result in caries. The diet of Ban Chiang's inhabitants included some cultivated rice and yams from the beginning of the period represented by the recovered remains. These were part of a varied diet that also included wild plant and animal foods. Since both rice and yams are carbohydrates, increased reliance on either or both should theoretically result in increased caries frequency. Yet comparisons of caries frequency in the Early and Late Ban Chiang Groups indicate that overall caries frequency is slightly greater in the Early Group. Tooth wear patterns do not indicate tooth wear changes between Early and Late Groups that would explain this unexpected finding. It is more likely that, although dependence on agriculture increased, the diet in the Late period remained varied enough that no single food dominated. Furthermore, there may have been a shift from sweeter carbohydrates (yams) toward rice, a less cariogenic carbohydrate.
201012_1-RC_3_16
[ "While the Ban Chiang populations consumed several highly cariogenic foods, the populations discussed in the last paragraph of passage A did not.", "While the Ban Chiang populations ate cultivated foods, the populations discussed in the last paragraph of passage A did not.", "While the Ban Chiang populations consumed a diet consisting primarily of carbohydrates, the populations discussed in the last paragraph of passage A did not.", "While the Ban Chiang populations exhibited very high levels of tooth wear, the populations discussed in the last paragraph of passage A did not.", "While the Ban Chiang populations ate certain highly processed foods, the populations discussed in the last paragraph of passage A did not." ]
1
Which one of the following distinguishes the Ban Chiang populations discussed in passage B from the populations discussed in the last paragraph of passage A?
Passage A Because dental caries (decay) is strongly linked to the sticky, carbohydrate-rich staples of agricultural diets, prehistoric human teeth can provide clues about when a population made the transition from a hunter-gatherer diet to an agricultural one. Caries formation is influenced by several factors, including tooth structure, bacteria in the mouth, and diet. In particular, caries formation is affected by carbohydrates' texture and composition, since carbohydrates more readily stick to teeth. Many researchers have demonstrated the link between carbohydrate consumption and caries. In North America, Leigh studied caries in archaeologically derived teeth, noting that caries rates differed between indigenous populations that primarily consumed meat (a Sioux sample showed almost no caries) and those heavily dependent on cultivated maize (a Zuni sample had 75 percent carious teeth). Leigh's findings have been frequently confirmed by other researchers, who have shown that, in general, the greater a population's dependence on agriculture is, the higher its rate of caries formation will be. Under some circumstances, however, nonagricultural populations may exhibit relatively high caries rates. For example, early nonagricultural populations in western North America who consumed large amounts of highly processed stone-ground flour made from gathered acorns show relatively high caries frequencies. And wild plants collected by the Hopi included several species with high cariogenic potential, notably pinyon nuts and wild tubers. Passage B Archaeologists recovered human skeletal remains interred over a 2,000-year period in prehistoric Ban Chiang, Thailand. The site's early inhabitants appear to have had a hunter-gatherer-cultivator economy. Evidence indicates that, over time, the population became increasingly dependent on agriculture. Research suggests that agricultural intensification results in declining human health, including dental health. Studies show that dental caries is uncommon in pre-agricultural populations. Increased caries frequency may result from increased consumption of starchy-sticky foodstuffs or from alterations in tooth wear. The wearing down of tooth crown surfaces reduces caries formation by removing fissures that can trap food particles. A reduction of fiber or grit in a diet may diminish tooth wear, thus increasing caries frequency. However, severe wear that exposes a tooth's pulp cavity may also result in caries. The diet of Ban Chiang's inhabitants included some cultivated rice and yams from the beginning of the period represented by the recovered remains. These were part of a varied diet that also included wild plant and animal foods. Since both rice and yams are carbohydrates, increased reliance on either or both should theoretically result in increased caries frequency. Yet comparisons of caries frequency in the Early and Late Ban Chiang Groups indicate that overall caries frequency is slightly greater in the Early Group. Tooth wear patterns do not indicate tooth wear changes between Early and Late Groups that would explain this unexpected finding. It is more likely that, although dependence on agriculture increased, the diet in the Late period remained varied enough that no single food dominated. Furthermore, there may have been a shift from sweeter carbohydrates (yams) toward rice, a less cariogenic carbohydrate.
201012_1-RC_3_17
[ "They can either limit or promote caries formation, depending on their prevalence in the diet.", "They are typically consumed in greater quantities as a population adopts agriculture.", "They have a negative effect on overall health since they have no nutritional value.", "They contribute to the formation of fissures in tooth surfaces.", "They increase the stickiness of carbohydrate-rich foods." ]
0
Passage B most strongly supports which one of the following statements about fiber and grit in a diet?
Passage A Because dental caries (decay) is strongly linked to the sticky, carbohydrate-rich staples of agricultural diets, prehistoric human teeth can provide clues about when a population made the transition from a hunter-gatherer diet to an agricultural one. Caries formation is influenced by several factors, including tooth structure, bacteria in the mouth, and diet. In particular, caries formation is affected by carbohydrates' texture and composition, since carbohydrates more readily stick to teeth. Many researchers have demonstrated the link between carbohydrate consumption and caries. In North America, Leigh studied caries in archaeologically derived teeth, noting that caries rates differed between indigenous populations that primarily consumed meat (a Sioux sample showed almost no caries) and those heavily dependent on cultivated maize (a Zuni sample had 75 percent carious teeth). Leigh's findings have been frequently confirmed by other researchers, who have shown that, in general, the greater a population's dependence on agriculture is, the higher its rate of caries formation will be. Under some circumstances, however, nonagricultural populations may exhibit relatively high caries rates. For example, early nonagricultural populations in western North America who consumed large amounts of highly processed stone-ground flour made from gathered acorns show relatively high caries frequencies. And wild plants collected by the Hopi included several species with high cariogenic potential, notably pinyon nuts and wild tubers. Passage B Archaeologists recovered human skeletal remains interred over a 2,000-year period in prehistoric Ban Chiang, Thailand. The site's early inhabitants appear to have had a hunter-gatherer-cultivator economy. Evidence indicates that, over time, the population became increasingly dependent on agriculture. Research suggests that agricultural intensification results in declining human health, including dental health. Studies show that dental caries is uncommon in pre-agricultural populations. Increased caries frequency may result from increased consumption of starchy-sticky foodstuffs or from alterations in tooth wear. The wearing down of tooth crown surfaces reduces caries formation by removing fissures that can trap food particles. A reduction of fiber or grit in a diet may diminish tooth wear, thus increasing caries frequency. However, severe wear that exposes a tooth's pulp cavity may also result in caries. The diet of Ban Chiang's inhabitants included some cultivated rice and yams from the beginning of the period represented by the recovered remains. These were part of a varied diet that also included wild plant and animal foods. Since both rice and yams are carbohydrates, increased reliance on either or both should theoretically result in increased caries frequency. Yet comparisons of caries frequency in the Early and Late Ban Chiang Groups indicate that overall caries frequency is slightly greater in the Early Group. Tooth wear patterns do not indicate tooth wear changes between Early and Late Groups that would explain this unexpected finding. It is more likely that, although dependence on agriculture increased, the diet in the Late period remained varied enough that no single food dominated. Furthermore, there may have been a shift from sweeter carbohydrates (yams) toward rice, a less cariogenic carbohydrate.
201012_1-RC_3_18
[ "the effect of consuming highly processed foods on caries formation", "the relatively low incidence of caries among nonagricultural people", "the effect of fiber and grit in the diet on caries formation", "the effect of the consumption of wild foods on tooth wear", "the effect of agricultural intensification on overall human health" ]
1
Which one of the following is mentioned in both passages as evidence tending to support the prevailing view regarding the relationship between dental caries and carbohydrate consumption?
Passage A Because dental caries (decay) is strongly linked to the sticky, carbohydrate-rich staples of agricultural diets, prehistoric human teeth can provide clues about when a population made the transition from a hunter-gatherer diet to an agricultural one. Caries formation is influenced by several factors, including tooth structure, bacteria in the mouth, and diet. In particular, caries formation is affected by carbohydrates' texture and composition, since carbohydrates more readily stick to teeth. Many researchers have demonstrated the link between carbohydrate consumption and caries. In North America, Leigh studied caries in archaeologically derived teeth, noting that caries rates differed between indigenous populations that primarily consumed meat (a Sioux sample showed almost no caries) and those heavily dependent on cultivated maize (a Zuni sample had 75 percent carious teeth). Leigh's findings have been frequently confirmed by other researchers, who have shown that, in general, the greater a population's dependence on agriculture is, the higher its rate of caries formation will be. Under some circumstances, however, nonagricultural populations may exhibit relatively high caries rates. For example, early nonagricultural populations in western North America who consumed large amounts of highly processed stone-ground flour made from gathered acorns show relatively high caries frequencies. And wild plants collected by the Hopi included several species with high cariogenic potential, notably pinyon nuts and wild tubers. Passage B Archaeologists recovered human skeletal remains interred over a 2,000-year period in prehistoric Ban Chiang, Thailand. The site's early inhabitants appear to have had a hunter-gatherer-cultivator economy. Evidence indicates that, over time, the population became increasingly dependent on agriculture. Research suggests that agricultural intensification results in declining human health, including dental health. Studies show that dental caries is uncommon in pre-agricultural populations. Increased caries frequency may result from increased consumption of starchy-sticky foodstuffs or from alterations in tooth wear. The wearing down of tooth crown surfaces reduces caries formation by removing fissures that can trap food particles. A reduction of fiber or grit in a diet may diminish tooth wear, thus increasing caries frequency. However, severe wear that exposes a tooth's pulp cavity may also result in caries. The diet of Ban Chiang's inhabitants included some cultivated rice and yams from the beginning of the period represented by the recovered remains. These were part of a varied diet that also included wild plant and animal foods. Since both rice and yams are carbohydrates, increased reliance on either or both should theoretically result in increased caries frequency. Yet comparisons of caries frequency in the Early and Late Ban Chiang Groups indicate that overall caries frequency is slightly greater in the Early Group. Tooth wear patterns do not indicate tooth wear changes between Early and Late Groups that would explain this unexpected finding. It is more likely that, although dependence on agriculture increased, the diet in the Late period remained varied enough that no single food dominated. Furthermore, there may have been a shift from sweeter carbohydrates (yams) toward rice, a less cariogenic carbohydrate.
201012_1-RC_3_19
[ "The incidence of dental caries increases predictably in populations over time.", "Dental caries is often difficult to detect in teeth recovered from archaeological sites.", "Dental caries tends to be more prevalent in populations with a hunter-gatherer diet than in populations with an agricultural diet.", "The frequency of dental caries in a population does not necessarily correspond directly to the population's degree of dependence on agriculture.", "The formation of dental caries tends to be more strongly linked to tooth wear than to the consumption of a particular kind of food." ]
3
It is most likely that both authors would agree with which one of the following statements about dental caries?
Passage A Because dental caries (decay) is strongly linked to the sticky, carbohydrate-rich staples of agricultural diets, prehistoric human teeth can provide clues about when a population made the transition from a hunter-gatherer diet to an agricultural one. Caries formation is influenced by several factors, including tooth structure, bacteria in the mouth, and diet. In particular, caries formation is affected by carbohydrates' texture and composition, since carbohydrates more readily stick to teeth. Many researchers have demonstrated the link between carbohydrate consumption and caries. In North America, Leigh studied caries in archaeologically derived teeth, noting that caries rates differed between indigenous populations that primarily consumed meat (a Sioux sample showed almost no caries) and those heavily dependent on cultivated maize (a Zuni sample had 75 percent carious teeth). Leigh's findings have been frequently confirmed by other researchers, who have shown that, in general, the greater a population's dependence on agriculture is, the higher its rate of caries formation will be. Under some circumstances, however, nonagricultural populations may exhibit relatively high caries rates. For example, early nonagricultural populations in western North America who consumed large amounts of highly processed stone-ground flour made from gathered acorns show relatively high caries frequencies. And wild plants collected by the Hopi included several species with high cariogenic potential, notably pinyon nuts and wild tubers. Passage B Archaeologists recovered human skeletal remains interred over a 2,000-year period in prehistoric Ban Chiang, Thailand. The site's early inhabitants appear to have had a hunter-gatherer-cultivator economy. Evidence indicates that, over time, the population became increasingly dependent on agriculture. Research suggests that agricultural intensification results in declining human health, including dental health. Studies show that dental caries is uncommon in pre-agricultural populations. Increased caries frequency may result from increased consumption of starchy-sticky foodstuffs or from alterations in tooth wear. The wearing down of tooth crown surfaces reduces caries formation by removing fissures that can trap food particles. A reduction of fiber or grit in a diet may diminish tooth wear, thus increasing caries frequency. However, severe wear that exposes a tooth's pulp cavity may also result in caries. The diet of Ban Chiang's inhabitants included some cultivated rice and yams from the beginning of the period represented by the recovered remains. These were part of a varied diet that also included wild plant and animal foods. Since both rice and yams are carbohydrates, increased reliance on either or both should theoretically result in increased caries frequency. Yet comparisons of caries frequency in the Early and Late Ban Chiang Groups indicate that overall caries frequency is slightly greater in the Early Group. Tooth wear patterns do not indicate tooth wear changes between Early and Late Groups that would explain this unexpected finding. It is more likely that, although dependence on agriculture increased, the diet in the Late period remained varied enough that no single food dominated. Furthermore, there may have been a shift from sweeter carbohydrates (yams) toward rice, a less cariogenic carbohydrate.
201012_1-RC_3_20
[ "Varieties that are cultivated have a greater tendency to cause caries than varieties that grow wild.", "Those that require substantial processing do not play a role in hunter-gatherer diets.", "Some of them naturally have a greater tendency than others to cause caries.", "Some of them reduce caries formation because their relatively high fiber content increases tooth wear.", "The cariogenic potential of a given variety increases if it is cultivated rather than gathered in the wild." ]
2
Each passage suggests which one of the following about carbohydrate-rich foods?
Passage A Because dental caries (decay) is strongly linked to the sticky, carbohydrate-rich staples of agricultural diets, prehistoric human teeth can provide clues about when a population made the transition from a hunter-gatherer diet to an agricultural one. Caries formation is influenced by several factors, including tooth structure, bacteria in the mouth, and diet. In particular, caries formation is affected by carbohydrates' texture and composition, since carbohydrates more readily stick to teeth. Many researchers have demonstrated the link between carbohydrate consumption and caries. In North America, Leigh studied caries in archaeologically derived teeth, noting that caries rates differed between indigenous populations that primarily consumed meat (a Sioux sample showed almost no caries) and those heavily dependent on cultivated maize (a Zuni sample had 75 percent carious teeth). Leigh's findings have been frequently confirmed by other researchers, who have shown that, in general, the greater a population's dependence on agriculture is, the higher its rate of caries formation will be. Under some circumstances, however, nonagricultural populations may exhibit relatively high caries rates. For example, early nonagricultural populations in western North America who consumed large amounts of highly processed stone-ground flour made from gathered acorns show relatively high caries frequencies. And wild plants collected by the Hopi included several species with high cariogenic potential, notably pinyon nuts and wild tubers. Passage B Archaeologists recovered human skeletal remains interred over a 2,000-year period in prehistoric Ban Chiang, Thailand. The site's early inhabitants appear to have had a hunter-gatherer-cultivator economy. Evidence indicates that, over time, the population became increasingly dependent on agriculture. Research suggests that agricultural intensification results in declining human health, including dental health. Studies show that dental caries is uncommon in pre-agricultural populations. Increased caries frequency may result from increased consumption of starchy-sticky foodstuffs or from alterations in tooth wear. The wearing down of tooth crown surfaces reduces caries formation by removing fissures that can trap food particles. A reduction of fiber or grit in a diet may diminish tooth wear, thus increasing caries frequency. However, severe wear that exposes a tooth's pulp cavity may also result in caries. The diet of Ban Chiang's inhabitants included some cultivated rice and yams from the beginning of the period represented by the recovered remains. These were part of a varied diet that also included wild plant and animal foods. Since both rice and yams are carbohydrates, increased reliance on either or both should theoretically result in increased caries frequency. Yet comparisons of caries frequency in the Early and Late Ban Chiang Groups indicate that overall caries frequency is slightly greater in the Early Group. Tooth wear patterns do not indicate tooth wear changes between Early and Late Groups that would explain this unexpected finding. It is more likely that, although dependence on agriculture increased, the diet in the Late period remained varied enough that no single food dominated. Furthermore, there may have been a shift from sweeter carbohydrates (yams) toward rice, a less cariogenic carbohydrate.
201012_1-RC_3_21
[ "The evidence confirms the generalization.", "The evidence tends to support the generalization.", "The evidence is irrelevant to the generalization.", "The evidence does not conform to the generalization.", "The evidence disproves the generalization." ]
3
The evidence from Ban Chiang discussed in passage B relates to the generalization reported in the second paragraph of passage A (lines 20–22) in which one of the following ways?
Recent criticism has sought to align Sarah Orne Jewett, a notable writer of regional fiction in the nineteenth-century United States, with the domestic novelists of the previous generation. Her work does resemble the domestic novels of the 1850s in its focus on women, their domestic occupations, and their social interactions, with men relegated to the periphery. But it also differs markedly from these antecedents. The world depicted in the latter revolves around children. Young children play prominent roles in the domestic novels and the work of child rearing—the struggle to instill a mother's values in a child's character—is their chief source of drama. By contrast, children and child rearing are almost entirely absent from the world of Jewett's fiction. Even more strikingly, while the literary world of the earlier domestic novelists is insistently religious, grounded in the structures of Protestant religious belief, to turn from these writers to Jewett is to encounter an almost wholly secular world. To the extent that these differences do not merely reflect the personal preferences of the authors, we might attribute them to such historical transformations as the migration of the rural young to cities or the increasing secularization of society. But while such factors may help to explain the differences, it can be argued that these differences ultimately reflect different conceptions of the nature and purpose of fiction. The domestic novel of the mid-nineteenth century is based on a conception of fiction as part of a continuum that also included writings devoted to piety and domestic instruction, bound together by a common goal of promoting domestic morality and religious belief. It was not uncommon for the same multipurpose book to be indistinguishably a novel, a child-rearing manual, and a tract on Christian duty. The more didactic aims are absent from Jewett's writing, which rather embodies the late nineteenth-century "high-cultural" conception of fiction as an autonomous sphere with value in and of itself. This high-cultural aesthetic was one among several conceptions of fiction operative in the United States in the 1850s and 1860s, but it became the dominant one later in the nineteenth century and remained so for most of the twentieth. On this conception, fiction came to be seen as pure art: a work was to be viewed in isolation and valued for the formal arrangement of its elements rather than for its larger social connections or the promotion of extraliterary goods. Thus, unlike the domestic novelists, Jewett intended her works not as a means to an end but as an end in themselves. This fundamental difference should be given more weight in assessing their affinities than any superficial similarity in subject matter.
201012_1-RC_4_22
[ "Did any men write domestic novels in the 1850s?", "Were any widely read domestic novels written after the 1860s?", "How did migration to urban areas affect the development of domestic fiction in the 1850s?", "What is an effect that Jewett's conception of literary art had on her fiction?", "With what region of the United States were at least some of Jewett's writings concerned?" ]
3
The passage most helps to answer which one of the following questions?
Recent criticism has sought to align Sarah Orne Jewett, a notable writer of regional fiction in the nineteenth-century United States, with the domestic novelists of the previous generation. Her work does resemble the domestic novels of the 1850s in its focus on women, their domestic occupations, and their social interactions, with men relegated to the periphery. But it also differs markedly from these antecedents. The world depicted in the latter revolves around children. Young children play prominent roles in the domestic novels and the work of child rearing—the struggle to instill a mother's values in a child's character—is their chief source of drama. By contrast, children and child rearing are almost entirely absent from the world of Jewett's fiction. Even more strikingly, while the literary world of the earlier domestic novelists is insistently religious, grounded in the structures of Protestant religious belief, to turn from these writers to Jewett is to encounter an almost wholly secular world. To the extent that these differences do not merely reflect the personal preferences of the authors, we might attribute them to such historical transformations as the migration of the rural young to cities or the increasing secularization of society. But while such factors may help to explain the differences, it can be argued that these differences ultimately reflect different conceptions of the nature and purpose of fiction. The domestic novel of the mid-nineteenth century is based on a conception of fiction as part of a continuum that also included writings devoted to piety and domestic instruction, bound together by a common goal of promoting domestic morality and religious belief. It was not uncommon for the same multipurpose book to be indistinguishably a novel, a child-rearing manual, and a tract on Christian duty. The more didactic aims are absent from Jewett's writing, which rather embodies the late nineteenth-century "high-cultural" conception of fiction as an autonomous sphere with value in and of itself. This high-cultural aesthetic was one among several conceptions of fiction operative in the United States in the 1850s and 1860s, but it became the dominant one later in the nineteenth century and remained so for most of the twentieth. On this conception, fiction came to be seen as pure art: a work was to be viewed in isolation and valued for the formal arrangement of its elements rather than for its larger social connections or the promotion of extraliterary goods. Thus, unlike the domestic novelists, Jewett intended her works not as a means to an end but as an end in themselves. This fundamental difference should be given more weight in assessing their affinities than any superficial similarity in subject matter.
201012_1-RC_4_23
[ "advocating a position that is essentially correct even though some powerful arguments can be made against it", "making a true claim about Jewett, but for the wrong reasons", "making a claim that is based on some reasonable evidence and is initially plausible but ultimately mistaken", "questionable, because it relies on a currently dominant literary aesthetic that takes too narrow a view of the proper goals of fiction", "based on speculation for which there is no reasonable support, and therefore worthy of dismissal" ]
2
It can be inferred from the passage that the author would be most likely to view the "recent criticism" mentioned in line 1 as
Recent criticism has sought to align Sarah Orne Jewett, a notable writer of regional fiction in the nineteenth-century United States, with the domestic novelists of the previous generation. Her work does resemble the domestic novels of the 1850s in its focus on women, their domestic occupations, and their social interactions, with men relegated to the periphery. But it also differs markedly from these antecedents. The world depicted in the latter revolves around children. Young children play prominent roles in the domestic novels and the work of child rearing—the struggle to instill a mother's values in a child's character—is their chief source of drama. By contrast, children and child rearing are almost entirely absent from the world of Jewett's fiction. Even more strikingly, while the literary world of the earlier domestic novelists is insistently religious, grounded in the structures of Protestant religious belief, to turn from these writers to Jewett is to encounter an almost wholly secular world. To the extent that these differences do not merely reflect the personal preferences of the authors, we might attribute them to such historical transformations as the migration of the rural young to cities or the increasing secularization of society. But while such factors may help to explain the differences, it can be argued that these differences ultimately reflect different conceptions of the nature and purpose of fiction. The domestic novel of the mid-nineteenth century is based on a conception of fiction as part of a continuum that also included writings devoted to piety and domestic instruction, bound together by a common goal of promoting domestic morality and religious belief. It was not uncommon for the same multipurpose book to be indistinguishably a novel, a child-rearing manual, and a tract on Christian duty. The more didactic aims are absent from Jewett's writing, which rather embodies the late nineteenth-century "high-cultural" conception of fiction as an autonomous sphere with value in and of itself. This high-cultural aesthetic was one among several conceptions of fiction operative in the United States in the 1850s and 1860s, but it became the dominant one later in the nineteenth century and remained so for most of the twentieth. On this conception, fiction came to be seen as pure art: a work was to be viewed in isolation and valued for the formal arrangement of its elements rather than for its larger social connections or the promotion of extraliterary goods. Thus, unlike the domestic novelists, Jewett intended her works not as a means to an end but as an end in themselves. This fundamental difference should be given more weight in assessing their affinities than any superficial similarity in subject matter.
201012_1-RC_4_24
[ "Domestic fiction was part of an ongoing tradition stretching back into the past.", "Fiction was not treated as clearly distinct from other categories of writing.", "Domestic fiction was often published in serial form.", "Fiction is constantly evolving.", "Domestic fiction promoted the cohesiveness and hence the continuity of society." ]
1
In saying that domestic fiction was based on a conception of fiction as part of a "continuum" (line 30), the author most likely means which one of the following?
Recent criticism has sought to align Sarah Orne Jewett, a notable writer of regional fiction in the nineteenth-century United States, with the domestic novelists of the previous generation. Her work does resemble the domestic novels of the 1850s in its focus on women, their domestic occupations, and their social interactions, with men relegated to the periphery. But it also differs markedly from these antecedents. The world depicted in the latter revolves around children. Young children play prominent roles in the domestic novels and the work of child rearing—the struggle to instill a mother's values in a child's character—is their chief source of drama. By contrast, children and child rearing are almost entirely absent from the world of Jewett's fiction. Even more strikingly, while the literary world of the earlier domestic novelists is insistently religious, grounded in the structures of Protestant religious belief, to turn from these writers to Jewett is to encounter an almost wholly secular world. To the extent that these differences do not merely reflect the personal preferences of the authors, we might attribute them to such historical transformations as the migration of the rural young to cities or the increasing secularization of society. But while such factors may help to explain the differences, it can be argued that these differences ultimately reflect different conceptions of the nature and purpose of fiction. The domestic novel of the mid-nineteenth century is based on a conception of fiction as part of a continuum that also included writings devoted to piety and domestic instruction, bound together by a common goal of promoting domestic morality and religious belief. It was not uncommon for the same multipurpose book to be indistinguishably a novel, a child-rearing manual, and a tract on Christian duty. The more didactic aims are absent from Jewett's writing, which rather embodies the late nineteenth-century "high-cultural" conception of fiction as an autonomous sphere with value in and of itself. This high-cultural aesthetic was one among several conceptions of fiction operative in the United States in the 1850s and 1860s, but it became the dominant one later in the nineteenth century and remained so for most of the twentieth. On this conception, fiction came to be seen as pure art: a work was to be viewed in isolation and valued for the formal arrangement of its elements rather than for its larger social connections or the promotion of extraliterary goods. Thus, unlike the domestic novelists, Jewett intended her works not as a means to an end but as an end in themselves. This fundamental difference should be given more weight in assessing their affinities than any superficial similarity in subject matter.
201012_1-RC_4_25
[ "It proposes and defends a radical redefinition of several historical categories of literary style.", "It proposes an evaluation of a particular style of writing, of which one writer's work is cited as a paradigmatic case.", "It argues for a reappraisal of a set of long-held assumptions about the historical connections among a group of writers.", "It weighs the merits of two opposing conceptions of the nature of fiction.", "It rejects a way of classifying a particular writer's work and defends an alternative view." ]
4
Which one of the following most accurately states the primary function of the passage?
Recent criticism has sought to align Sarah Orne Jewett, a notable writer of regional fiction in the nineteenth-century United States, with the domestic novelists of the previous generation. Her work does resemble the domestic novels of the 1850s in its focus on women, their domestic occupations, and their social interactions, with men relegated to the periphery. But it also differs markedly from these antecedents. The world depicted in the latter revolves around children. Young children play prominent roles in the domestic novels and the work of child rearing—the struggle to instill a mother's values in a child's character—is their chief source of drama. By contrast, children and child rearing are almost entirely absent from the world of Jewett's fiction. Even more strikingly, while the literary world of the earlier domestic novelists is insistently religious, grounded in the structures of Protestant religious belief, to turn from these writers to Jewett is to encounter an almost wholly secular world. To the extent that these differences do not merely reflect the personal preferences of the authors, we might attribute them to such historical transformations as the migration of the rural young to cities or the increasing secularization of society. But while such factors may help to explain the differences, it can be argued that these differences ultimately reflect different conceptions of the nature and purpose of fiction. The domestic novel of the mid-nineteenth century is based on a conception of fiction as part of a continuum that also included writings devoted to piety and domestic instruction, bound together by a common goal of promoting domestic morality and religious belief. It was not uncommon for the same multipurpose book to be indistinguishably a novel, a child-rearing manual, and a tract on Christian duty. The more didactic aims are absent from Jewett's writing, which rather embodies the late nineteenth-century "high-cultural" conception of fiction as an autonomous sphere with value in and of itself. This high-cultural aesthetic was one among several conceptions of fiction operative in the United States in the 1850s and 1860s, but it became the dominant one later in the nineteenth century and remained so for most of the twentieth. On this conception, fiction came to be seen as pure art: a work was to be viewed in isolation and valued for the formal arrangement of its elements rather than for its larger social connections or the promotion of extraliterary goods. Thus, unlike the domestic novelists, Jewett intended her works not as a means to an end but as an end in themselves. This fundamental difference should be given more weight in assessing their affinities than any superficial similarity in subject matter.
201012_1-RC_4_26
[ "The author considers and rejects a number of possible explanations for a phenomenon, concluding that any attempt at explanation does violence to the unity of the phenomenon.", "The author shows that two explanatory hypotheses are incompatible with each other and gives reasons for preferring one of them.", "The author describes several explanatory hypotheses and argues that they are not really distinct from one another.", "The author proposes two versions of a classificatory hypothesis, indicates the need for some such hypothesis, and then sets out a counterargument in preparation for rejecting that counterargument in the following paragraph.", "The author mentions a number of explanatory hypotheses, gives a mildly favorable comment on them, and then advocates and elaborates another explanation that the author considers to be more fundamental." ]
4
Which one of the following most accurately represents the structure of the second paragraph?
Recent criticism has sought to align Sarah Orne Jewett, a notable writer of regional fiction in the nineteenth-century United States, with the domestic novelists of the previous generation. Her work does resemble the domestic novels of the 1850s in its focus on women, their domestic occupations, and their social interactions, with men relegated to the periphery. But it also differs markedly from these antecedents. The world depicted in the latter revolves around children. Young children play prominent roles in the domestic novels and the work of child rearing—the struggle to instill a mother's values in a child's character—is their chief source of drama. By contrast, children and child rearing are almost entirely absent from the world of Jewett's fiction. Even more strikingly, while the literary world of the earlier domestic novelists is insistently religious, grounded in the structures of Protestant religious belief, to turn from these writers to Jewett is to encounter an almost wholly secular world. To the extent that these differences do not merely reflect the personal preferences of the authors, we might attribute them to such historical transformations as the migration of the rural young to cities or the increasing secularization of society. But while such factors may help to explain the differences, it can be argued that these differences ultimately reflect different conceptions of the nature and purpose of fiction. The domestic novel of the mid-nineteenth century is based on a conception of fiction as part of a continuum that also included writings devoted to piety and domestic instruction, bound together by a common goal of promoting domestic morality and religious belief. It was not uncommon for the same multipurpose book to be indistinguishably a novel, a child-rearing manual, and a tract on Christian duty. The more didactic aims are absent from Jewett's writing, which rather embodies the late nineteenth-century "high-cultural" conception of fiction as an autonomous sphere with value in and of itself. This high-cultural aesthetic was one among several conceptions of fiction operative in the United States in the 1850s and 1860s, but it became the dominant one later in the nineteenth century and remained so for most of the twentieth. On this conception, fiction came to be seen as pure art: a work was to be viewed in isolation and valued for the formal arrangement of its elements rather than for its larger social connections or the promotion of extraliterary goods. Thus, unlike the domestic novelists, Jewett intended her works not as a means to an end but as an end in themselves. This fundamental difference should be given more weight in assessing their affinities than any superficial similarity in subject matter.
201012_1-RC_4_27
[ "Why was Jewett unwilling to feature children and religious themes as prominently in her works as the domestic novelists featured them in theirs?", "Why did both Jewett and the domestic novelists focus primarily on rural as opposed to urban concerns?", "Why was Jewett not constrained to feature children and religion as prominently in her works as domestic novelists were?", "Why did both Jewett and the domestic novelists focus predominantly on women and their concerns?", "Why was Jewett unable to feature children or religion as prominently in her works as the domestic novelists featured them in theirs?" ]
2
The differing conceptions of fiction held by Jewett and the domestic novelists can most reasonably be taken as providing an answer to which one of the following questions?
African American painter Sam Gilliam (b. 1933) is internationally recognized as one of the foremost painters associated with the Washington Color School, a group of Color Field style painters practicing in Washington, D.C. during the 1950s and 1960s. The Color Field style was an important development in abstract art that emerged after the rise of abstract expressionism. It evolved from complex and minimally representational abstractions in the 1950s to totally nonrepresentational, simplified works of bright colors in the 1960s. Gilliam's participation in the Color Field movement was motivated in part by his reaction to the art of his African American contemporaries, much of which was strictly representational and was intended to convey explicit political statements. Gilliam found their approach to be aesthetically conservative: the message was unmistakable, he felt, and there was little room for the expression of subtlety or ambiguity or, more importantly, the exploration of new artistic territory through experimentation and innovation. For example, one of his contemporaries worked with collage, assembling disparate bits of images from popular magazines into loosely structured compositions that depicted the period's political issues—themes such as urban life, the rural South, and African American music. Though such art was quite popular with the general public, Gilliam was impatient with its straightforward, literal approach to representation. In its place he sought an artistic form that was more expressive than a painted figure or a political slogan, more evocative of the complexity of human experience in general, and of the African American experience in particular. In this he represented a view that was then rare among African American artists. Gilliam's highly experimental paintings epitomized his refusal to conform to the public's expectation that African American artists produce explicitly political art. His early experiments included pouring paint onto stained canvases and folding canvases over onto themselves. Then around 1965 Gilliam became the first painter to introduce the idea of the unsupported canvas. Partially inspired by the sight of neighbors hanging laundry on clotheslines, Gilliam began to drape huge pieces of loose canvas along floors and fold them up and down walls, even suspending them from ceilings, giving them a third dimension and therefore a sculptural quality. These efforts demonstrate a sensitivity to the texture of daily experience, as well as the ability to generate tension by juxtaposing conceptual opposites—such as surface and depth or chaos and control—to form a cohesive whole. In this way, Gilliam helped advance the notion that the deepest, hardest-to-capture emotions and tensions of being African American could not be represented directly, but were expressed more effectively through the creation of moods that would allow these emotions and tensions to be felt by all audiences.
201312_4-RC_1_1
[ "describing the motivation behind and nature of an artist's work", "describing the political themes that permeate an artist's work", "describing the evolution of an artist's style over a period of time", "demonstrating that a certain artist's views were rare among African American artists", "demonstrating that a certain artist was able to transcend his technical limitations" ]
0
In the passage, the author is primarily concerned with
African American painter Sam Gilliam (b. 1933) is internationally recognized as one of the foremost painters associated with the Washington Color School, a group of Color Field style painters practicing in Washington, D.C. during the 1950s and 1960s. The Color Field style was an important development in abstract art that emerged after the rise of abstract expressionism. It evolved from complex and minimally representational abstractions in the 1950s to totally nonrepresentational, simplified works of bright colors in the 1960s. Gilliam's participation in the Color Field movement was motivated in part by his reaction to the art of his African American contemporaries, much of which was strictly representational and was intended to convey explicit political statements. Gilliam found their approach to be aesthetically conservative: the message was unmistakable, he felt, and there was little room for the expression of subtlety or ambiguity or, more importantly, the exploration of new artistic territory through experimentation and innovation. For example, one of his contemporaries worked with collage, assembling disparate bits of images from popular magazines into loosely structured compositions that depicted the period's political issues—themes such as urban life, the rural South, and African American music. Though such art was quite popular with the general public, Gilliam was impatient with its straightforward, literal approach to representation. In its place he sought an artistic form that was more expressive than a painted figure or a political slogan, more evocative of the complexity of human experience in general, and of the African American experience in particular. In this he represented a view that was then rare among African American artists. Gilliam's highly experimental paintings epitomized his refusal to conform to the public's expectation that African American artists produce explicitly political art. His early experiments included pouring paint onto stained canvases and folding canvases over onto themselves. Then around 1965 Gilliam became the first painter to introduce the idea of the unsupported canvas. Partially inspired by the sight of neighbors hanging laundry on clotheslines, Gilliam began to drape huge pieces of loose canvas along floors and fold them up and down walls, even suspending them from ceilings, giving them a third dimension and therefore a sculptural quality. These efforts demonstrate a sensitivity to the texture of daily experience, as well as the ability to generate tension by juxtaposing conceptual opposites—such as surface and depth or chaos and control—to form a cohesive whole. In this way, Gilliam helped advance the notion that the deepest, hardest-to-capture emotions and tensions of being African American could not be represented directly, but were expressed more effectively through the creation of moods that would allow these emotions and tensions to be felt by all audiences.
201312_4-RC_1_2
[ "a brightly colored painting carefully portraying a man dressed in work clothes and holding a shovel in his hands", "a large, wrinkled canvas painted with soft, blended colors and overlaid with glued-on newspaper photographs depicting war scenes", "a painted abstract caricature of a group of jazz musicians waiting to perform", "a long unframed canvas painted with images of the sea and clouds and hung from a balcony to simulate the unfurling of sails", "a folded and crumpled canvas with many layers of colorful dripped and splashed paint interwoven with one anothe" ]
4
Which one of the following would come closest to exemplifying the characteristics of Gilliam's work as described in the passage?
African American painter Sam Gilliam (b. 1933) is internationally recognized as one of the foremost painters associated with the Washington Color School, a group of Color Field style painters practicing in Washington, D.C. during the 1950s and 1960s. The Color Field style was an important development in abstract art that emerged after the rise of abstract expressionism. It evolved from complex and minimally representational abstractions in the 1950s to totally nonrepresentational, simplified works of bright colors in the 1960s. Gilliam's participation in the Color Field movement was motivated in part by his reaction to the art of his African American contemporaries, much of which was strictly representational and was intended to convey explicit political statements. Gilliam found their approach to be aesthetically conservative: the message was unmistakable, he felt, and there was little room for the expression of subtlety or ambiguity or, more importantly, the exploration of new artistic territory through experimentation and innovation. For example, one of his contemporaries worked with collage, assembling disparate bits of images from popular magazines into loosely structured compositions that depicted the period's political issues—themes such as urban life, the rural South, and African American music. Though such art was quite popular with the general public, Gilliam was impatient with its straightforward, literal approach to representation. In its place he sought an artistic form that was more expressive than a painted figure or a political slogan, more evocative of the complexity of human experience in general, and of the African American experience in particular. In this he represented a view that was then rare among African American artists. Gilliam's highly experimental paintings epitomized his refusal to conform to the public's expectation that African American artists produce explicitly political art. His early experiments included pouring paint onto stained canvases and folding canvases over onto themselves. Then around 1965 Gilliam became the first painter to introduce the idea of the unsupported canvas. Partially inspired by the sight of neighbors hanging laundry on clotheslines, Gilliam began to drape huge pieces of loose canvas along floors and fold them up and down walls, even suspending them from ceilings, giving them a third dimension and therefore a sculptural quality. These efforts demonstrate a sensitivity to the texture of daily experience, as well as the ability to generate tension by juxtaposing conceptual opposites—such as surface and depth or chaos and control—to form a cohesive whole. In this way, Gilliam helped advance the notion that the deepest, hardest-to-capture emotions and tensions of being African American could not be represented directly, but were expressed more effectively through the creation of moods that would allow these emotions and tensions to be felt by all audiences.
201312_4-RC_1_3
[ "exemplify the style of art of the Washington Color School", "point out the cause of the animosity between representational artists and abstract artists", "establish that representational art was more popular with the general public than abstract art was", "illustrate the kind of art that Gilliam was reacting against", "show why Gilliam's art was primarily concerned with political issues" ]
3
The author mentions a collage artist in the second paragraph primarily to
African American painter Sam Gilliam (b. 1933) is internationally recognized as one of the foremost painters associated with the Washington Color School, a group of Color Field style painters practicing in Washington, D.C. during the 1950s and 1960s. The Color Field style was an important development in abstract art that emerged after the rise of abstract expressionism. It evolved from complex and minimally representational abstractions in the 1950s to totally nonrepresentational, simplified works of bright colors in the 1960s. Gilliam's participation in the Color Field movement was motivated in part by his reaction to the art of his African American contemporaries, much of which was strictly representational and was intended to convey explicit political statements. Gilliam found their approach to be aesthetically conservative: the message was unmistakable, he felt, and there was little room for the expression of subtlety or ambiguity or, more importantly, the exploration of new artistic territory through experimentation and innovation. For example, one of his contemporaries worked with collage, assembling disparate bits of images from popular magazines into loosely structured compositions that depicted the period's political issues—themes such as urban life, the rural South, and African American music. Though such art was quite popular with the general public, Gilliam was impatient with its straightforward, literal approach to representation. In its place he sought an artistic form that was more expressive than a painted figure or a political slogan, more evocative of the complexity of human experience in general, and of the African American experience in particular. In this he represented a view that was then rare among African American artists. Gilliam's highly experimental paintings epitomized his refusal to conform to the public's expectation that African American artists produce explicitly political art. His early experiments included pouring paint onto stained canvases and folding canvases over onto themselves. Then around 1965 Gilliam became the first painter to introduce the idea of the unsupported canvas. Partially inspired by the sight of neighbors hanging laundry on clotheslines, Gilliam began to drape huge pieces of loose canvas along floors and fold them up and down walls, even suspending them from ceilings, giving them a third dimension and therefore a sculptural quality. These efforts demonstrate a sensitivity to the texture of daily experience, as well as the ability to generate tension by juxtaposing conceptual opposites—such as surface and depth or chaos and control—to form a cohesive whole. In this way, Gilliam helped advance the notion that the deepest, hardest-to-capture emotions and tensions of being African American could not be represented directly, but were expressed more effectively through the creation of moods that would allow these emotions and tensions to be felt by all audiences.
201312_4-RC_1_4
[ "derisive condescension", "open dissatisfaction", "whimsical dismissal", "careful neutrality", "mild approval" ]
1
The passage most strongly suggests that Gilliam's attitude toward the strictly representational art of his contemporaries is which one of the following?