Schema (one record per question):
context: string (269 distinct passages)
id_string: string (length 15 to 16)
answers: sequence of strings (always 5 options)
label: int64 (0 to 4, index of the correct option)
question: string (length 34 to 417)
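For reference, a minimal sketch of how records with this schema might be read and inspected. It assumes the rows have been exported to a JSON Lines file; the file name rc_questions.jsonl is a hypothetical placeholder, and the field names simply mirror the columns listed above.

import json

# Sketch: load records whose fields match the schema above
# (context, id_string, answers, label, question).
# "rc_questions.jsonl" is a placeholder path, not part of the dataset itself.
with open("rc_questions.jsonl", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]

for row in rows[:3]:
    # "label" is a 0-based index into the five-element "answers" list,
    # so the correct option text can be recovered directly.
    correct_option = row["answers"][row["label"]]
    print(row["id_string"], "|", row["question"])
    print("    correct:", correct_option)

For example, the record with id_string 200112_2-RC_1_6 below has label 1, which selects the second entry in its answers list.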
Traditionally, members of a community such as a town or neighborhood share a common location and a sense of necessary interdependence that includes, for example, mutual respect and emotional support. But as modern societies grow more technological and sometimes more alienating, people tend to spend less time in the kinds of interactions that their communities require in order to thrive. Meanwhile, technology has made it possible for individuals to interact via personal computer with others who are geographically distant. Advocates claim that these computer conferences, in which large numbers of participants communicate by typing comments that are immediately read by other participants and responding immediately to those comments they read, function as communities that can substitute for traditional interactions with neighbors. What are the characteristics that advocates claim allow computer conferences to function as communities? For one, participants often share common interests or concerns; conferences are frequently organized around specific topics such as music or parenting. Second, because these conferences are conversations, participants have adopted certain conventions in recognition of the importance of respecting each other's sensibilities. Abbreviations are used to convey commonly expressed sentiments of courtesy such as "pardon me for cutting in" ("pmfci") or "in my humble opinion" ("imho"). Because a humorous tone can be difficult to communicate in writing, participants will often end an intentionally humorous comment with a set of characters that, when looked at sideways, resembles a smiling or winking face. Typing messages entirely in capital letters is avoided, because its tendency to demand the attention of a reader's eye is considered the computer equivalent of shouting. These conventions, advocates claim, constitute a form of etiquette, and with this etiquette as a foundation, people often form genuine, trusting relationships, even offering advice and support during personal crises such as illness or the loss of a loved one. But while it is true that conferences can be both respectful and supportive, they nonetheless fall short of communities. For example, conferences discriminate along educational and economic lines because participation requires a basic knowledge of computers and the ability to afford access to conferences. Further, while advocates claim that a shared interest makes computer conferences similar to traditional communities—insofar as the shared interest is analogous to a traditional community's shared location—this analogy simply does not work. Conference participants are a self-selecting group; they are drawn together by their shared interest in the topic of the conference. Actual communities, on the other hand, are "nonintentional": the people who inhabit towns or neighborhoods are thus more likely to exhibit genuine diversity—of age, career, or personal interests—than are conference participants. It might be easier to find common ground in a computer conference than in today's communities, but it would be unfortunate if, in so doing, conference participants cut themselves off further from valuable interactions in their own towns or neighborhoods.
200112_2-RC_1_6
[ "Participants in computer conferences are generally more accepting of diversity than is the population at large.", "Computer technology is rapidly becoming more affordable and accessible to people from a variety of backgrounds.", "Participants in computer conferences often apply the same degree of respect and support they receive from one another to interactions in their own actual communities.", "Participants in computer conferences often feel more comfortable interacting on the computer because they are free to interact without revealing their identities.", "The conventions used to facilitate communication in computer conferences are generally more successful than those used in actual communities." ]
1
Which one of the following, if true, would most weaken one of the author's arguments in the last paragraph?
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England.
200112_2-RC_2_7
[ "Analyses of the scientific, theological, and legal writings of the Renaissance have proved to be more important to an understanding of the period than have studies of humanistic and literary works.", "The English works of such Renaissance writers as Shakespeare, Marlowe, and Sidney have been overemphasized at the expense of these writers' more intellectually challenging Latin works.", "Though traditionally recognized as the language of the educated classes of the Renaissance, Latin has until recently been studied primarily in connection with ancient Roman texts.", "Many Latin texts by English Renaissance writers, though analyzed in depth by literary critics and philologists, have been all but ignored by historians of science and theology.", "Many Latin texts by English Renaissance writers, though important to an analysis of the period, have been insufficiently understood for reasons related to academic specialization." ]
4
Which one of the following best states the main idea of the passage?
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England.
200112_2-RC_2_8
[ "These scholars tend to lack training both in language and in intellectual history, and thus base their interpretations of Renaissance culture on works translated into English.", "These scholars tend to lack the combination of training in both language and intellectual history that is necessary for a proper study of important and neglected Latin texts.", "Specialists in such literary forms as poems and orations too frequently lack training in the Latin language that was written and studied during the Renaissance.", "Language specialists have surveyed in too great detail important works of law and medicine, and thus have not provided a coherent interpretation of early modern English culture.", "Scholars who analyze important Latin works by such writers as Marlowe, Shakespeare, and Sidney too often lack the historical knowledge of Latin necessary for a proper interpretation of early modern English culture." ]
1
The passage contains support for which one of the following statements concerning those scholars who analyze works written in Latin during the Renaissance?
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England.
200112_2-RC_2_9
[ "Continental writers wrote in Latin more frequently than did English writers,and thus rendered some of the most important Continental works inaccessible to English readers.", "Continental writers, more intellectually advanced than their English counterparts, were on the whole responsible for familiarizing English audiences with Latin language and literature.", "English and Continental writers communicated their intellectual concerns, which were for the most part different, by way of works written in Latin.", "The intellectual ties between English and Continental writers were stronger than has been acknowledged by many scholars and were founded on a mutual knowledge of Latin.", "The intellectual ties between English and Continental writers have been overemphasized in modern scholarship due to a lack of dialogue between language specialists and intellectual historians." ]
3
Which one of the following statements concerning the relationship between English and Continental writers of the Renaissance era can be inferred from the passage?
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England.
200112_2-RC_2_10
[ "nonfiction works are less well known than their imaginative works", "works have unfairly been credited with revolutionizing Western thought", "works have been treated as an autonomous and coherent whole", "works have traditionally been seen as representing the high culture of Renaissance England", "Latin writings have, according to Binns, been overlooked" ]
3
The author of the passage most likely cites Shakespeare, Marlowe, and Sidney in the first paragraph as examples of writers whose
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England.
200112_2-RC_2_11
[ "These writings have unfortunately been undervalued by Latin-language specialists because of their nonliterary subject matter.", "These writings, according to Latin-language specialists, had very little influence on the intellectual upheavals associated with the Renaissance.", "These writings, as analyzed by intellectual historians, have formed the basis of a superficially coherent reading of the intellectual culture that produced them.", "These writings have been compared unfavorably by intellectual historians with Continental works of the same period.", "These writings need to be studied separately, according to intellectual historians, from Latin-language writings of the same period." ]
2
Binns would be most likely to agree with which one of the following statements concerning the English-language writings of Renaissance England traditionally studied by intellectual historians?
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England.
200112_2-RC_2_12
[ "These works are easier for modern scholars to analyze than are theological works of the same era.", "These works have seldom been translated into English and thus remain inscrutable to modern scholars, despite the availability of illuminating commentaries.", "These works are difficult for modern scholars to analyze both because of the concepts they develop and the language in which they are written.", "These works constituted the core of an English university education during the Renaissance.", "These works were written mostly by Continental writers and reached English intellectuals only in English translation." ]
2
The information in the passage suggests which one of the following concerning late-Renaissance scientific works written in Latin?
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England.
200112_2-RC_2_13
[ "illustrate the range of difficulty in Renaissance Latin writing, from relatively straightforward to very difficult", "illustrate the differing scholarly attitudes toward Renaissance writers who wrote in Latin and those who wrote in English", "illustrate the fact that the concerns of English writers of the Renaissance differed from the concerns of their Continental counterparts", "contrast a writer of the Renaissance whose merit has long been recognized with one whose literary worth has only recently begun to be appreciated", "contrast a writer whose Latin writings have been the subject of illuminating scholarship with one whose Latin writings have been neglected by philologists" ]
4
The author of the passage mentions the poet Milton and the scientist Newton primarily in order to
In Intellectual Culture in Elizabethan and Jacobean England, J. W. Binns asserts that the drama of Shakespeare, the verse of Marlowe, and the prose of Sidney—all of whom wrote in English—do not alone represent the high culture of Renaissance (roughly sixteenth- and seventeenth-century) England. Latin, the language of ancient Rome, continued during this period to be the dominant form of expression for English intellectuals, and works of law, theology, and science written in Latin were, according to Binns, among the highest achievements of the Renaissance. However, because many academic specializations do not overlap, many texts central to an interpretation of early modern English culture have gone unexamined. Even the most learned students of Renaissance Latin generally confine themselves to humanistic and literary writings in Latin. According to Binns, these language specialists edit and analyze poems and orations, but leave works of theology and science, law and medicine—the very works that revolutionized Western thought—to "specialists" in those fields, historians of science, for example, who lack philological training. The intellectual historian can find ample guidance when reading the Latin poetry of Milton, but little or none when confronting the more alien and difficult terminology, syntax, and content of the scientist Newton. Intellectual historians of Renaissance England, by contrast with Latin language specialists, have surveyed in great detail the historical, cosmological, and theological battles of the day, but too often they have done so on the basis of texts written in or translated into English. Binns argues that these scholars treat the English-language writings of Renaissance England as an autonomous and coherent whole, underestimating the influence on English writers of their counterparts on the European Continent. In so doing they ignore the fact that English intellectuals were educated in schools and universities where they spoke and wrote Latin, and inhabited as adults an intellectual world in which what happened abroad and was recorded in Latin was of great importance. Writers traditionally considered characteristically English and modern were steeped in Latin literature and in the esoteric concerns of late Renaissance humanism (the rediscovery and study of ancient Latin and Greek texts), and many Latin works by Continental humanists that were not translated at the time into any modern language became the bases of classic English works of literature and scholarship. These limitations are understandable. No modern classicist is trained to deal with the range of problems posed by a difficult piece of late Renaissance science; few students of English intellectual history are trained to read the sort of Latin in which such works were written. Yet the result of each side's inability to cross boundaries has been that each presents a distorted reading of the intellectual culture of Renaissance England.
200112_2-RC_2_14
[ "an enumeration of new approaches", "contrasting views of disparate theories", "a summary of intellectual disputes", "a discussion of a significant deficiency", "a correction of an author's misconceptions" ]
3
The author of the passage is primarily concerned with presenting which one of the following?
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically.
200112_2-RC_3_15
[ "Both the solute concentration and the volume of an animal's blood plasma must be kept within relatively narrow ranges.", "Behavioral responses to changes in an animal's blood plasma can compensate for physiological malfunction, allowing the body to avoid dehydration.", "The effect of hormones on animal behavior and physiology has only recently been discovered.", "Behavioral and physiological responses to major changes in osmolality of an animal's blood plasma are hormonally influenced and complement one another.", "The mechanisms regulating reproduction are similar to those that regulate thirst and sodium appetite." ]
3
Which one of the following best states the main idea of the passage?
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically.
200112_2-RC_3_16
[ "review briefly the history of research into the relationships between gonadal and peptide hormones that has led to the present discussion", "decry the fact that previous research has concentrated on the relatively minor issue of the relationships between hormones and behavior", "establish the emphasis of earlier research into the connections between hormones and behavior before elaborating on the results described in the passage", "introduce a commonly held misconception about the relationships between hormones and behavior before refuting it with the results described in the passage", "summarize the main findings of recent research described in the passage before detailing the various procedures that led to those findings" ]
2
The author of the passage cites the relationship between gonadal hormones and reproductive behavior in order to
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically.
200112_2-RC_3_17
[ "The amount secreted depends on the level of steroid hormones in the blood.", "The amount secreted is important for maintaining homeostasis in cases of both increased and decreased osmolality.", "It works in conjunction with steroid hormones in increasing plasma volume.", "It works in conjunction with steroid hormones in regulating sodium appetite.", "It is secreted after an animal becomes thirsty, as a mechanism for diluting plasma osmolality." ]
1
It can be inferred from the passage that which one of the following is true of vasopressin?
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically.
200112_2-RC_3_18
[ "present new information", "question standard assumptions", "reinterpret earlier findings", "advocate a novel theory", "outline a new approach" ]
0
The primary function of the passage as a whole is to
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically.
200112_2-RC_3_19
[ "Hunger is diminished.", "Thirst is initiated.", "Vasopressin is secreted.", "Water is excreted.", "Sodium is consumed." ]
0
According to the passage, all of the following typically occur in the homeostasis of blood-plasma osmolality EXCEPT:
Discussions of how hormones influence behavior have generally been limited to the effects of gonadal hormones on reproductive behavior and have emphasized the parsimonious arrangement whereby the same hormones involved in the biology of reproduction also influence sexual behavior. It has now become clear, however, that other hormones, in addition to their recognized influence on biological functions, can affect behavior. Specifically, peptide and steroid hormones involved in maintaining the physiological balance, or homeostasis, of body fluids also appear to play an important role in the control of water and salt consumption. The phenomenon of homeostasis in animals depends on various mechanisms that promote stability within the organism despite an inconstant external environment; the homeostasis of body fluids, whereby the osmolality (the concentration of solutes) of blood plasma is closely regulated, is achieved primarily through alterations in the intake and excretion of water and sodium, the two principal components of the fluid matrix that surrounds body cells. Appropriate compensatory responses are initiated when deviations from normal are quite small, thereby maintaining plasma osmolality within relatively narrow ranges. In the osmoregulation of body fluids, the movement of water across cell membranes permits minor fluctuations in the concentration of solutes in extracellular fluid to be buffered by corresponding changes in the relatively larger volume of cellular water. Nevertheless, the concentration of solutes in extracellular fluid may at times become elevated or reduced by more than the allowed tolerances of one or two percent. It is then that complementary physiological and behavioral responses come into play to restore plasma osmolality to normal. Thus, for example, a decrease in plasma osmolality, such as that which occurs after the consumption of water in excess of need, leads to the excretion of surplus body water in the urine by inhibiting secretion from the pituitary gland of vasopressin, a peptide hormone that promotes water conservation in the kidneys. As might be expected, thirst also is inhibited then, to prevent further dilution of body fluids. Conversely, an increase in plasma osmolality, such as that which occurs after one eats salty foods or after body water evaporates without being replaced, stimulates the release of vasopressin, increasing the conservation of water and the excretion of solutes in urine. This process is accompanied by increased thirst, with the result of making plasma osmolality more dilute through the consumption of water. The threshold for thirst appears to be slightly higher than for vasopressin secretion, so that thirst is stimulated only after vasopressin has been released in amounts sufficient to produce maximal water retention by the kidneys—that is, only after osmotic dehydration exceeds the capacity of the animal to deal with it physiologically.
200112_2-RC_3_20
[ "It increases thirst and stimulates sodium appetite.", "It helps prevent further dilution of body fluids.", "It increases the conservation of water in the kidneys.", "It causes minor changes in plasma volume.", "It helps stimulate the secretion of steroid hormones." ]
1
According to the passage, the withholding of vasopressin fulfills which one of the following functions in the restoration of plasma osmolality to normal levels?
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice.
200112_2-RC_4_21
[ "Following the elimination of the apartheid system in South Africa, lawyers, judges, and citizens will need to abandon their posture of opposition to law and design a new and fairer legal system.", "If the new legal system in South Africa is to succeed, lawyers, judges, and citizens must learn to challenge parliamentary decisions based on their individual rights as set out in the new constitution.", "Whereas in the past the parliament was both the initiator and arbiter of laws in South Africa, under the new constitution these powers will be assumed by a constitutional court.", "Despite the lack of relevant legal precedents and the public's antagonistic relation to the law, South Africa is moving from a legal system where the parliament is the final authority to one where the rights of citizens are protected by a constitution.", "While South Africa's judges will have to look initially to other countries to provide interpretations for its new bill of rights, eventually it must develop a body of precedent sensitive to the needs of its own citizens." ]
3
Which one of the following most completely and accurately states the main point of the passage?
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice.
200112_2-RC_4_22
[ "to describe the role of the parliament under South Africa's new constitution", "to argue for returning final legal authority to the parliament", "to contrast the character of legal practice under the apartheid system with that to be implemented under the new constitution", "to criticize the creation of a court with final authority on constitutional matters", "to explain why a bill of rights was included in the new constitution" ]
2
Which one of the following most accurately describes the author's primary purpose in lines 10–19?
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice.
200112_2-RC_4_23
[ "deep skepticism", "open pessimism", "total indifference", "guarded optimism", "complete confidence" ]
3
The passage suggests that the author's attitude toward the possibility of success for a rights-based legal system in South Africa is most likely one of
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice.
200112_2-RC_4_24
[ "decisions rendered in constitutional court", "challenges from concerned citizens", "new laws passed in the parliament", "provisions in the constitution's bill of rights", "other judges with a more rule-bound approach to the law" ]
2
According to the passage, under the apartheid system the rulings of judges were sometimes counteracted by
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice.
200112_2-RC_4_25
[ "A solution to a problem is identified, several methods of implementing the solution are discussed, and one of the methods is argued for.", "The background to a problem is presented, past methods of solving the problem are criticized, and a new solution is proposed.", "An analysis of a problem is presented, possible solutions to the problem are given, and one of the possible solutions is argued for.", "Reasons are given why a problem has existed, the current state of affairs is described, and the problem is shown to exist no longer.", "A problem is identified, specific manifestations of the problem are given, and an essential element in its solution is presented." ]
4
Which one of the following most accurately describes the organization of the last paragraph of the passage?
With the elimination of the apartheid system, South Africa now confronts the transition to a rights-based legal system in a constitutional democracy. Among lawyers and judges, exhilaration over the legal tools soon to be available is tempered by uncertainty about how to use them. The changes in the legal system are significant, not just for human rights lawyers, but for all lawyers—as they will have to learn a less rule-bound and more interpretative way of looking at the law. That is to say, in the past, the parliament was the supreme maker and arbiter of laws; when judges made rulings with which the parliament disagreed, the parliament simply passed new laws to counteract their rulings. Under the new system, however, a constitutional court will hear arguments on all constitutional matters, including questions of whether the laws passed by the parliament are valid in light of the individual liberties set out in the constitution's bill of rights. This shift will lead to extraordinary changes, for South Africa has never before had a legal system based on individual rights—one in which citizens can challenge any law or administrative decision on the basis of their constitutional rights. South African lawyers are concerned about the difficulty of fostering a rights-based culture in a multiracial society containing a wide range of political and personal beliefs simply by including a bill of rights in the constitution and establishing the means for its defense. Because the bill of rights has been drawn in very general terms, the lack of precedents will make the task of determining its precise meaning a bewildering one. With this in mind, the new constitution acknowledges the need to look to other countries for guidance. But some scholars warn that judges, in their rush to fill the constitutional void, may misuse foreign law—they may blindly follow the interpretations given bills of rights in other countries, not taking into account the circumstances in those countries that led to certain decisions. Nonetheless, these scholars are hopeful that, with patience and judicious decisions, South Africa can use international experience in developing a body of precedent that will address the particular needs of its citizens. South Africa must also contend with the image of the law held by many of its citizens. Because the law in South Africa has long been a tool of racial oppression, many of its citizens have come to view obeying the law as implicitly sanctioning an illegitimate, brutal government. Among these South Africans the political climate has thus been one of opposition, and many see it as their duty to cheat the government as much as possible, whether by not paying taxes or by disobeying parking laws. If a rights-based culture is to succeed, the government will need to show its citizens that the legal system is no longer a tool of oppression but instead a way to bring about change and help further the cause of justice.
200112_2-RC_4_26
[ "Reliance of judges on the interpretations given bills of rights in other countries must be tempered by the recognition that such interpretations may be based on circumstances not necessarily applicable to South Africa.", "Basing interpretations of the South African bill of rights on interpretations given bills of rights in other countries will reinforce the climate of mistrust for authority in South Africa.", "The lack of precedents in South African law for interpreting a bill of rights will likely make it impossible to interpret correctly the bill of rights in the South African constitution.", "Reliance by judges on the interpretations given bills of rights in other countries offers an unacceptable means of attempting to interpret the South African constitution in a way that will meet the particular needs of South African citizens.", "Because bills of rights in other countries are written in much less general terms than the South African bill of rights, interpretations of them are unlikely to prove helpful in interpreting the South African bill of rights." ]
0
Based on the passage, the scholars mentioned in the second paragraph would be most likely to agree with which one of the following statements?
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined.
200206_1-RC_1_1
[ "Because trials requiring juries are relative rare, the usefulness of the unanimity requirement does not need to be reexamined.", "The unanimity requirement should be maintained because most hung juries are caused by irresponsible jurors rather than by any flaws in the requirement.", "The problem of hung juries is not a result of flaws in the justice system but of the less than convincing evidence presented in some cases.", "The unanimity requirement should be maintained, but it is only effective if jurors conscientiously do the job they have been asked to do.", "Because its material costs are outweighed by what it contributes to the fairness of jury trials, the unanimity requirement should not be rescinded." ]
4
Which one of the following most accurately states the main point of the passage?
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined.
200206_1-RC_1_2
[ "cursory appreciation", "neutral interest", "cautious endorsement", "firm support", "unreasoned reverence" ]
3
Which one of the following most accurately describes the author's attitude toward the unanimity requirement?
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined.
200206_1-RC_1_3
[ "The risk of unjust verdicts is serious enough to warrant strong measures to avoid it.", "Fairness in jury trials is crucial and so judges must be extremely thorough in order to ensure it.", "Careful adherence to the unanimity requirement will eventually eliminate unjust verdicts.", "Safeguards must be in place because not all citizens called to jury duty perform their role responsibly.", "The jury system is inherently flawed and therefore unfairness cannot be eliminated but only reduced." ]
0
Which one of the following principles can most clearly be said to underlie the author's arguments in the third paragraph?
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined.
200206_1-RC_1_4
[ "It is not surprising, then, that the arguments presented by the critics of the unanimity requirement grow out of a separate tradition from that embodied in the unanimity requirement.", "Similarly, if there is a public debate concerning the unanimity requirement, public faith in the requirement will be strengthened.", "The opinion of each juror is as essential to the pursuit of justice as the universal vote is to the functioning of a true democracy.", "Unfortunately, because some lawmakers have characterized hung juries as intolerable, the integrity of the entire legal system has been undermined.", "But even without the unanimity requirement, fair trials and fair verdicts will occur more frequently as the methods of prosecutors and defense attorneys become more scientific." ]
2
Which one of the following sentences could most logically be added to the end of the last paragraph of the passage?
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined.
200206_1-RC_1_5
[ "obstinate", "suspicious", "careful", "conscientious", "naive" ]
0
Which one of the following could replace the term "recalcitrant" (line 16) without a substantial change in the meaning of the critics' claim?
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined.
200206_1-RC_1_6
[ "Only verdicts in very close cases would be affected.", "The responsibility felt by jurors to be respectful to one another would be lessened.", "Society's confidence in the fairness of the verdicts would be undermined.", "The problem of hung juries would not be solved but would surface less frequently.", "An important flaw thus would be removed from the criminal justice system." ]
2
The author explicitly claims that which one of the following would be a result of allowing a juror's dissenting opinion to be dismissed?
The jury trial is one of the handful of democratic institutions that allow individual citizens, rather than the government, to make important societal decisions. A crucial component of the jury trial, at least in serious criminal cases, is the rule that verdicts be unanimous among the jurors (usually twelve in number). Under this requirement, dissenting jurors must either be convinced of the rightness of the prevailing opinion, or, conversely, persuade the other jurors to change their minds. In either instance, the unanimity requirement compels the jury to deliberate fully and truly before reaching its verdict. Critics of the unanimity requirement, however, see it as a costly relic that extends the deliberation process and sometimes, in a hung (i.e., deadlocked) jury, brings it to a halt at the hands of a single, recalcitrant juror, forcing the judge to order a retrial. Some of these critics recommend reducing verdict requirements to something less than unanimity, so that one or even two dissenting jurors will not be able to force a retrial. But the material costs of hung juries do not warrant losing the benefit to society of the unanimous verdict. Statistically, jury trials are relatively rare; the vast majority of defendants do not have the option of a jury trial or elect to have a trial without a jury ⎯ or they plead guilty to the original or a reduced charge. And the incidence of hung juries is only a small fraction of the already small fraction of cases that receive a jury trial. Furthermore, that juries occasionally deadlock does not demonstrate a flaw in the criminal justice system, but rather suggests that jurors are conscientiously doing the job they have been asked to do. Hung juries usually occur when the case is very close ⎯ that is, when neither side has presented completely convincing evidence ⎯ and although the unanimity requirement may sometimes lead to inconclusive outcomes, a hung jury is certainly preferable to an unjust verdict. Requiring unanimity provides a better chance that a trial, and thus a verdict, will be fair. Innocent people are already occasionally convicted ⎯ perhaps in some cases because jurors presume that anyone who has been brought to trial is probably guilty ⎯ and eliminating the unanimity requirement would only increase the opportunity for such mistakes. Furthermore, if a juror's dissenting opinion can easily be dismissed, an important and necessary part of the deliberation process will be lost, for effective deliberation requires that each juror's opinion be given a fair hearing. Only then can the verdict reached by the jury be said to represent all of its members, and if even one juror has doubts that are dismissed out of hand, society's confidence that a proper verdict has been reached would be undermined.
200206_1-RC_1_7
[ "Hung juries most often result from an error in judgment on the part of one juror.", "Aside from the material costs of hung juries, the criminal justice system has few flaws.", "The fact that jury trials are so rare renders any flaws in the jury system insignificant.", "Hung juries are acceptable and usually indicate that the criminal justice system is functioning properly.", "Hung juries most often occur when one juror's opinion does not receive a fair hearing." ]
3
It can be inferred from the passage that the author would be most likely to agree with which one of the following?
Spurred by the discovery that a substance containing uranium emitted radiation, Marie Curie began studying radioactivity in 1897. She first tested gold and copper for radiation but found none. She then tested pitchblende, a mineral that was known to contain uranium, and discovered that it was more radioactive than uranium. Acting on the hypothesis that pitchblende must contain at least one other radioactive element, Curie was able to isolate a pair of previously unknown elements, polonium and radium. Turning her attention to the rate of radioactive emission, she discovered that uranium emitted radiation at a consistent rate, even if heated or dissolved. Based on these results, Curie concluded that the emission rate for a given element was constant. Furthermore, because radiation appeared to be spontaneous, with no discernible difference between radiating and nonradiating elements, she was unable to postulate a mechanism by which to explain radiation. It is now known that radiation occurs when certain isotopes (atoms of the same element that differ slightly in their atomic structure) decay, and that emission rates are not constant but decrease very slowly with time. Some critics have recently faulted Curie for not reaching these conclusions herself, but it would have been impossible for Curie to do so given the evidence available to her. While relatively light elements such as gold and copper occasionally have unstable (i.e., radioactive) isotopes, radioactive isotopes of most of these elements are not available in nature because they have largely finished decaying and so have become stable. Conversely, heavier elements such as uranium, which decay into lighter elements in a process that takes billions of years, are present in nature exclusively in radioactive form. Furthermore, we must recall that in Curie's time the nature of the atom itself was still being debated. Physicists believed that matter could not be divided indefinitely but instead would eventually be reduced to its indivisible components. Chemists, on the other hand, observing that chemical reactions took place as if matter was composed of atomlike particles, used the atom as a foundation for conceptualizing and describing such reactions—but they were not ultimately concerned with the question of whether or not such indivisible atoms actually existed. As a physicist, Curie conjectured that radiating substances might lose mass in the form of atoms, but this idea is very different from the explanation eventually arrived at. It was not until the 1930s that advances in quantum mechanics overthrew the earlier understanding of the atom and showed that radiation occurs because the atoms themselves lose mass—a hypothesis that Curie, committed to the indivisible atom, could not be expected to have conceived of. Moreover, not only is Curie's inability to identify the mechanism by which radiation occurs understandable, it is also important to recognize that it was Curie's investigation of radiation that paved the way for the later breakthroughs.
200206_1-RC_2_8
[ "It is unlikely that quantum mechanics would have been developed without the theoretical contributions of Marie Curie toward an understanding of the nature of radioactivity.", "Although later shown to be incomplete and partially inaccurate, Marie Curie's investigations provided a significant step forward on the road to the eventual explanation of radioactivity.", "Though the scientific achievements of Marie Curie were impressive in scope, her career is blemished by her failure to determine the mechanism of radioactivity.", "The commitment of Marie Curie and other physicists of her time to the physicists' model of the atom prevented them from conducting fruitful investigations into radioactivity.", "Although today's theories have shown it to be inconclusive, Marie Curie's research into the sources and nature of radioactivity helped refute the chemists' model of the atom." ]
1
Which one of the following most accurately states the central idea of the passage?
Spurred by the discovery that a substance containing uranium emitted radiation, Marie Curie began studying radioactivity in 1897. She first tested gold and copper for radiation but found none. She then tested pitchblende, a mineral that was known to contain uranium, and discovered that it was more radioactive than uranium. Acting on the hypothesis that pitchblende must contain at least one other radioactive element, Curie was able to isolate a pair of previously unknown elements, polonium and radium. Turning her attention to the rate of radioactive emission, she discovered that uranium emitted radiation at a consistent rate, even if heated or dissolved. Based on these results, Curie concluded that the emission rate for a given element was constant. Furthermore, because radiation appeared to be spontaneous, with no discernible difference between radiating and nonradiating elements, she was unable to postulate a mechanism by which to explain radiation. It is now known that radiation occurs when certain isotopes (atoms of the same element that differ slightly in their atomic structure) decay, and that emission rates are not constant but decrease very slowly with time. Some critics have recently faulted Curie for not reaching these conclusions herself, but it would have been impossible for Curie to do so given the evidence available to her. While relatively light elements such as gold and copper occasionally have unstable (i.e., radioactive) isotopes, radioactive isotopes of most of these elements are not available in nature because they have largely finished decaying and so have become stable. Conversely, heavier elements such as uranium, which decay into lighter elements in a process that takes billions of years, are present in nature exclusively in radioactive form. Furthermore, we must recall that in Curie's time the nature of the atom itself was still being debated. Physicists believed that matter could not be divided indefinitely but instead would eventually be reduced to its indivisible components. Chemists, on the other hand, observing that chemical reactions took place as if matter was composed of atomlike particles, used the atom as a foundation for conceptualizing and describing such reactions—but they were not ultimately concerned with the question of whether or not such indivisible atoms actually existed. As a physicist, Curie conjectured that radiating substances might lose mass in the form of atoms, but this idea is very different from the explanation eventually arrived at. It was not until the 1930s that advances in quantum mechanics overthrew the earlier understanding of the atom and showed that radiation occurs because the atoms themselves lose mass—a hypothesis that Curie, committed to the indivisible atom, could not be expected to have conceived of. Moreover, not only is Curie's inability to identify the mechanism by which radiation occurs understandable, it is also important to recognize that it was Curie's investigation of radiation that paved the way for the later breakthroughs.
200206_1-RC_2_9
[ "The critics fail to take into account the obstacles Curie faced in dealing with the scientific community of her time.", "The critics do not appreciate that the eventual development of quantum mechanics depended on Curie's conjecture that radiating substances can lose atoms.", "The critics are unaware of the differing conceptions of the atom held by physicists and chemists.", "The critics fail to appreciate the importance of the historical context in which Curie's scientific conclusions were reached.", "The critics do not comprehend the intricate reasoning that Curie used in discovering polonium and radium." ]
3
The passage suggests that the author would be most likely to agree with which one of the following statements about the contemporary critics of Curie's studies of radioactivity?
Spurred by the discovery that a substance containing uranium emitted radiation, Marie Curie began studying radioactivity in 1897. She first tested gold and copper for radiation but found none. She then tested pitchblende, a mineral that was known to contain uranium, and discovered that it was more radioactive than uranium. Acting on the hypothesis that pitchblende must contain at least one other radioactive element, Curie was able to isolate a pair of previously unknown elements, polonium and radium. Turning her attention to the rate of radioactive emission, she discovered that uranium emitted radiation at a consistent rate, even if heated or dissolved. Based on these results, Curie concluded that the emission rate for a given element was constant. Furthermore, because radiation appeared to be spontaneous, with no discernible difference between radiating and nonradiating elements, she was unable to postulate a mechanism by which to explain radiation. It is now known that radiation occurs when certain isotopes (atoms of the same element that differ slightly in their atomic structure) decay, and that emission rates are not constant but decrease very slowly with time. Some critics have recently faulted Curie for not reaching these conclusions herself, but it would have been impossible for Curie to do so given the evidence available to her. While relatively light elements such as gold and copper occasionally have unstable (i.e., radioactive) isotopes, radioactive isotopes of most of these elements are not available in nature because they have largely finished decaying and so have become stable. Conversely, heavier elements such as uranium, which decay into lighter elements in a process that takes billions of years, are present in nature exclusively in radioactive form. Furthermore, we must recall that in Curie's time the nature of the atom itself was still being debated. Physicists believed that matter could not be divided indefinitely but instead would eventually be reduced to its indivisible components. Chemists, on the other hand, observing that chemical reactions took place as if matter was composed of atomlike particles, used the atom as a foundation for conceptualizing and describing such reactions—but they were not ultimately concerned with the question of whether or not such indivisible atoms actually existed. As a physicist, Curie conjectured that radiating substances might lose mass in the form of atoms, but this idea is very different from the explanation eventually arrived at. It was not until the 1930s that advances in quantum mechanics overthrew the earlier understanding of the atom and showed that radiation occurs because the atoms themselves lose mass—a hypothesis that Curie, committed to the indivisible atom, could not be expected to have conceived of. Moreover, not only is Curie's inability to identify the mechanism by which radiation occurs understandable, it is also important to recognize that it was Curie's investigation of radiation that paved the way for the later breakthroughs.
200206_1-RC_2_10
[ "Pitchblende was not known by scientists to contain any radioactive element besides uranium.", "Radioactivity was suspected by scientists to arise from the overall structure of pitchblende rather than from particular elements in it.", "Physicists and chemists had developed rival theories regarding the cause of radiation.", "Research was not being conducted in connection with the question of whether or not matter is composed of atoms.", "The majority of physicists believed uranium to be the sole source or radioactivity." ]
0
The passage implies which one of the following with regard to the time at which Curie began studying radioactivity?
Spurred by the discovery that a substance containing uranium emitted radiation, Marie Curie began studying radioactivity in 1897. She first tested gold and copper for radiation but found none. She then tested pitchblende, a mineral that was known to contain uranium, and discovered that it was more radioactive than uranium. Acting on the hypothesis that pitchblende must contain at least one other radioactive element, Curie was able to isolate a pair of previously unknown elements, polonium and radium. Turning her attention to the rate of radioactive emission, she discovered that uranium emitted radiation at a consistent rate, even if heated or dissolved. Based on these results, Curie concluded that the emission rate for a given element was constant. Furthermore, because radiation appeared to be spontaneous, with no discernible difference between radiating and nonradiating elements, she was unable to postulate a mechanism by which to explain radiation. It is now known that radiation occurs when certain isotopes (atoms of the same element that differ slightly in their atomic structure) decay, and that emission rates are not constant but decrease very slowly with time. Some critics have recently faulted Curie for not reaching these conclusions herself, but it would have been impossible for Curie to do so given the evidence available to her. While relatively light elements such as gold and copper occasionally have unstable (i.e., radioactive) isotopes, radioactive isotopes of most of these elements are not available in nature because they have largely finished decaying and so have become stable. Conversely, heavier elements such as uranium, which decay into lighter elements in a process that takes billions of years, are present in nature exclusively in radioactive form. Furthermore, we must recall that in Curie's time the nature of the atom itself was still being debated. Physicists believed that matter could not be divided indefinitely but instead would eventually be reduced to its indivisible components. Chemists, on the other hand, observing that chemical reactions took place as if matter was composed of atomlike particles, used the atom as a foundation for conceptualizing and describing such reactions—but they were not ultimately concerned with the question of whether or not such indivisible atoms actually existed. As a physicist, Curie conjectured that radiating substances might lose mass in the form of atoms, but this idea is very different from the explanation eventually arrived at. It was not until the 1930s that advances in quantum mechanics overthrew the earlier understanding of the atom and showed that radiation occurs because the atoms themselves lose mass—a hypothesis that Curie, committed to the indivisible atom, could not be expected to have conceived of. Moreover, not only is Curie's inability to identify the mechanism by which radiation occurs understandable, it is also important to recognize that it was Curie's investigation of radiation that paved the way for the later breakthroughs.
200206_1-RC_2_11
[ "summarize some aspects of one scientist's work and defend it against recent criticism", "describe a scientific dispute and argue for the correctness of an earlier theory", "outline a currently accepted scientific theory and analyze the evidence that led to its acceptance", "explain the mechanism by which a natural phenomenon occurs and summarize the debate that gave rise to this explanation", "discover the antecedents of a scientific theory and argue that the theory is not a genuine advance over its forerunners" ]
0
The author's primary purpose in the passage is to
Spurred by the discovery that a substance containing uranium emitted radiation, Marie Curie began studying radioactivity in 1897. She first tested gold and copper for radiation but found none. She then tested pitchblende, a mineral that was known to contain uranium, and discovered that it was more radioactive than uranium. Acting on the hypothesis that pitchblende must contain at least one other radioactive element, Curie was able to isolate a pair of previously unknown elements, polonium and radium. Turning her attention to the rate of radioactive emission, she discovered that uranium emitted radiation at a consistent rate, even if heated or dissolved. Based on these results, Curie concluded that the emission rate for a given element was constant. Furthermore, because radiation appeared to be spontaneous, with no discernible difference between radiating and nonradiating elements, she was unable to postulate a mechanism by which to explain radiation. It is now known that radiation occurs when certain isotopes (atoms of the same element that differ slightly in their atomic structure) decay, and that emission rates are not constant but decrease very slowly with time. Some critics have recently faulted Curie for not reaching these conclusions herself, but it would have been impossible for Curie to do so given the evidence available to her. While relatively light elements such as gold and copper occasionally have unstable (i.e., radioactive) isotopes, radioactive isotopes of most of these elements are not available in nature because they have largely finished decaying and so have become stable. Conversely, heavier elements such as uranium, which decay into lighter elements in a process that takes billions of years, are present in nature exclusively in radioactive form. Furthermore, we must recall that in Curie's time the nature of the atom itself was still being debated. Physicists believed that matter could not be divided indefinitely but instead would eventually be reduced to its indivisible components. Chemists, on the other hand, observing that chemical reactions took place as if matter was composed of atomlike particles, used the atom as a foundation for conceptualizing and describing such reactions—but they were not ultimately concerned with the question of whether or not such indivisible atoms actually existed. As a physicist, Curie conjectured that radiating substances might lose mass in the form of atoms, but this idea is very different from the explanation eventually arrived at. It was not until the 1930s that advances in quantum mechanics overthrew the earlier understanding of the atom and showed that radiation occurs because the atoms themselves lose mass—a hypothesis that Curie, committed to the indivisible atom, could not be expected to have conceived of. Moreover, not only is Curie's inability to identify the mechanism by which radiation occurs understandable, it is also important to recognize that it was Curie's investigation of radiation that paved the way for the later breakthroughs.
200206_1-RC_2_12
[ "narrate the progress of turn-of-the-century studies of radioactivity", "present a context for the conflict between physicists and chemists", "provide the factual background for an evaluation of Curie's work", "outline the structure of the author's central argument", "identify the error in Curie's work that undermines its usefulness" ]
2
The primary function of the first paragraph of the passage is to
Spurred by the discovery that a substance containing uranium emitted radiation, Marie Curie began studying radioactivity in 1897. She first tested gold and copper for radiation but found none. She then tested pitchblende, a mineral that was known to contain uranium, and discovered that it was more radioactive than uranium. Acting on the hypothesis that pitchblende must contain at least one other radioactive element, Curie was able to isolate a pair of previously unknown elements, polonium and radium. Turning her attention to the rate of radioactive emission, she discovered that uranium emitted radiation at a consistent rate, even if heated or dissolved. Based on these results, Curie concluded that the emission rate for a given element was constant. Furthermore, because radiation appeared to be spontaneous, with no discernible difference between radiating and nonradiating elements, she was unable to postulate a mechanism by which to explain radiation. It is now known that radiation occurs when certain isotopes (atoms of the same element that differ slightly in their atomic structure) decay, and that emission rates are not constant but decrease very slowly with time. Some critics have recently faulted Curie for not reaching these conclusions herself, but it would have been impossible for Curie to do so given the evidence available to her. While relatively light elements such as gold and copper occasionally have unstable (i.e., radioactive) isotopes, radioactive isotopes of most of these elements are not available in nature because they have largely finished decaying and so have become stable. Conversely, heavier elements such as uranium, which decay into lighter elements in a process that takes billions of years, are present in nature exclusively in radioactive form. Furthermore, we must recall that in Curie's time the nature of the atom itself was still being debated. Physicists believed that matter could not be divided indefinitely but instead would eventually be reduced to its indivisible components. Chemists, on the other hand, observing that chemical reactions took place as if matter was composed of atomlike particles, used the atom as a foundation for conceptualizing and describing such reactions—but they were not ultimately concerned with the question of whether or not such indivisible atoms actually existed. As a physicist, Curie conjectured that radiating substances might lose mass in the form of atoms, but this idea is very different from the explanation eventually arrived at. It was not until the 1930s that advances in quantum mechanics overthrew the earlier understanding of the atom and showed that radiation occurs because the atoms themselves lose mass—a hypothesis that Curie, committed to the indivisible atom, could not be expected to have conceived of. Moreover, not only is Curie's inability to identify the mechanism by which radiation occurs understandable, it is also important to recognize that it was Curie's investigation of radiation that paved the way for the later breakthroughs.
200206_1-RC_2_13
[ "the physical process that underlies a phenomenon", "the experimental apparatus in which a phenomenon arises", "the procedure scientists use to bring about the occurrence of a phenomenon", "the isotopes of an element needed to produce a phenomenon", "the scientific theory describing a phenomenon" ]
0
Which one of the following most accurately expresses the meaning of the word "mechanism" as used by the author in the last sentence of the first paragraph?
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community.
200206_1-RC_3_14
[ "The possibility of successfully blending different cultural forms is demonstrated by jazz's ability to incorporate European influences.", "The technique of blending the artistic concerns of two cultures could be an effective tool for social and political action.", "Due to the success of Invisible Man, Ellison was able to generate a renewed interest in and greater appreciation for jazz.", "The protagonist in Invisible Man illustrates the difficulty of combining the concerns of African Americans and concerns thought to be European in origin.", "Ellison's literary technique, though effective, is unfortunately too esoteric and complex to generate a large audience." ]
0
It can be inferred from the passage that the author most clearly holds which one of the following views?
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community.
200206_1-RC_3_15
[ "created a positive effect on the social conditions of the time", "provided a historical record of the plight of African Americans", "contained a tribute to the political contributions of African American predecessors", "prompted a necessary and further separation of American literature from European literary style", "generated a large audience made up of individuals from many cultural backgrounds" ]
0
Based on the passage, Ellison's critics would most likely have responded favorably to Invisible Man if it had
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community.
200206_1-RC_3_16
[ "a general tendency within the arts whereby certain images and themes recur within the works of certain cultures", "an obvious separation within the art community resulting from artists' differing aesthetic principles", "the cultural isolation artists feel when they address issues of individual identity", "the cultural obstacles that affect an audience's appreciation of art", "an expectation placed on an artist to uphold a specific cultural agenda in the creation of art" ]
4
The expression "cultural segregation in the arts" (lines 22-23) most clearly refers to
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community.
200206_1-RC_3_17
[ "summarize the thematic concerns of an artist in relation to other artists within the discipline", "affirm the importance of two artistic disciplines in relation to cultural concerns", "identify the source of the thematic content of one artist's work", "celebrate one artistic discipline by viewing it from the perspective of an artist from another discipline", "introduce a context within which the work of one artist may be more fully illuminated." ]
4
The primary purpose of the third paragraph is to
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community.
200206_1-RC_3_18
[ "It is not accessible to a wide audience.", "It is the most complex of modern musical forms.", "It embraces other forms of music.", "It avoids political themes.", "It has influenced much of contemporary literature." ]
2
Which one of the following statements about jazz is made in the passage?
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community.
200206_1-RC_3_19
[ "Audiences respond more favorably to art that has no political content.", "Groundless criticism of an artist's work can hinder an audience's reception of the work.", "Audiences have the capacity for empathy required to appreciate unique and expressive art.", "The most conscientious members of any audience are those who are aware of the specific techniques employed by the artist.", "Most audience members are bound by their cultural upbringing to view art from that cultural perspective." ]
2
It can be inferred from the passage that Ellison most clearly holds which one of the following views regarding an audience's relationship to works of art?
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community.
200206_1-RC_3_20
[ "make a case that a certain novelist is one of the most important novelists of the twentieth century", "demonstrate the value of using jazz as an illustration for further understanding the novels of a certain literary trend", "explain the relevance of a particular work and its protagonist to the political and social issues of the time", "defend the work of a certain novelist against criticism that it should have addressed political and social issues", "distinguish clearly between the value of art for art's sake and art for purposes such as political agendas" ]
3
The primary purpose of the passage is to
Published in 1952, Invisible Man featured a protagonist whose activities enabled the novel's author, Ralph Ellison, to explore and to blend themes specifically tied to the history and plight of African Americans with themes, also explored by many European writers with whose works Ellison was familiar, about the fractured, evanescent quality of individual identity and character. For this thematic blend, Ellison received two related criticisms: that his allegiance to the concerns of the individual prevented him from directing his art more toward the political action that critics believed was demanded by his era's social and political state of affairs; and that his indulging in European fictional modes lessened his contribution to the development of a distinctly African American novelistic style. Ellison found these criticisms to voice a common demand, namely that writers should censor themselves and sacrifice their individuality for supposedly more important political and cultural purposes. He replied that it demeans a people and its artists to suggest that a particular historical situation requires cultural segregation in the arts. Such a view characterizes all artists as incapable of seeing the world—with all its subtleties and complications—in unique yet expressive ways, and it makes the narrow assumption that audiences are capable of viewing the world only from their own perspectives. Models for understanding Invisible Man that may be of more help than those employed by its critics can be found in Ellison's own love for and celebration of jazz. Jazz has never closed itself off from other musical forms, and some jazz musicians have been able to take the European-influenced songs of U.S. theater and transform them into musical pieces that are unique and personal but also expressive of African American culture. In like manner, Ellison avoided the mere recapitulation of existing literary forms as well as the constraints of artistic isolation by using his work to explore and express the issues of identity and character that had so interested European writers. Further, jazz, featuring solos that, however daring, remain rooted in the band's rhythm section, provides a rich model for understanding the relationship of artist to community and parallels the ways the protagonist's voice in Invisible Man is set within a wider communal context. Ellison's explorations in the novel, often in the manner of loving caricature, of the ideas left him by both European and African American predecessors are a form of homage to them and thus ameliorate the sense of alienation he expresses through the protagonist. And even though Invisible Man's protagonist lives alone in a basement, Ellison proves that an individual whose unique voice is the result of the transmutation of a cultural inheritance can never be completely cut off from the community.
200206_1-RC_3_21
[ "Did Ellison himself enjoy jazz?", "What themes in Invisible Man were influenced by themes prevalent in jazz?", "What was Ellison's response to criticism concerning the thematic blend in Invisible Man?", "From what literary tradition did some of the ideas explored in Invisible Man come?", "What kind of music did some jazz musicians use in creating their works?" ]
1
The passage provides information to answer each of the following questions EXCEPT:
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing. Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish—each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake.
200206_1-RC_4_22
[ "the country's actions are consistent with previously accepted views of the psychology of risk-taking", "the new research findings indicate that the country from which the territory has been seized probably weighs the risk factors involved in the situation similarly to the way in which they are weighed by the aggressor nation", "in spite of surface appearances to the contrary, the new research findings suggest that the objective value of the potential gain is overridden by the risks", "the facts of the situation show that the government is motivated by factors other than objective calculation of the measurable risks and probable benefits", "the country's leaders most likely subjectively perceive the territory as having been taken from their country in the past" ]
0
Suppose that a country seizes a piece of territory with great mineral wealth that is claimed by a neighboring country, with a concomitant risk of failure involving moderate but easily tolerable harm in the long run. Given the information in the passage, the author would most likely say that
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing. Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish—each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake.
200206_1-RC_4_23
[ "the introduction to a thought experiment whose results the author expects will vary widely among different people", "a rhetorical question whose assumed answer is in conflict with the previously accepted view concerning risk-taking behavior", "the basis for an illustration of how the previously accepted view concerning risk-taking behavior applies accurately to some types of situations", "a suggestion that the discrepancies between subjective and objective valuations of possible decision outcomes are more illusive than real", "a transitional device to smooth an otherwise abrupt switch from discussion of previous theories to discussion of some previously unaccepted research findings" ]
2
The question in lines 24-27 functions primarily as
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing. Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish—each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake.
200206_1-RC_4_24
[ "When states try to regain losses through risky conflict, they generally are misled by inadequate or inaccurate information as to the risks that they run in doing so.", "Government decision makers subjectively evaluate the acceptability of risks involving national assets in much the same way that they would evaluate risks involving personal assets.", "A new method for predicting and mediating international conflict has emerged from a synthesis of the fields of economics and psychology.", "Truly rational decision making is a rare phenomenon in international crises and can, ironically, lead to severe consequences for those who engage in it.", "Contrary to previous assumptions, people are more likely to take substantial risks when their subjective assessments of expected benefits match or exceed the objectively measured costs." ]
1
It can most reasonably be inferred from the passage that the author would agree with which one of the following statements?
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing. Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish—each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake.
200206_1-RC_4_25
[ "a psychological analysis of the motives involved in certain types of collective decision making in the presence of conflict", "a presentation of a psychological hypothesis which is then subjected to a political test case", "a suggestion that psychologists should incorporate the findings of political scientists into their research", "an examination of some new psychological considerations regarding risk and their application to another field of inquiry", "a summary of two possible avenues for understanding international crises and conflicts" ]
3
The passage can be most accurately described as
Recent investigations into the psychology of decision making have sparked interest among scholars seeking to understand why governments sometimes take gambles that appear theoretically unjustifiable on the basis of expected costs and benefits. Researchers have demonstrated some significant discrepancies between objective measurements of possible decision outcomes and the ways in which people subjectively value such possible results. Many of these discrepancies relate to the observation that a possible outcome perceived as a loss typically motivates more strongly than the prospect of an equivalent gain. Risk-taking is thus a more common strategy for those who believe they will lose what they already possess than it is for those who wish to gain something they do not have. Previously, the notion that rational decision makers prefer risk-avoiding choices was considered to apply generally, epitomized by the assumption of many economists that entrepreneurs and consumers will choose a risky venture over a sure thing only when the expected measurable value of the outcome is sufficiently high to compensate the decision maker for taking the risk. What is the minimum prize that would be required to make a gamble involving a 50 percent chance of losing $100 and a 50 percent chance of winning the prize acceptable? It is commonplace that the pleasure of winning a sum of money is much less intense than the pain of losing the same amount; accordingly, such a gamble would typically be accepted only when the possible gain greatly exceeds the possible loss. Research subjects do, in fact, commonly judge that a 50 percent chance to lose $100 is unacceptable unless it is combined with an equal chance to win more than $300. Nevertheless, the recent studies indicate that risk-accepting strategies are common when the alternative to a sure loss is a substantial chance of losing an even larger amount, coupled with some chance—even a small one—of losing nothing. Such observations are quite salient to scholars of international conflict and crisis. For example, governments typically are cautious in foreign policy initiatives that entail risk, especially the risk of armed conflict. But nations also often take huge gambles to retrieve what they perceive to have been taken from them by other nations. This type of motivation, then, can lead states to take risks that far outweigh the objectively measurable value of the lost assets. For example, when Britain and Argentina entered into armed conflict in 1982 over possession of the Falkland Islands—or Malvinas, as they are called in Spanish—each viewed the islands as territory that had been taken from them by the other; thus each was willing to commit enormous resources—and risks—to recapturing them. In international affairs, it is vital that each actor in such a situation understand the other's subjective view of what is at stake.
200206_1-RC_4_26
[ "Researchers have previously been too willing to accept the claims that subjects make about their preferred choices in risk-related decision problems.", "There is inadequate research support for the hypothesis that except when a gamble is the only available means for averting an otherwise certain loss, people typically are averse to risk-taking.", "It can reasonably be argued that the risk that Britain accepted in its 1982 conflict with Argentina outweighed the potential objectively measurable benefit of that venture.", "The new findings suggest that because of the subjective elements involved, governmental strategies concerning risks of loss in international crises will remain incomprehensible to outside observers.", "Moderate risks in cases involving unavoidable losses are often taken on the basis of reasoning that diverges markedly from that which was studied in the recent investigations." ]
2
The passage most clearly suggests that the author would agree with which one of the following statements?
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning was minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example the longleaf, slash pine, and scrub oak forests of the southeastern U.S. Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico.
200210_3-RC_1_1
[ "Despite extensive evidence that native populations had been burning North and South American forests extensively before 1492, some scholars persist in claiming that such burning was either infrequent or the result of natural causes.", "In opposition to the widespread belief that in 1492 the Western Hemisphere was uncultivated, scholars unanimously agree that native populations were substantially altering North and South American forests well before the arrival of Europeans.", "Although some scholars minimize the scope and importance of the burning of forests engaged in by native populations of North and South America before 1492, evidence of the frequency and impact of such burning is actually quite extensive.", "Where scholars had once believed that North and South American forests remained uncultivated until the arrival of Europeans, there is now general agreement that native populations had been cultivating the forests since well before 1492.", "While scholars have acknowledged that North and South American forests were being burned well before 1492, there is still disagreement over whether such burning was the result of natural causes or of the deliberate actions of native populations." ]
2
Which one of the following most accurately expresses the main idea of the passage?
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning were minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known Native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example, the longleaf, slash pine, and scrub oak forests of the southeastern U.S. Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico.
200210_3-RC_1_2
[ "numerous types of hardwood trees", "extensive herbaceous undergrowth", "a variety of fire-tolerant plants", "various stages of ecological maturity", "grassy openings such as meadows or glades" ]
0
It can be inferred that a forest burned as described in the passage would have been LEAST likely to display
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning were minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known Native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example, the longleaf, slash pine, and scrub oak forests of the southeastern U.S. Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico.
200210_3-RC_1_3
[ "scrub oak forests in the southeastern U.S.", "slash pine forests in the southeastern U.S.", "pine forests in Guatemala at high elevations", "pine forests in Mexico at high elevations", "pine forests in Nicaragua at low elevations" ]
4
Which one of the following is a type of forest identified by the author as a product of controlled burning in recent times?
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning were minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known Native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example, the longleaf, slash pine, and scrub oak forests of the southeastern U.S. Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico.
200210_3-RC_1_4
[ "extensive homogeneous forests at high elevation", "extensive homogeneous forests at low elevation", "extensive heterogeneous forests at high elevation", "extensive heterogeneous forests at low elevation", "extensive sedimentary charcoal accumulations at high elevation" ]
1
Which one of the following is presented by the author as evidence of controlled burning in the tropics before the arrival of Europeans?
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning were minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known Native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example, the longleaf, slash pine, and scrub oak forests of the southeastern U.S. Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico.
200210_3-RC_1_5
[ "The long-term effects of controlled burning could just as easily have been caused by natural fires.", "Herbaceous undergrowth prevents many forests from reaching full maturity.", "European settlers had little impact on the composition of the ecosystems in North and South America.", "Certain species of plants may not have been as widespread in North America without controlled burning.", "Nicaraguan pine forests could have been created either by natural fires or by controlled burning." ]
3
With which one of the following would the author be most likely to agree?
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning were minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known Native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example, the longleaf, slash pine, and scrub oak forests of the southeastern U.S. Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico.
200210_3-RC_1_6
[ "the similar characteristics of fires in different regions", "the simultaneous presence of forests at varying stages of maturity", "the existence of herbaceous undergrowth in certain forests", "the heavy accumulation of charcoal near populous settlements", "the presence of meadows and glades in certain forests" ]
0
As evidence for the routine practice of forest burning by native populations before the arrival of Europeans, the author cites all of the following EXCEPT:
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning were minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known Native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example, the longleaf, slash pine, and scrub oak forests of the southeastern U.S. Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico.
200210_3-RC_1_7
[ "forest clearing followed by controlled burning of forests", "tropical rain forest followed by pine forest", "European settlement followed by abandonment of land", "homogeneous pine forest followed by mixed hardwoods", "pine forests followed by established settlements" ]
3
The "succession" mentioned in line 57 refers to
The myth persists that in 1492 the Western Hemisphere was an untamed wilderness and that it was European settlers who harnessed and transformed its ecosystems. But scholarship shows that forests, in particular, had been altered to varying degrees well before the arrival of Europeans. Native populations had converted much of the forests to successfully cultivated stands, especially by means of burning. Nevertheless, some researchers have maintained that the extent, frequency, and impact of such burning were minimal. One geographer claims that climatic change could have accounted for some of the changes in forest composition; another argues that burning by native populations was done only sporadically, to augment the effects of natural fires. However, a large body of evidence for the routine practice of burning exists in the geographical record. One group of researchers found, for example, that sedimentary charcoal accumulations in what is now the northeastern United States are greatest where known Native American settlements were greatest. Other evidence shows that, while the characteristics and impact of fires set by native populations varied regionally according to population size, extent of resource management techniques, and environment, all such fires had markedly different effects on vegetation patterns than did natural fires. Controlled burning created grassy openings such as meadows and glades. Burning also promoted a mosaic quality to North and South American ecosystems, creating forests in many different stages of ecological development. Much of the mature forestland was characterized by open, herbaceous undergrowth, another result of the clearing brought about by burning. In North America, controlled burning created conditions favorable to berries and other fire-tolerant and sun-loving foods. Burning also converted mixed stands of trees to homogeneous forest, for example, the longleaf, slash pine, and scrub oak forests of the southeastern U.S. Natural fires do account for some of this vegetation, but regular burning clearly extended and maintained it. Burning also influenced forest composition in the tropics, where natural fires are rare. An example is the pine-dominant forests of Nicaragua, where warm temperatures and heavy rainfall naturally favor mixed tropical or rain forests. While there are extensive pine forests in Guatemala and Mexico, these primarily grow in cooler, drier, higher elevations, regions where such vegetation is in large part natural and even prehuman. Today, the Nicaraguan pines occur where there has been clearing followed by regular burning, and the same is likely to have occurred in the past: such forests were present when Europeans arrived and were found only in areas where native settlements were substantial; when these settlements were abandoned, the land returned to mixed hardwoods. This succession is also evident elsewhere in similar low tropical elevations in the Caribbean and Mexico.
200210_3-RC_1_8
[ "refute certain researchers' views", "support a common belief", "counter certain evidence", "synthesize two viewpoints", "correct the geographical record" ]
0
The primary purpose of the passage is to
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time— "several decades" —is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius? The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional.
200210_3-RC_2_9
[ "Although some argue that the authority of legal systems is purely intellectual, these systems possess a degree of institutional authority due to their ability to enforce acceptance of badly reasoned or socially inappropriate judicial decisions.", "Although some argue that the authority of legal systems is purely institutional, these systems are more correctly seen as vehicles for applying the intellectual authority of the law while possessing no coercive power of their own.", "Although some argue that the authority of legal systems is purely intellectual, these systems in fact wield institutional authority by virtue of the fact that intellectual authority reduces to institutional authority.", "Although some argue that the authority of legal systems is purely institutional, these systems possess a degree of intellectual authority due to their ability to reconsider badly reasoned or socially inappropriate judicial decisions.", "Although some argue that the authority of legal systems is purely intellectual, these systems in fact wield exclusively institutional authority in that they possess the power to enforce acceptance of badly reasoned or socially inappropriate judicial decisions." ]
3
Which one of the following most accurately states the main idea of the passage?
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time— "several decades" —is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius? The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional.
200210_3-RC_2_10
[ "fail to gain institutional consensus", "fail to challenge institutional beliefs", "fail to conform to the example of precedent", "fail to convince by virtue of good reasoning", "fail to gain acceptance except by coercion" ]
0
That some arguments "never receive institutional imprimatur" (lines 22–23) most likely means that these arguments
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time— "several decades" —is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius? The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional.
200210_3-RC_2_11
[ "Judges often act under time constraints and occasionally render a badly reasoned or socially inappropriate decision.", "In some legal systems, the percentage of judicial decisions that contain faulty reasoning is far higher than it is in other legal systems.", "Many socially inappropriate legal decisions are thrown out by judges only after citizens begin to voice opposition to them.", "In some legal systems, the percentage of judicial decisions that are reconsidered and revised is far higher than it is in other legal systems.", "Judges are rarely willing to rectify the examples of faulty reasoning they discover when reviewing previous legal decisions." ]
4
Which one of the following, if true, most challenges the author's contention that legal systems contain a significant degree of intellectual authority?
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time— "several decades" —is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius? The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional.
200210_3-RC_2_12
[ "Institutional authority may depend on coercion; intellectual authority never does.", "Intellectual authority may accept well-reasoned arguments; institutional authority never does.", "Institutional authority may depend on convention; intellectual authority never does.", "Intellectual authority sometimes challenges institutional beliefs; institutional authority never does.", "Intellectual authority sometimes conflicts with precedent; institutional authority never does." ]
1
Given the information in the passage, the author is LEAST likely to believe which one of the following?
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time— "several decades" —is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius? The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional.
200210_3-RC_2_13
[ "distinguish the notion of institutional authority from that of intellectual authority", "give an example of an argument possessing intellectual authority that did not prevail in its own time", "identify an example in which the ascription of musical genius did not withstand the test of time", "illustrate the claim that assessing intellectual authority requires an appeal to institutional authority", "demonstrate that the authority wielded by the arbiters of musical genius is entirely institutional" ]
3
The author discusses the example from musicology primarily in order to
Intellectual authority is defined as the authority of arguments that prevail by virtue of good reasoning and do not depend on coercion or convention. A contrasting notion, institutional authority, refers to the power of social institutions to enforce acceptance of arguments that may or may not possess intellectual authority. The authority wielded by legal systems is especially interesting because such systems are institutions that nonetheless aspire to a purely intellectual authority. One judge goes so far as to claim that courts are merely passive vehicles for applying the intellectual authority of the law and possess no coercive powers of their own. In contrast, some critics maintain that whatever authority judicial pronouncements have is exclusively institutional. Some of these critics go further, claiming that intellectual authority does not really exist—i.e., it reduces to institutional authority. But it can be countered that these claims break down when a sufficiently broad historical perspective is taken: Not all arguments accepted by institutions withstand the test of time, and some well-reasoned arguments never receive institutional imprimatur. The reasonable argument that goes unrecognized in its own time because it challenges institutional beliefs is common in intellectual history; intellectual authority and institutional consensus are not the same thing. But, the critics might respond, intellectual authority is only recognized as such because of institutional consensus. For example, if a musicologist were to claim that an alleged musical genius who, after several decades, had not gained respect and recognition for his or her compositions is probably not a genius, the critics might say that basing a judgment on a unit of time— "several decades" —is an institutional rather than an intellectual construct. What, the critics might ask, makes a particular number of decades reasonable evidence by which to judge genius? The answer, of course, is nothing, except for the fact that such institutional procedures have proved useful to musicologists in making such distinctions in the past. The analogous legal concept is the doctrine of precedent, i.e., a judge's merely deciding a case a certain way becoming a basis for deciding later cases the same way—a pure example of institutional authority. But the critics miss the crucial distinction that when a judicial decision is badly reasoned, or simply no longer applies in the face of evolving social standards or practices, the notion of intellectual authority is introduced: judges reconsider, revise, or in some cases throw out the decision. The conflict between intellectual and institutional authority in legal systems is thus played out in the reconsideration of decisions, leading one to draw the conclusion that legal systems contain a significant degree of intellectual authority even if the thrust of their power is predominantly institutional.
200210_3-RC_2_14
[ "It is the only tool judges should use if they wish to achieve a purely intellectual authority.", "It is a useful tool in theory but in practice it invariably conflicts with the demands of intellectual authority.", "It is a useful tool but lacks intellectual authority unless it is combined with the reconsidering of decisions.", "It is often an unreliable tool because it prevents judges from reconsidering the intellectual authority of past decisions.", "It is an unreliable tool that should be abandoned because it lacks intellectual authority." ]
2
Based on the passage, the author would be most likely to hold which one of the following views about the doctrine of precedent?
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual.
200210_3-RC_3_15
[ "Abrams argues that historical sociology rejects the claims of sociologists who assert that the sociological concept of structuring cannot be applied to the interactions between individuals and history.", "Abrams argues that historical sociology assumes that, despite the views of sociologists to the contrary, history influences the social contingencies that affect individuals.", "Abrams argues that historical sociology demonstrates that, despite the views of sociologists to the contrary, social structures both influence and are influenced by the events of history.", "Abrams describes historical sociology as a discipline that unites two approaches taken by sociologists to studying the formation of societies and applies the resulting combined approach to the study of history.", "Abrams describes historical sociology as an attempt to compensate for the shortcomings of traditional historical methods by applying the methods established in sociology." ]
3
Which one of the following most accurately states the central idea of the passage?
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual.
200210_3-RC_3_16
[ "Only if they adhere to this structure, Abrams believes, can historical sociologists conclude with any certainty that the events that constitute the historical record are influenced by the actions of individuals.", "Only if they adhere to this structure, Abrams believes, will historical sociologists be able to counter the standard sociological assumption that there is very little connection between history and individual agency.", "Unless they can agree to adhere to this structure, Abrams believes, historical sociologists risk having their discipline treated as little more than an interesting but ultimately indefensible adjunct to history and sociology.", "By adhering to this structure, Abrams believes, historical sociologists can shed light on issues that traditional sociologists have chosen to ignore in their one-sided approaches to the formation of societies.", "By adhering to this structure, Abrams believes, historical sociologists will be able to better portray the complex connections between human agency and history." ]
4
Given the passage's argument, which one of the following sentences most logically completes the last paragraph?
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual.
200210_3-RC_3_17
[ "a social phenomenon", "a form of historical structuring", "an accidental circumstance", "a condition controllable to some extent by an individual", "a partial determinant of an individual's actions" ]
1
The passage states that a contingency could be each of the following EXCEPT:
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual.
200210_3-RC_3_18
[ "In a report on the enactment of a bill into law, a journalist explains why the need for the bill arose, sketches the biography of the principal legislator who wrote the bill, and ponders the effect that the bill's enactment will have both on society and on the legislator's career.", "In a consultation with a patient, a doctor reviews the patient's medical history, suggests possible reasons for the patient's current condition, and recommends steps that the patient should take in the future to ensure that the condition improves or at least does not get any worse.", "In an analysis of a historical novel, a critic provides information to support the claim that details of the work's setting are accurate, explains why the subject of the novel was of particular interest to the author, and compares the novel with some of the author's other books set in the same period.", "In a presentation to stockholders, a corporation's chief executive officer describes the corporation's most profitable activities during the past year, introduces the vice president largely responsible for those activities, and discusses new projects the vice president will initiate in the coming year.", "In developing a film based on a historical event, a filmmaker conducts interviews with participants in the event, bases part of the film's screenplay on the interviews, and concludes the screenplay with a sequence of scenes speculating on the outcome of the event had certain details been different." ]
0
Which one of the following is most analogous to the ideal work of a historical sociologist as outlined by Abrams?
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual.
200210_3-RC_3_19
[ "outline the merits of Abrams's conception of historical sociology", "convey the details of Abrams's conception of historical sociology", "anticipate challenges to Abrams's conception of historical sociology", "examine the roles of key terms used in Abrams's conception of historical sociology", "identify the basis of Abrams's conception of historical sociology" ]
4
The primary function of the first paragraph of the passage is to
In explaining the foundations of the discipline known as historical sociology—the examination of history using the methods of sociology—historical sociologist Philip Abrams argues that, while people are made by society as much as society is made by people, sociologists' approach to the subject is usually to focus on only one of these forms of influence to the exclusion of the other. Abrams insists on the necessity for sociologists to move beyond these one-sided approaches to understand society as an entity constructed by individuals who are at the same time constructed by their society. Abrams refers to this continuous process as "structuring." Abrams also sees history as the result of structuring. People, both individually and as members of collectives, make history. But our making of history is itself formed and informed not only by the historical conditions we inherit from the past, but also by the prior formation of our own identities and capacities, which are shaped by what Abrams calls "contingencies" —social phenomena over which we have varying degrees of control. Contingencies include such things as the social conditions under which we come of age, the condition of our household's economy, the ideologies available to help us make sense of our situation, and accidental circumstances. The ways in which contingencies affect our individual or group identities create a structure of forces within which we are able to act, and that partially determines the sorts of actions we are able to perform. In Abrams's analysis, historical structuring, like social structuring, is manifold and unremitting. To understand it, historical sociologists must extract from it certain significant episodes, or events, that their methodology can then analyze and interpret. According to Abrams, these events are points at which action and contingency meet, points that represent a cross section of the specific social and individual forces in play at a given time. At such moments, individuals stand forth as agents of history not simply because they possess a unique ability to act, but also because in them we see the force of the specific social conditions that allowed their actions to come forth. Individuals can "make their mark" on history, yet in individuals one also finds the convergence of wider social forces. In order to capture the various facets of this mutual interaction, Abrams recommends a fourfold structure to which he believes the investigations of historical sociologists should conform: first, description of the event itself; second, discussion of the social context that helped bring the event about and gave it significance; third, summary of the life history of the individual agent in the event; and fourth, analysis of the consequences of the event both for history and for the individual.
200210_3-RC_3_20
[ "the effect of the fact that a person experienced political injustice on that person's decision to work for political reform", "the effect of the fact that a person was raised in an agricultural region on that person's decision to pursue a career in agriculture", "the effect of the fact that a person lives in a particular community on that person's decision to visit friends in another community", "the effect of the fact that a person's parents practiced a particular religion on that person's decision to practice that religion", "the effect of the fact that a person grew up in financial hardship on that person's decision to help others in financial hardship" ]
2
Based on the passage, which one of the following is the LEAST illustrative example of the effect of a contingency upon an individual?
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles.
200210_3-RC_4_21
[ "Training in ethics that incorporates narrative literature would better cultivate flexible ethical thinking and increase medical students' capacity for empathetic patient care as compared with the traditional approach of medical schools to such training.", "Traditional abstract ethical training, because it is too heavily focused on theoretical reasoning, tends to decrease or impair the medical student's sensitivity to modern ethical dilemmas.", "Only a properly designed curriculum that balances situational, abstract, and narrative approaches to ethics will adequately prepare the medical student for complex ethical confrontations involving actual patients.", "Narrative-based instruction in ethics is becoming increasingly popular in medical schools because it requires students to develop a capacity for empathy by examining complex moral issues from a variety of perspectives.", "The study of narrative literature in medical schools would nurture moral intuition, enabling the future doctor to make ethical decisions without appeal to general principles." ]
0
Which one of the following most accurately states the main point of the passage?
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles.
200210_3-RC_4_22
[ "a sense of curiosity, aroused by reading, that leads one to follow actively the development of problems involving the characters depicted in narratives", "a faculty of seeking out and recognizing the ethical controversies involved in human relationships and identifying oneself with one side or another in such controversies", "a capacity to understand the complexities of various ethical dilemmas and to fashion creative and innovative solutions to them", "an ability to understand personal aspects of ethically significant situations even if one is not a direct participant and to empathize with those involved in them", "an ability to act upon ethical principles different from one's own for the sake of variety" ]
3
Which one of the following most accurately represents the author's use of the term "moral imagination" in line 38?
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles.
200210_3-RC_4_23
[ "The heavy load of technical coursework in today's medical schools often keeps them from giving adequate emphasis to courses in medical ethics.", "Students learn more about ethics through the use of fiction than through the use of nonfictional readings.", "The traditional method of ethical training in medical schools should be supplemented or replaced by more direct practical experience with real-life patients in ethically difficult situations.", "The failings of an abstract, philosophical training in ethics can be remedied only by replacing it with a purely narrative-based approach.", "Neither scientific training nor traditional philosophical ethics adequately prepares doctors to deal with the emotional dimension of patients' needs." ]
4
It can be inferred from the passage that the author would most likely agree with which one of the following statements?
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles.
200210_3-RC_4_24
[ "to advise medical schools on how to implement a narrative-based approach to ethics in their curricula", "to argue that the current methods of ethics education are counterproductive to the formation of empathetic doctor-patient relationships", "to argue that the ethical content of narrative literature foreshadows the pitfalls of situational ethics", "to propose an approach to ethical training in medical school that will preserve the human dimension of medicine", "to demonstrate the value of a well-designed ethics education for medical students" ]
3
Which one of the following is most likely the author's overall purpose in the passage?
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles.
200210_3-RC_4_25
[ "It tends to avoid the extreme relativism of situational ethics.", "It connects students to varied types of human events.", "It can help lead medical students to develop new ways of dealing with patients.", "It requires students to examine moral issues from new perspectives.", "It can help insulate future doctors from the shock of the ethical dilemmas they will confront." ]
4
The passage ascribes each of the following characteristics to the use of narrative literature in ethical education EXCEPT:
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles.
200210_3-RC_4_26
[ "Doctors face a variety of such dilemmas.", "Purely scientific thinking is inadequate for dealing with modern ethical dilemmas.", "Such dilemmas are more prevalent today as a result of scientific and technological advances in medicine.", "Theorizing about ethics does little to prepare students to face such dilemmas.", "Narrative literature can help make medical students ready to face such dilemmas." ]
2
With regard to ethical dilemmas, the passage explicitly states each of the following EXCEPT:
One of the greatest challenges facing medical students today, apart from absorbing volumes of technical information and learning habits of scientific thought, is that of remaining empathetic to the needs of patients in the face of all this rigorous training. Requiring students to immerse themselves completely in medical coursework risks disconnecting them from the personal and ethical aspects of doctoring, and such strictly scientific thinking is insufficient for grappling with modern ethical dilemmas. For these reasons, aspiring physicians need to develop new ways of thinking about and interacting with patients. Training in ethics that takes narrative literature as its primary subject is one method of accomplishing this. Although training in ethics is currently provided by medical schools, this training relies heavily on an abstract, philosophical view of ethics. Although the conceptual clarity provided by a traditional ethics course can be valuable, theorizing about ethics contributes little to the understanding of everyday human experience or to preparing medical students for the multifarious ethical dilemmas they will face as physicians. A true foundation in ethics must be predicated on an understanding of human behavior that reflects a wide array of relationships and readily adapts to various perspectives, for this is what is required to develop empathy. Ethics courses drawing on narrative literature can better help students prepare for ethical dilemmas precisely because such literature attaches its readers so forcefully to the concrete and varied world of human events. The act of reading narrative literature is uniquely suited to the development of what might be called flexible ethical thinking. To grasp the development of characters, to tangle with heightening moral crises, and to engage oneself with the story not as one's own but nevertheless as something recognizable and worthy of attention, readers must use their moral imagination. Giving oneself over to the ethical conflicts in a story requires the abandonment of strictly absolute, inviolate sets of moral principles. Reading literature also demands that the reader adopt another person's point of view—that of the narrator or a character in a story— and thus requires the ability to depart from one's personal ethical stance and examine moral issues from new perspectives. It does not follow that readers, including medical professionals, must relinquish all moral principles, as is the case with situational ethics, in which decisions about ethical choices are made on the basis of intuition and are entirely relative to the circumstances in which they arise. Such an extremely relativistic stance would have as little benefit for the patient or physician as would a dogmatically absolutist one. Fortunately, the incorporation of narrative literature into the study of ethics, while serving as a corrective to the latter stance, need not lead to the former. But it can give us something that is lacking in the traditional philosophical study of ethics—namely, a deeper understanding of human nature that can serve as a foundation for ethical reasoning and allow greater flexibility in the application of moral principles.
200210_3-RC_4_27
[ "unqualified disapproval of the method and disapproval of all of its effects", "reserved judgment regarding the method and disapproval of all of its effects", "partial disapproval of the method and clinical indifference toward its effects", "partial approval of the method and disapproval of all of its effects", "partial disapproval of the method and approval of some of its effects" ]
4
The author's attitude regarding the traditional method of teaching ethics in medical school can most accurately be described as
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them.
200212_3-RC_1_1
[ "Muralism developed its political goals in Mexico in service to the revolutionary government, while its aesthetic aspects were borrowed from other countries.", "Inspired by political developments in Mexico and trends in modern art, muralist painters devised an innovative style of large-scale painting to reflect Mexican culture.", "The stylistic features of muralism represent a consistent working out of the implications of its revolutionary ideology.", "Though the Mexican government supported muralism as a means of promoting nationalist ideology, muralists such as Siqueiros, Rivera, and Orozco developed the movement in contradictory, more controversial directions.", "Because of its large scale and stylistic innovations, the type of contemporary Mexican art known as muralism is capable of expressing a much wider and more complex view of Mexico's culture and history than previous artistic movements could express." ]
1
Which one of the following most accurately expresses the main point of the passage?
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them.
200212_3-RC_1_2
[ "assimilation of elements of Mexican customs and myth", "movement beyond single, centralized subjects", "experimentation with expressionist techniques", "distinctive manner of artistic expression", "underlying resistance to change" ]
3
The author mentions Rivera's use of "pre-Columbian sculpture and the Italian Renaissance fresco" (lines 36–37) primarily in order to provide an example of Rivera's
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them.
200212_3-RC_1_3
[ "its revolutionary ideology", "its use of brilliant color", "its tailoring of style to its medium", "its use of elements from everyday life", "its expression of populist ideas" ]
2
Which one of the following aspects of muralist painting does the author appear to value most highly?
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them.
200212_3-RC_1_4
[ "Art should be evaluated on the basis of its style and form rather than on its content.", "Government sponsorship is essential to the flourishing of art.", "Realism is unsuited to large-scale public art.", "The use of techniques borrowed from other cultures can contribute to the rediscovery of one's national identity.", "Traditional easel painting is an elitist art form." ]
3
Based on the passage, with which one of the following statements about art would the muralists be most likely to agree?
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them.
200212_3-RC_1_5
[ "It encouraged the adoption of modern innovations from abroad.", "It encouraged artists to pursue the realist tradition in art.", "It called on artists to portray Mexico's heritage and future promise.", "It developed the theoretical base of the muralist movement.", "It favored artists who introduced stylistic innovations over those who worked in the realist tradition." ]
2
According to the passage, the Mexican government elected in 1920 took which one of the following approaches to art following the Mexican Revolution?
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them.
200212_3-RC_1_6
[ "The major figures in muralism also created important works in that style that were deliberately not political in content.", "Not all muralist painters were familiar with the innovations being made at that time in the art world.", "The changes taking place at that time in the art world were revivals of earlier movements.", "Officials in the Mexican government were not familiar with the innovations being made at that time in the art world.", "Only those muralist works that reflected nationalist sentiments were permitted to be viewed by the public." ]
0
Which one of the following, if true, most supports the author's claim about the relationship between muralism and the Mexican Revolution (lines 24–27)?
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them.
200212_3-RC_1_7
[ "Its subject matter consisted primarily of current events.", "It could be viewed outdoors only.", "It used the same techniques as are used in easel painting.", "It exhibited remarkable stylistic uniformity.", "It was intended to be viewed from more than one angle." ]
4
Which one of the following does the author explicitly identify as a characteristic of Mexican mural art?
The contemporary Mexican artistic movement known as muralism, a movement of public art that began with images painted on walls in an effort to represent Mexican national culture, is closely linked ideologically with its main sponsor, the new Mexican government elected in 1920 following the Mexican Revolution. This government promoted an ambitious cultural program, and the young revolutionary state called on artists to display Mexico's richness and possibility. But the theoretical foundation of the movement was formulated by the artists themselves. The major figures in the muralist movement, David Alfaro Siqueiros, Diego Rivera, and José Clemente Orozco, all based their work on a common premise: that art should incorporate images and familiar ideas as it commented upon the historic period in which it was created. In the process, they assimilated into their work the customs, myths, geography, and history of the local communities that constitute the basis of Mexican national culture. But while many muralist works express populist or nationalist ideas, it is a mistake to attempt to reduce Mexican mural painting to formulaic, official government art. It is more than merely the result of the changes in political and social awareness that the Mexican Revolution represented; it also reflected important innovations in the art world. In creating a wide panorama of Mexico's history on the walls of public buildings throughout the country, muralists often used a realist style. But awareness of these innovations enabled them to be freer in expression than were more traditional practitioners of this style. Moreover, while they shared a common interest in rediscovering their Mexican national identity, they developed their own distinct styles. Rivera, for example, incorporated elements from pre-Columbian sculpture and the Italian Renaissance fresco into his murals and used a strange combination of mechanical shapes to depict the faces and bodies of people. Orozco, on the other hand, showed a more expressionist approach, with loose brushwork and an openly emotional treatment of form. He relied on a strong diagonal line to give a sense of heightened movement and drama to his work. Siqueiros developed in a somewhat similar direction as Orozco, but incorporated asymmetric compositions, a high degree of action, and brilliant color. This stylistic experimentation can be seen as resulting from the demands of a new medium. In stretching their concepts from small easel paintings with a centralized subject to vast compositions with mural dimensions, muralists learned to think big and to respect the sweeping gesture of the arm—the brush stroke required to achieve the desired bold effect of mural art. Furthermore, because they were painting murals, they thought in terms of a continuum; their works were designed to be viewable from many different vantage points, to have an equally strong impact in all parts, and to continue to be viewable as people moved across in front of them.
200212_3-RC_1_8
[ "describe the unifying features of muralism", "provide support for the argument that the muralists often did not support government causes", "support the claim that muralists always used their work to comment on their own historical period", "illustrate how the muralists appropriated elements of Mexican tradition", "argue that muralism cannot be understood by focusing solely on its political dimension" ]
4
The primary purpose of the second paragraph is to
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure.
200212_3-RC_2_9
[ "While originally written for children, fairy tales also contain a deeper significance for adults that psychologists such as Bettelheim have shown to be their true meaning.", "The \"superficial\" reading of a fairy tale, which deals only with the tale's content, is actually more enlightening for children than the \"deeper\" reading preferred by psychologists such as Bettelheim.", "Because the content of fairy tales has historically run counter to prevailing orthodoxies about child-rearing, psychologists such as Bettelheim sometimes reinterpret them to suit their own pedagogical needs.", "The pervasive need to deny adult evil has led psychologists such as Bettelheim to erroneously view fairy tales solely as instruments of moral instruction for children.", "Although dismissed as unproductive by psychologists such as Bettelheim, fairy tales offer children imaginative experiences that help them grow into morally responsible adults." ]
3
Which one of the following most accurately states the main idea of the passage?
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure.
200212_3-RC_2_10
[ "Hansel and Gretel are abandoned by their hard-hearted parents.", "Hansel and Gretel are imprisoned by the witch.", "Hansel and Gretel overpower the witch.", "Hansel and Gretel take the witch's jewels.", "Hansel and Gretel bring the witch's jewels home to their parents." ]
0
Based on the passage, which one of the following elements of "Hansel and Gretel" would most likely be de-emphasized in Bettelheim's interpretation of the tale?
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure.
200212_3-RC_2_11
[ "concern that the view will undermine the ability of fairy tales to provide moral instruction", "scorn toward the view's supposition that moral tenets can be universally valid", "disapproval of the view's depiction of children as selfish and adults as innocent", "anger toward the view's claim that children often improve as a result of deserved punishment", "disappointment with the view's emphasis on the manifest content of a tale" ]
2
Which one of the following is the most accurate description of the author's attitude toward Bettelheim's view of fairy tales?
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure.
200212_3-RC_2_12
[ "Children who never attempt to look for the deeper meanings in fairy tales will miss out on one of the principal pleasures of reading such tales.", "It is better if children discover fairy tales on their own than for an adult to suggest that they read the tales.", "A child who is unruly will behave better after reading a fairy tale if the tale is suggested to them by another child.", "Most children are too young to comprehend the deeper meanings contained in fairy tales.", "Children should be allowed to enjoy literature that has no instructive purpose." ]
4
The author of the passage would be most likely to agree with which one of the following statements?
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure.
200212_3-RC_2_13
[ "Only those trained in literary interpretation can detect the latent meanings in stories.", "Only adults are psychologically mature enough to find the latent meanings in stories.", "Only one of the various meanings readers may find in a story is truly correct.", "The meanings we see in stories are influenced by the assumptions and expectations we bring to the story.", "The latent meanings a story contains are deliberately placed there by the author." ]
3
Which one of the following principles most likely underlies the author's characterization of literary interpretation?
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure.
200212_3-RC_2_14
[ "the moral instruction children receive from fairy tales is detrimental to their emotional development", "fewer adults are guilty of improper child-rearing than had once been thought", "the need to deny adult evil is a pervasive feature of all modern societies", "the plots of many fairy tales are similar to children's revenge fantasies", "the idea that children are typically selfish and adults innocent is of questionable validity" ]
4
According to the author, recent psychoanalytic literature suggests that
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure.
200212_3-RC_2_15
[ "uninterested in inflexible tenets of moral instruction", "unfairly subjected to the moral beliefs of their parents", "often aware of inappropriate parental behavior", "capable of shedding undesirable personal qualities", "basically playful and carefree" ]
3
It can be inferred from the passage that Bettelheim believes that children are
Fairy tales address themselves to two communities, each with its own interests and each in periodic conflict with the other: parents and children. Nearly every study of fairy tales has taken the perspective of the parent, constructing the meaning of the tales by using the reading strategies of an adult bent on identifying universally valid tenets of moral instruction for children. For example, the plot of "Hansel and Gretel" is set in motion by hard-hearted parents who abandon their children in the woods, but for psychologist Bruno Bettelheim the tale is really about children who learn to give up their unhealthy dependency on their parents. According to Bettelheim, this story—in which the children ultimately overpower a witch who has taken them prisoner for the crime of attempting to eat the witch's gingerbread house—forces its young audience to recognize the dangers of unrestrained greed. As dependent children, Bettelheim argues, Hansel and Gretel had been a burden to their parents, but on their return home with the witch's jewels, they become the family's support. Thus, says Bettelheim, does the story train its young listeners to become "mature children." There are two ways of interpreting a story: one is a "superficial" reading that focuses on the tale's manifest content, and the other is a "deeper" reading that looks for latent meanings. Many adults who read fairy tales are drawn to this second kind of interpretation in order to avoid facing the unpleasant truths that can emerge from the tales when adults—even parents—are portrayed as capable of acting out of selfish motives themselves. What makes fairy tales attractive to Bettelheim and other psychologists is that they can be used as scenarios that position the child as a transgressor whose deserved punishment provides a lesson for unruly children. Stories that run counter to such orthodoxies about child-rearing are, to a large extent, suppressed by Bettelheim or "rewritten" through reinterpretation. Once we examine his interpretations closely, we see that his readings produce meanings that are very different from those constructed by readers with different cultural assumptions and expectations, who, unlike Bettelheim, do not find inflexible tenets of moral instruction in the tales. Bettelheim interprets all fairy tales as driven by children's fantasies of desire and revenge, and in doing so suppresses the true nature of parental behavior ranging from abuse to indulgence. Fortunately, these characterizations of selfish children and innocent adults have been discredited to some extent by recent psychoanalytic literature. The need to deny adult evil has been a pervasive feature of our society, leading us to position children not only as the sole agents of evil but also as the objects of unending moral instruction, hence the idea that a literature targeted for them must stand in the service of pragmatic instrumentality rather than foster an unproductive form of playful pleasure.
200212_3-RC_2_16
[ "The imaginations of children do not draw clear distinctions between inanimate objects and living things.", "Children must learn that their own needs and feelings are to be valued, even when these differ from those of their parents.", "As their minds mature, children tend to experience the world in terms of the dynamics of the family into which they were born.", "The more secure that children feel within the world, the less they need to hold onto infantile notions.", "Children's ability to distinguish between stories and reality is not fully developed until puberty." ]
1
Which one of the following statements is least compatible with Bettelheim's views, as those views are described in the passage?
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting— and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today.
200212_3-RC_3_17
[ "If classical wave theorists had never focused on blackbody radiation, Planck's insights would not have developed and the stage would not have been set for Einstein.", "Classical wave theory, an incorrect formulation of the nature of radiation, was corrected by Planck and other physicists after Planck performed experiments that demonstrated that radiation exists as particles.", "Planck's new model of radiation, though numerically consistent with observed data, was slow to win the support of the scientific community, which was critical of his ideas.", "Prompted by new experimental findings, Planck discarded an assumption of classical wave theory and proposed a picture of radiation that matched experimental results and was further supported by theoretical justification.", "At the turn of the century, Planck and Einstein revolutionized studies in radiation by modifying classical wave theory in response to experimental results that suggested the energy of radiation is less at short wavelengths than at long ones." ]
3
Which one of the following most accurately states the main point of the passage?
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting— and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today.
200212_3-RC_3_18
[ "radio waves", "black velvet or soot", "microscopic particles", "metal surfaces", "radio volume dials" ]
4
Which one of the following does the author use to illustrate the difference between continuous energies and discrete energies?
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting— and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today.
200212_3-RC_3_19
[ "Radiation reflected by and radiation emitted by an object are difficult to distinguish from one another.", "Any object in a dark room is a nearly ideal blackbody object.", "All blackbody objects of comparable size give off radiation at approximately the same wavelengths regardless of the objects' temperatures.", "Any blackbody object whose temperature is difficult to manipulate would be of little use in an experiment.", "Thermal radiation cannot originate from a blackbody object." ]
0
Which one of the following can most clearly be inferred from the description of blackbody objects in the second paragraph?
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting— and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today.
200212_3-RC_3_20
[ "strong admiration for the intuitive leap that led to a restored confidence in wave theory's picture of atomic processes", "mild surprise at the bizarre position Planck took regarding atomic processes", "reasoned skepticism of Planck's lack of scientific justification for his hypothesis", "legitimate concern that the hypothesis would have been abandoned without the further studies of Einstein and others", "scholarly interest in a step that led to a more accurate picture of atomic processes" ]
4
The author's attitude toward Planck's development of a new hypothesis about atomic processes can most aptly be described as
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting— and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today.
200212_3-RC_3_21
[ "What did Planck's hypothesis about atomic processes try to account for?", "What led to the scientific community's acceptance of Planck's ideas?", "Roughly when did the blackbody radiation experiments take place?", "What contributions did Planck make to classical wave theory?", "What type of experiment led Einstein to formulate a theory regarding the composition of radiation?" ]
3
The passage provides information that answers each of the following questions EXCEPT:
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting—and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today.
200212_3-RC_3_22
[ "describe the process by which one theory's assumption was dismantled by a competing theory", "introduce a central assumption of a scientific theory and the experimental evidence that led to the overthrowing of that theory", "explain two competing theories that are based on the same experimental evidence", "describe the process of retesting a theory in light of ambiguous experimental results", "provide the basis for an argument intended to dismiss a new theory" ]
1
The primary function of the first two paragraphs of the passage is to
With the approach of the twentieth century, the classical wave theory of radiation—a widely accepted theory in physics—began to encounter obstacles. This theory held that all electromagnetic radiation—the entire spectrum from gamma and X rays to radio frequencies, including heat and light—exists in the form of waves. One fundamental assumption of wave theory was that as the length of a wave of radiation shortens, its energy increases smoothly—like a volume dial on a radio that adjusts smoothly to any setting—and that any conceivable energy value could thus occur in nature. The major challenge to wave theory was the behavior of thermal radiation, the radiation emitted by an object due to the object's temperature, commonly called "blackbody" radiation because experiments aimed at measuring it require objects, such as black velvet or soot, with little or no reflective capability. Physicists can monitor the radiation coming from a blackbody object and be confident that they are observing its thermal radiation and not simply reflected radiation that has originated elsewhere. Employing the principles of wave theory, physicists originally predicted that blackbody objects radiated much more at short wavelengths, such as ultraviolet, than at long wavelengths. However, physicists using advanced experimental techniques near the turn of the century did not find the predicted amount of radiation at short wavelengths—in fact, they found almost none, a result that became known among wave theorists as the "ultraviolet catastrophe." Max Planck, a classical physicist who had made important contributions to wave theory, developed a hypothesis about atomic processes taking place in a blackbody object that broke with wave theory and accounted for the observed patterns of blackbody radiation. Planck discarded the assumption of radiation's smooth energy continuum and took the then bizarre position that these atomic processes could only involve discrete energies that jump between certain units of value—like a volume dial that "clicks" between incremental settings—and he thereby obtained numbers that perfectly fit the earlier experimental result. This directly opposed wave theory's picture of atomic processes, and the physics community was at first quite critical of Planck's hypothesis, in part because he presented it without physical explanation. Soon thereafter, however, Albert Einstein and other physicists provided theoretical justification for Planck's hypothesis. They found that upon being hit with part of the radiation spectrum, metal surfaces give off energy at values that are discontinuous. Further, they noted a threshold along the spectrum beyond which no energy is emitted by the metal. Einstein theorized, and later found evidence to confirm, that radiation is composed of particles, now called photons, which can be emitted only in discrete units and at certain wavelengths, in accordance with Planck's speculations. So in just a few years, what was considered a catastrophe generated a new vision in physics that led to theories still in place today.
200212_3-RC_3_23
[ "discussing the value of speculation in a scientific discipline", "summarizing the reasons for the rejection of an established theory by the scientific community", "describing the role that experimental research plays in a scientific discipline", "examining a critical stage in the evolution of theories concerning the nature of a physical phenomenon", "comparing the various assumptions that lie at the foundation of a scientific discipline" ]
3
The passage is primarily concerned with
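As a side note for readers of the physics items above, the quantization the passage describes can be written in the standard textbook form below. This is a minimal illustrative gloss, not part of the LSAT passage or its questions; the symbols h (Planck's constant), ν (frequency), n (a nonnegative integer), and φ (a metal's work function) are supplied here as standard physics shorthand and do not appear in the test material.
% Illustrative gloss only; not drawn from the passage text.
% Planck's hypothesis: energy at frequency \nu is exchanged only in
% whole-number multiples of a fixed quantum h\nu (the "clicks" of the
% volume-dial analogy), rather than along a smooth continuum.
\[ E_n = n\,h\,\nu, \qquad n = 0, 1, 2, \ldots \]
% The threshold behavior of metals noted in the final paragraph:
% no energy is emitted unless a single photon's energy h\nu reaches
% the metal's work function \phi.
\[ h\,\nu \ge \phi \]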
The following passage was written in the mid-1990s. Users of the Internet—the worldwide network of interconnected computer systems—envision it as a way for people to have free access to information via their personal computers. Most Internet communication consists of sending electronic mail or exchanging ideas on electronic bulletin boards; however, a growing number of transmissions are of copyrighted works—books, photographs, videos and films, and sound recordings. In Canada, as elsewhere, the goals of Internet users have begun to conflict with reality as copyright holders look for ways to protect their material from unauthorized and uncompensated distribution. Copyright experts say that Canadian copyright law, which was revised in 1987 to cover works such as choreography and photography, has not kept pace with technology—specifically with digitalization, the conversion of data into a series of digits that are transmitted as electronic signals over computer networks. Digitalization makes it possible to create an unlimited number of copies of a book, recording, or movie and distribute them to millions of people around the world. Current law prohibits unauthorized parties from reproducing a work or any substantial part of it in any material form (e.g., photocopies of books or pirated audiotapes), but because digitalization merely transforms the work into electronic signals in a computer's memory, it is not clear whether digitalization constitutes a material reproduction—and so unauthorized digitalization is not yet technically a crime. Some experts propose simply adding unauthorized digitalization to the list of activities proscribed under current law, to make it clear that copyright holders own electronic reproduction rights just as they own rights to other types of reproduction. But criminalizing digitalization raises a host of questions. For example, given that digitalization allows the multiple recipients of a transmission to re-create copies of a work, would only the act of digitalization itself be criminal, or should each copy made from the transmission be considered a separate instance of piracy—even though those who made the copies never had access to the original? In addition, laws against digitalization might be virtually unenforceable given that an estimated 20 million people around the world have access to the Internet, and that copying and distributing material is a relatively simple process. Furthermore, even an expanded law might not cover the majority of transmissions, given the vast numbers of users who are academics and the fact that current copyright law allows generous exemptions for those engaged in private study or research. But even if the law is revised to contain a more sophisticated treatment of digitalization, most experts think it will be hard to resolve the clash between the Internet community, which is accustomed to treating information as raw material available for everyone to use, and the publishing community, which is accustomed to treating it as a commodity owned by its creator.
200212_3-RC_4_24
[ "Despite the widely recognized need to revise Canadian copyright law to protect works from unauthorized reproduction and distribution over the Internet, users of the Internet have mounted many legal challenges to the criminalizing of digitalization.", "Although the necessity of revising Canadian copyright law to protect works from unauthorized reproduction and distribution over the Internet is widely recognized, effective criminalizing of digitalization is likely to prove highly complicated.", "While the unauthorized reproduction and distribution of copyrighted works over the Internet is not yet a crime, legal experts believe it is only a matter of time before Canadian copyright law is amended to prohibit unauthorized digitalization.", "Despite the fact that current Canadian copyright law does not cover digitalization, the unauthorized reproduction and distribution of copyrighted works over the Internet clearly ought to be considered a crime.", "Although legal experts in Canada disagree about the most effective way to punish the unauthorized reproduction and distribution of copyrighted works over the Internet, they nonetheless agree that such digitalization should clearly be a punishable crime." ]
1
Which one of the following most accurately expresses the main point of the passage?
The following passage was written in the mid-1990s. Users of the Internet—the worldwide network of interconnected computer systems—envision it as a way for people to have free access to information via their personal computers. Most Internet communication consists of sending electronic mail or exchanging ideas on electronic bulletin boards; however, a growing number of transmissions are of copyrighted works—books, photographs, videos and films, and sound recordings. In Canada, as elsewhere, the goals of Internet users have begun to conflict with reality as copyright holders look for ways to protect their material from unauthorized and uncompensated distribution. Copyright experts say that Canadian copyright law, which was revised in 1987 to cover works such as choreography and photography, has not kept pace with technology—specifically with digitalization, the conversion of data into a series of digits that are transmitted as electronic signals over computer networks. Digitalization makes it possible to create an unlimited number of copies of a book, recording, or movie and distribute them to millions of people around the world. Current law prohibits unauthorized parties from reproducing a work or any substantial part of it in any material form (e.g., photocopies of books or pirated audiotapes), but because digitalization merely transforms the work into electronic signals in a computer's memory, it is not clear whether digitalization constitutes a material reproduction—and so unauthorized digitalization is not yet technically a crime. Some experts propose simply adding unauthorized digitalization to the list of activities proscribed under current law, to make it clear that copyright holders own electronic reproduction rights just as they own rights to other types of reproduction. But criminalizing digitalization raises a host of questions. For example, given that digitalization allows the multiple recipients of a transmission to re-create copies of a work, would only the act of digitalization itself be criminal, or should each copy made from the transmission be considered a separate instance of piracy—even though those who made the copies never had access to the original? In addition, laws against digitalization might be virtually unenforceable given that an estimated 20 million people around the world have access to the Internet, and that copying and distributing material is a relatively simple process. Furthermore, even an expanded law might not cover the majority of transmissions, given the vast numbers of users who are academics and the fact that current copyright law allows generous exemptions for those engaged in private study or research. But even if the law is revised to contain a more sophisticated treatment of digitalization, most experts think it will be hard to resolve the clash between the Internet community, which is accustomed to treating information as raw material available for everyone to use, and the publishing community, which is accustomed to treating it as a commodity owned by its creator.
200212_3-RC_4_25
[ "Digitalization of copyrighted works is permitted to Internet users who pay a small fee to copyright holders.", "Digitalization of copyrighted works is prohibited to Internet users who are not academics.", "Digitalization of copyrighted works is permitted to all Internet users without restriction.", "Digitalization of copyrighted works is prohibited to all Internet users without exception.", "Digitalization of copyrighted works is permitted to Internet users engaged in research." ]
0
Given the author's argument, which one of the following additions to current Canadian copyright law would most likely be an agreeable compromise to both the Internet community and the publishing community?
The following passage was written in the mid-1990s. Users of the Internet—the worldwide network of interconnected computer systems—envision it as a way for people to have free access to information via their personal computers. Most Internet communication consists of sending electronic mail or exchanging ideas on electronic bulletin boards; however, a growing number of transmissions are of copyrighted works—books, photographs, videos and films, and sound recordings. In Canada, as elsewhere, the goals of Internet users have begun to conflict with reality as copyright holders look for ways to protect their material from unauthorized and uncompensated distribution. Copyright experts say that Canadian copyright law, which was revised in 1987 to cover works such as choreography and photography, has not kept pace with technology—specifically with digitalization, the conversion of data into a series of digits that are transmitted as electronic signals over computer networks. Digitalization makes it possible to create an unlimited number of copies of a book, recording, or movie and distribute them to millions of people around the world. Current law prohibits unauthorized parties from reproducing a work or any substantial part of it in any material form (e.g., photocopies of books or pirated audiotapes), but because digitalization merely transforms the work into electronic signals in a computer's memory, it is not clear whether digitalization constitutes a material reproduction—and so unauthorized digitalization is not yet technically a crime. Some experts propose simply adding unauthorized digitalization to the list of activities proscribed under current law, to make it clear that copyright holders own electronic reproduction rights just as they own rights to other types of reproduction. But criminalizing digitalization raises a host of questions. For example, given that digitalization allows the multiple recipients of a transmission to re-create copies of a work, would only the act of digitalization itself be criminal, or should each copy made from the transmission be considered a separate instance of piracy—even though those who made the copies never had access to the original? In addition, laws against digitalization might be virtually unenforceable given that an estimated 20 million people around the world have access to the Internet, and that copying and distributing material is a relatively simple process. Furthermore, even an expanded law might not cover the majority of transmissions, given the vast numbers of users who are academics and the fact that current copyright law allows generous exemptions for those engaged in private study or research. But even if the law is revised to contain a more sophisticated treatment of digitalization, most experts think it will be hard to resolve the clash between the Internet community, which is accustomed to treating information as raw material available for everyone to use, and the publishing community, which is accustomed to treating it as a commodity owned by its creator.
200212_3-RC_4_26
[ "how copyright infringement of protected works is punished under current Canadian copyright law", "why current Canadian copyright law is not easily applicable to digitalization", "how the Internet has caused copyright holders to look for new forms of legal protection", "why copyright experts propose protecting copyrighted works from unauthorized digitalization", "how unauthorized reproductions of copyrighted works are transmitted over the Internet" ]
1
The discussion in the second paragraph is intended primarily to explain which one of the following?