Columns: uid (string, 4-7 characters), premise (string, 19-9.21k characters), hypothesis (string, 13-488 characters), label (string, 3 classes: entailment, neutral, contradiction).
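Each record below pairs a long premise passage with a short hypothesis and one of the three labels. As a minimal sketch of how rows with this schema might be consumed, the Python snippet below reads a JSON Lines export and tallies the labels; the filename nli_rows.jsonl and the field access by column name are assumptions mirroring the columns listed above, not a reference to an actual published file.

    # Minimal sketch (assumed filename): tally labels in a JSON Lines export whose
    # records follow the uid / premise / hypothesis / label schema described above.
    import json
    from collections import Counter

    label_counts = Counter()

    with open("nli_rows.jsonl", encoding="utf-8") as f:
        for line in f:
            row = json.loads(line)
            # Each record carries an id, a premise passage, a hypothesis sentence,
            # and one of the three label classes.
            label = row["label"]
            if label not in {"entailment", "neutral", "contradiction"}:
                raise ValueError(f"unexpected label {label!r} for uid {row['uid']}")
            label_counts[label] += 1

    print(dict(label_counts))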
id_0
100 Years of the Western Workplace
Conditions in the working environment of Western countries changed significantly over the 20th century. Though not without some associated problems, these changes may be viewed generally as positive: child labour all but ceased, wages rose, the number of working hours in a week decreased, pension policies became standard, fringe benefits multiplied and concerns over health and safety issues were enforced. The collection of data relating to work conditions also became a far more exact science. In particular, there were important developments in methodology and data gathering. Additionally, there was a major expansion of the data collection effort: more people became involved in learning about the workplace; and, for the first time, results started to be published. This being the case, at the end of the century, not only were most workers better off than their early 20th-century predecessors had been, but they were also in a position to understand how and why this was the case. By carefully analyzing the statistical data made available, specific changes in the workplace, not least regarding the concept of what work should involve, became clearly discernible.
The most obvious changes to the workplace involved the size and composition of countries' workforces. Registering only 24 million in 1900 (and including labourers of age ten and up) and 139 million (aged 16 and older) by the century's end, America's workforce, for instance, increased almost six-fold, in line with its overall population growth. At the same time, the composition of the workforce shifted from industries dominated by primary production occupations, such as farmers and foresters, to those dominated by professional, technical and, in particular, service workers. At the beginning of the 20th century, 38% of all American workers were employed on farms; by the end of the same century, that figure had fallen to less than 3%. In Europe, much the same process occurred. In the 1930s, in every European country bar Britain and Belgium, more than 20 per cent of the population worked in agriculture. By the 1980s, however, the farming populations of all developed countries, excluding Eastern Europe, had dropped to ten per cent and often even lower. At the same time, capital-intensive farming using highly mechanized techniques dramatically reduced the numbers needed to farm.
And therein lay the problem. While the workplace became a safer and more productive environment, a world away from the harsh working conditions of our forefathers, the switch from an agricultural to a modern working environment also created massive unemployment in many countries. Fundamental to this problem was the widespread move from the countryside to the city. Having lost their livelihoods, the world's peasant populations amassed in ever larger numbers in already crowded communities, where rates of job growth failed to keep up with internal migration. As a result, thousands were left squatting in shanty towns on the periphery of cities, waiting for jobs that might never arrive. While this was (and is) particularly true of Third World countries, the same phenomenon could also be witnessed in several American, French, English and German cities in the late 20th century.
From a different and more positive perspective, in the 20th century, women became visible and active members of all sectors of the Western workplace. In 1900, only 19% of European women of working age participated in the labour force; by 1999, this figure had risen to 60%. In 1900, only 1% of the country's lawyers and 6% of its physicians were female; by contrast, the figures were 29% and 24% in 1999. A recent survey of French teenagers, both male and female, revealed that over 50% of those polled thought that, in any job (bar those involving military service), women make better employees, as they are less likely to become riled under stress and less overtly competitive than men.
The last and perhaps most significant change to the 20th-century workplace involved the introduction of technology. The list of technological improvements in the workplace is endless: communication and measuring devices, computers of all shapes and sizes, X-rays, lasers, neon lights, stainless steel, and so on. Such improvements led to a more productive, safer work environment. Moreover, the fact that medicine improved so dramatically led to an increase in the average lifespan among Western populations. In turn, workers of very different ages were able to work shoulder to shoulder, and continue in their jobs far longer. By the end of the 20th century, the Western workplace had undergone remarkable changes. In general, both men and women worked fewer hours per day for more years under better conditions. Yet the power of agriculture had waned as farmers and foresters moved to cities to earn greater salaries as analysts and accountants. For those who could not make this transition, however, life at the dawn of the new century seemed less appealing.
Improvements in medicine led to workers earning more over a longer period.
neutral
id_1
In 1900, 19% of North American women of working age participated in the workforce.
contradiction
id_2
The appearance of shanty towns after farmers moved into city areas occurred primarily in the Third World.
entailment
id_3
America and Europe shared the same overall trends in terms of the development of the workplace over the last century.
neutral
id_4
No significant drawbacks accompanied changes in the work environment during the 20th century.
contradiction
id_5
A 13-year-old boy, Gareth Jones, was taken to Downston Police Station on Saturday 11 June on suspicion of shoplifting in a local superstore. Gareth Jones denies all of the charges made against him. It is also known that: Gareth is an orphan. The store security officer has a grudge against Gareth because he is the best friend of his son. Gareth is not shown on video recordings captured by the store's surveillance cameras. Two years ago Gareth was caught stealing police road traffic bollards. The store was extremely busy on Saturday 11 June. Gareth had not been given a receipt for the goods he had bought. Gareth was stopped after he had left the store.
The store's security officer had a motive for accusing Gareth of shoplifting.
entailment
id_6
Gareth ran away on leaving the store.
neutral
id_7
Gareth had bought some goods at the store.
entailment
id_8
This was Gareth's first offence.
contradiction
id_9
The police rang Gareth's mother to tell her where her son was.
contradiction
id_10
A 19-year-old male was found unconscious in his flat on 28 December at 23.00. He was taken immediately to hospital, where his stomach was pumped. Although he regained consciousness, he died shortly afterwards. Neighbours recall seeing various young people going into the flat at all hours of the day and night. Other facts known at this stage are: Peter Graick was a heroin addict. Empty bottles of spirits and paracetamol were found in the flat. Jo Hager supplied Peter with hard drugs. Peter used to frequent the local nightclub. Jo was the father of a two-month-old child. Jo's child was in care and had tested HIV positive. The victim was a compulsive gambler. The victim had bruises on his head.
The neighbours took the victim to hospital.
neutral
id_11
The victim's child was in care.
neutral
id_12
Peter died of an overdose of drugs.
neutral
id_13
Neighbours may have seen Jo Hager enter the flat where the victim was found.
entailment
id_14
The victim was homeless.
contradiction
id_15
A 19-year-old male was found unconscious in his flat on 28 December at 23.00. He was taken immediately to hospital, where his stomach was pumped. Although he regained consciousness, he died shortly afterwards. Neighbours recall seeing various young people going into the flat at all hours of the day and night. The following facts are known: Jake Pratt was a heroin addict. Empty bottles of spirits and paracetamol were found in the flat. Joe Horrocks supplied Jake with hard drugs. Jake used to frequent the local nightclub. Joe was the father of a two-month-old child. Joe's child was in care and had tested HIV positive. The victim was a compulsive gambler. The victim had bruises on his head.
Jake died of an overdose of drugs.
neutral
id_16
The victim's child was in care.
neutral
id_17
Neighbours may have seen Joe enter the flat where the victim was found.
entailment
id_18
The neighbours took the victim to hospital.
neutral
id_19
The victim was homeless.
contradiction
id_20
A 37-year-old woman was hit and badly injured when a sports car suddenly swerved off the road in the small village of Paddly. She was rushed immediately to Crownsby hospital at 12.45 on Wednesday 3 October, where she is now in a stable condition. A reliable witness said there seemed to be no obvious reason for the car to have swerved so suddenly. The car did not stop and raced out of the village before the police could follow. It is also known that: The victim was Jane Scolled. Jane worked for a firm of accountants called Sayerston. The manager, Mr Sayerston, collected old sports cars. Jane had the day off work on Wednesday 3 October. Jane had found copies of letters in the office, which indicated fraudulent behaviour by someone in the firm. Mr Sayerston plays golf every Wednesday afternoon. Jane's father is a renowned barrister in Crownsby. At 13.05 a young cyclist was admitted to Crownsby hospital with severe head injuries after being knocked off his bicycle.
Jane Scolled was an accountant.
neutral
id_21
While on her lunch break from work, Jane Scolled was hit by a car.
contradiction
id_22
Just after Jane's accident, a car had knocked a young cyclist off his bicycle.
neutral
id_23
Mr Sayerston was afraid of Jane's father.
neutral
id_24
Crownsby hospital dealt with at least two road accident victims on Wednesday 3 October.
entailment
id_25
A 72-year-old widow was said to be comfortable but in a state of shock by a hospital spokesperson this morning. Mrs Susan Marsh suffered a head injury during the night when she disturbed an intruder who had broken into her ground-floor flat on the Eastfield estate. This was the second time her home had been broken into in a month. In the first raid, the burglar broke a window and climbed in while Mrs Marsh was out for the evening visiting friends. The police believe that on this occasion the thief was disturbed when a neighbour returned home after walking his dog. It is also known that: Following the first break-in, workmen from the Housing Association boarded up the window to Mrs Marsh's flat. Mrs Marsh disturbed a man in her living room. When she returned home on the occasion of the first break-in, Mrs Marsh found that nothing had been stolen. On the night Mrs Marsh suffered her head injuries, the thief escaped with some valuable items of silverware and a small amount of ready cash.
Although the burglar escaped with a small amount of cash and some silverware, he failed to detect the money she had hidden in her bedroom.
neutral
id_26
The first break-in occurred when Mrs Marsh was out playing bingo and her neighbour was walking his dog.
contradiction
id_27
The intruder entered the flat by the same means on both occasions.
neutral
id_28
After being unsuccessful at the first attempt to rob Mrs Marsh's flat, the burglar broke in a second time.
neutral
id_29
On being wakened during the night, Mrs Marsh went downstairs to investigate what had caused the noise.
contradiction
id_30
A Book Review: Dog Will Have His Day by Fred Vargas
This is another crime thriller from the prize-winning novelist Fred Vargas. Despite the misleading first name and Spanish surname, the author is actually a French woman, Frédérique Audoin-Rouzeau. She adopted her nom de plume from the Ava Gardner character, the Spanish dancer Maria Vargas, in the 1954 film The Barefoot Contessa. Although a writer of crime fiction, Frédérique Audoin-Rouzeau is primarily a medieval historian and archaeologist. Her detective books are immensely popular: over 10 million copies have been sold worldwide and they have been translated into 45 languages. She is a little mystified by her success (after all, it is just a hobby) and finds it quite amusing. As an archaeologist specialising in epidemiology, she produced the definitive study on the transmission of the bubonic plague, a book that, she says, after seven years of intensive work trying to find the real vector of the plague, sold only a thousand copies. It is even more paradoxical to learn that she wrote each of her novels in three weeks flat, during her annual summer holidays. Even when she took a break from archaeology to work full-time on her fiction, the first draft was still finished within the same time frame. She uses the ensuing months to polish and tidy the prose.
Perhaps by virtue of Vargas's archaeological background, Dog Will Have His Day starts with a bone. It turns out to be the top joint of a woman's big toe, found by chance in Paris by Louis Kehlweiler, a former special investigator for the Ministry of Justice. He presents his find to the local police, who decline to do anything about it. However, Louis, convinced that a murder has taken place, decides to focus on finding the body to which the toe belongs. Most of Vargas's characters are eccentrics in some way and Louis is no exception. He carries a toad named Bufo around in his pocket and consults it on matters of importance, although Louis says: 'You have to keep it simple with Bufo, just basic ideas ... He can't cope with anything else. Sometimes I try a bit harder, a bit of philosophy even, to improve his mind ... He was much more stupid when I first got him.' Louis recruits the assistance of Marc Vandoosler, one of the evangelists of a previous novel, The Three Evangelists. The evangelists, actually unemployed historians, share a rundown house and Marc is the medieval researcher among them. Later, another of the evangelists, Mathias, the hunter-gatherer (a prehistoric specialist), joins the undertaking.
As the bone fragment had obviously passed through a dog's digestive system, Louis's first mission is to track down the dog in question. Ringo, a pit bull, is eventually identified as the culprit and his owner is tracked to a tiny Breton fishing village. There, Marc and Louis establish that the corpse of an old woman missing her big toe had been discovered on the beach a few days earlier. The investigation takes in some interesting characters, including a collector of antique typewriters. Although initially suspecting just one murder, Louis, with the help of Marc and Mathias, manages to solve three homicides and unmask a would-be mayoral candidate who is in fact wanted for crimes against humanity dating from the Second World War.
As an aficionado of crime fiction, I find Vargas's prose far from conventional. It is original, enthralling and witty, occasionally whimsical and surreal, but always with a delightful simplicity. The main characters have their little catchphrases, such as 'I could do with a beer' (Louis), which make them endearingly human. She has a cast of quirky provincial characters, expertly portrayed, far removed from the darkly humorous, brutally violent, hard-edged Scandinavian realism which is so widely admired these days. Vargas definitely swims against the tide of realism: there is a lack of elaborate description, no detailed depictions of the meals eaten, clothes worn, music listened to or cars driven. This is enormously refreshing: frankly, how essential is it to know the make of a vehicle or the brand of beer? Unless, of course, it is inextricably linked to the unravelling of the plot. Comparatively speaking, the plot of this book appears at first to be a little on the light side, although her bizarre characters and inventiveness keep the reader well entertained. However, the story suddenly becomes convoluted towards the end and the denouement rapidly ensues, leaving the reader feeling short-changed. It is not as ingenious or inspired as The Three Evangelists (one of her finest novels and a hard act to follow), but the well-judged inclusion of Marc leaves the reader wanting to see more of the other two evangelists. Despite some shortcomings, it is still a brilliant read and I remain a steadfast fan.
The Three Evangelists is Vargas's best-selling novel.
neutral
id_31
Detailed descriptions are only useful to the reader when they develop the storyline.
entailment
id_32
A Book Review Dog Will Have His Day by Fred Vargas This is another crime thriller from the prize-winning novelist Fred Vargas. Despite the misleading first name and Spanish surname, the author is actually a French woman Frederique Audoin-Rouzeau. She adopted her nom de plume from the Ava Gardner character, the Spanish dancer Maria Vargas, in the 1954 film The Barefoot Contessa. Although a writer of crime fiction, Frederique Audoin-Rouzeau is primarily a medieval historian and archaeologist. Her detective books are immensely popular: over 10 million copies have been sold worldwide and they have been translated into 45 languages. She is a little mystified by her success after all, it is just a hobby and finds it quite amusing. As an archaeologist specialising in epidemiology, she produced the definitive study on the transmission of the bubonic plague a book that she says after seven years of intensive work trying to find the real vector of the plague sold only a thousand copies. It is even more paradoxical to learn that she wrote each of her novels in three weeks flat, during her annual summer holidays. Even when she took a break from archaeology to work full-time on her fiction, the first draft was still finished within the same time frame. She uses the ensuing months to polish and tidy the prose. Perhaps by virtue of Vargas archaeological background, Dog Will Have His Day starts with a bone. It turns out to be the top joint of a womans big toe, found by chance in Paris by Louis Kehlweiler, a former special investigator for the Ministry of Justice. He presents his find to the local police, who decline to do anything about it. However, Louis, convinced that a murder has taken place, decides to focus on finding the body to which the toe belongs. Most of Vargas characters are eccentrics in some way and Louis is no exception. He carries a toad named Bufo around in his pocket and consults it on matters of importance, although Louis says: You have to keep it simple with Bufo, just basic ideas ... He cant cope with anything else. Sometimes I try a bit harder, a bit of philosophy even, to improve his mind... He was much more stupid when I first got him. Louis recruits the assistance of Marc Vandoosler, one of the evangelists of a previous novel The Three Evangelists. The evangelists, actually unemployed historians, share a rundown house and Marc is the medieval researcher among them. Later, another of the evangelists, Mathias, the hunter-gatherer (a prehistoric specialist), joins the undertaking. As the bone fragment had obviously passed through a dogs digestive system, Louis first mission is to track down the dog in question. Ringo, a pit bull, is eventually identified as the culprit and his owner is tracked to a tiny Breton fishing village. There, Marc and Louis establish that the corpse of an old woman missing her big toe had been discovered on the beach a few days earlier. The investigation takes in some interesting characters, including a collector of antique typewriters. Although initially suspecting just one murder, Louis, with the help of Marc and Mathias, manages to solve three homicides and unmask a would-be mayoral candidate who is in fact wanted for crimes against humanity dating from the Second World War. As an aficionado of crime fiction, I find Vargas prose far from conventional. It is original, enthralling and witty, occasionally whimsical and surreal, but always with a delightful simplicity. 
The main characters have their little catchphrases such as I could do with a beer (Louis) which makes them endearingly human. She has a cast of quirky provincial characters expertly portrayed; far removed from the darkly humorous, brutally violent, hard-edged Scandinavian realism which is so widely admired these days. Vargas definitely swims against the tide of realism there is a lack of elaborate description no detailed depictions of the meals eaten, clothes worn, music listened to or cars driven. This is enormously refreshing: frankly, how essential is it to know the make of a vehicle or the brand of beer? Unless, of course, it is inextricably linked to the unravelling of the plot. Comparatively speaking, the plot of this book appears at first to be a little on the light side although her bizarre characters and inventiveness keep the reader well entertained. However, the story suddenly becomes convoluted towards the end and the denouement rapidly ensues, leaving the reader feeling short-changed. It is not as ingenious or inspired as The Three Evangelists one of her finest novels and a hard act to follow but the well-judged inclusion of Marc leaves the reader wanting to see more of the other two evangelists. Despite some shortcomings, it is still a brilliant read and I remain a steadfast fan.
Vargas style of writing is typical of crime fiction.
contradiction
id_33
A Book Review Dog Will Have His Day by Fred Vargas This is another crime thriller from the prize-winning novelist Fred Vargas. Despite the misleading first name and Spanish surname, the author is actually a French woman Frederique Audoin-Rouzeau. She adopted her nom de plume from the Ava Gardner character, the Spanish dancer Maria Vargas, in the 1954 film The Barefoot Contessa. Although a writer of crime fiction, Frederique Audoin-Rouzeau is primarily a medieval historian and archaeologist. Her detective books are immensely popular: over 10 million copies have been sold worldwide and they have been translated into 45 languages. She is a little mystified by her success after all, it is just a hobby and finds it quite amusing. As an archaeologist specialising in epidemiology, she produced the definitive study on the transmission of the bubonic plague a book that she says after seven years of intensive work trying to find the real vector of the plague sold only a thousand copies. It is even more paradoxical to learn that she wrote each of her novels in three weeks flat, during her annual summer holidays. Even when she took a break from archaeology to work full-time on her fiction, the first draft was still finished within the same time frame. She uses the ensuing months to polish and tidy the prose. Perhaps by virtue of Vargas archaeological background, Dog Will Have His Day starts with a bone. It turns out to be the top joint of a womans big toe, found by chance in Paris by Louis Kehlweiler, a former special investigator for the Ministry of Justice. He presents his find to the local police, who decline to do anything about it. However, Louis, convinced that a murder has taken place, decides to focus on finding the body to which the toe belongs. Most of Vargas characters are eccentrics in some way and Louis is no exception. He carries a toad named Bufo around in his pocket and consults it on matters of importance, although Louis says: You have to keep it simple with Bufo, just basic ideas ... He cant cope with anything else. Sometimes I try a bit harder, a bit of philosophy even, to improve his mind... He was much more stupid when I first got him. Louis recruits the assistance of Marc Vandoosler, one of the evangelists of a previous novel The Three Evangelists. The evangelists, actually unemployed historians, share a rundown house and Marc is the medieval researcher among them. Later, another of the evangelists, Mathias, the hunter-gatherer (a prehistoric specialist), joins the undertaking. As the bone fragment had obviously passed through a dogs digestive system, Louis first mission is to track down the dog in question. Ringo, a pit bull, is eventually identified as the culprit and his owner is tracked to a tiny Breton fishing village. There, Marc and Louis establish that the corpse of an old woman missing her big toe had been discovered on the beach a few days earlier. The investigation takes in some interesting characters, including a collector of antique typewriters. Although initially suspecting just one murder, Louis, with the help of Marc and Mathias, manages to solve three homicides and unmask a would-be mayoral candidate who is in fact wanted for crimes against humanity dating from the Second World War. As an aficionado of crime fiction, I find Vargas prose far from conventional. It is original, enthralling and witty, occasionally whimsical and surreal, but always with a delightful simplicity. 
The main characters have their little catchphrases such as I could do with a beer (Louis) which makes them endearingly human. She has a cast of quirky provincial characters expertly portrayed; far removed from the darkly humorous, brutally violent, hard-edged Scandinavian realism which is so widely admired these days. Vargas definitely swims against the tide of realism there is a lack of elaborate description no detailed depictions of the meals eaten, clothes worn, music listened to or cars driven. This is enormously refreshing: frankly, how essential is it to know the make of a vehicle or the brand of beer? Unless, of course, it is inextricably linked to the unravelling of the plot. Comparatively speaking, the plot of this book appears at first to be a little on the light side although her bizarre characters and inventiveness keep the reader well entertained. However, the story suddenly becomes convoluted towards the end and the denouement rapidly ensues, leaving the reader feeling short-changed. It is not as ingenious or inspired as The Three Evangelists one of her finest novels and a hard act to follow but the well-judged inclusion of Marc leaves the reader wanting to see more of the other two evangelists. Despite some shortcomings, it is still a brilliant read and I remain a steadfast fan.
The style has much in common with Scandinavian crime novels.
contradiction
id_34
A British surgeon has invented a new device that kills pain without the use of drugs. The gadget, which aims to reduce knee pain and the need for operations, is said to block the pain signal as the spinal cord is unable to carry both the pain and the vibration at the same time. This technique of using vibration to block pain signals is not new: it first appeared in the American Civil War before being re-examined in the 1960s and eventually reaching the market in 2009. The device, which is powered by AAA batteries, is the first such product to be widely available for knee pain.
In the American Civil War, the technology to specifically kill knee pain by the use of vibrations was first invented.
contradiction
id_35
A Coal is expected to continue to account for almost 27 per cent of the worlds energy needs. However, with growing international awareness of pressures on the environment and the need to achieve sustainable development of energy resources, the way in which the resource is extracted, transported and used is critical. A wide range of pollution control devices and practices is in place at most modern mines and significant resources are spent on rehabilitating mined land. In addition, major research and development programmes are being devoted to lifting efficiencies and reducing emissions of greenhouse gases during coal consumption. Such measures are helping coal to maintain its status as a major supplier of the worlds energy needs. The coal industry has been targeted by its critics as a significant contributor to the greenhouse effect. However, the greenhouse effect is a natural phenomenon involving the increase in global surface temperature due to the presence of greenhouse gases - water vapour, carbon dioxide, tropospheric ozone, methane and nitrous oxide - in the atmosphere. Without the greenhouse effect, the earths average surface temperature would be 33-35 degrees C lower, or -15 degrees C. Life on earth, as we know it today, would not be possible. There is concern that this natural phenomenon is being altered by a greater build-up of gases from human activity, perhaps giving rise to additional warming and changes in the earths climate. This additional build-up and its forecast outcome has been called the enhanced greenhouse effect. Considerable uncertainty exists, however, about the enhanced greenhouse effect, particularly in relation to the extent and timing of any future increases in global temperature. Greenhouse gases arise from a wide range of sources and their increasing concentration is largely related to the compound effects of increased population, improved living standards and changes in lifestyle. From a current base of 5 billion, the United Nations predicts that the global population may stabilise in the twenty-first century between 8 and 14 billion, with more than 90 per cent of the projected increase taking place in the worlds developing nations. The associated activities to support that growth, particularly to produce the required energy and food, will cause further increases in greenhouse gas emissions. The challenge, therefore, is to attain a sustainable balance between population, economic growth and the environment. The major greenhouse gas emissions from human activities are carbon dioxide (CO2), methane and nitrous oxide. Chlorofluorocarbons (CFCs) are the only major contributor to the greenhouse effect that does not occur naturally, coming from such sources as refrigeration, plastics and manufacture. Coals total contribution to greenhouse gas emissions is thought to be about 18 per cent, with about half of this coming from electricity generation. The world-wide coal industry allocates extensive resources to researching and developing new technologies and ways of capturing greenhouse gases. Efficiencies are likely to be improved dramatically, and hence CO2 emissions reduced, through combustion and gasification techniques which are now at pilot and demonstration stages. Clean coal is another avenue for improving fuel conversion efficiency. Investigations are under way into superclean coal (3-5 per cent ash) and ultraclean coal (less than 1 per cent ash). 
Superclean coal has the potential to enhance the combustion efficiency of conventional pulverised fuel power plants. Ultraclean coal will enable coal to be used in advanced power systems such as coal-fired gas turbines which, when operated in combined cycle, have the potential to achieve much greater efficiencies. Defendants of mining point out that, environmentally, coal mining has two important factors in its favour. It makes only temporary use of the land and produces no toxic chemical wastes. By carefully pre-planning projects, implementing pollution control measures, monitoring the effects of mining and rehabilitating mined areas, the coal industry minimises the impact on the neighbouring community, the immediate environment and long-term land capability. Dust levels are controlled by spraying roads and stockpiles, and water pollution is controlled by carefully separating clean water runoff from runoff which contains sediments or salt from mine workings. The latter is treated and re-used for dust suppression. Noise is controlled by modifying equipment and by using insulation and sound enclosures around machinery. Since mining activities represent only a temporary use of the land, extensive rehabilitation measures are adopted to ensure that land capability after mining meets agreed and appropriate standards which, in some cases, are superior to the lands pre-mining condition. Where the mining is underground, the surface area can be simultaneously used for forests, cattle grazing and crop raising, or even reservoirs and urban development, with little or no disruption to the existing land use. In all cases, mining is subject to stringent controls and approvals processes. In open-cut operations, however, the land is used exclusively for mining but land rehabilitation measures generally progress with the mines development. As core samples are extracted to assess the quality and quantity of coal at a site, they are also analysed to assess the ability of the soil or subsoil material to support vegetation. Topsoils are stripped and stockpiled prior to mining for subsequent dispersal over rehabilitated areas. As mining ceases in one section of the open-cut, the disturbed area is reshaped. Drainage within and off the site is carefully designed to make the new land surface as stable as the local environment allows: often dams are built to protect the area from soil erosion and to serve as permanent sources of water. Based on the soil requirements, the land is suitably fertilised and revegetated.
The greatest threats to the environment are the gases produced by industries which support the high standard of living of a growing world population.
entailment
id_36
A Coal is expected to continue to account for almost 27 per cent of the worlds energy needs. However, with growing international awareness of pressures on the environment and the need to achieve sustainable development of energy resources, the way in which the resource is extracted, transported and used is critical. A wide range of pollution control devices and practices is in place at most modern mines and significant resources are spent on rehabilitating mined land. In addition, major research and development programmes are being devoted to lifting efficiencies and reducing emissions of greenhouse gases during coal consumption. Such measures are helping coal to maintain its status as a major supplier of the worlds energy needs. The coal industry has been targeted by its critics as a significant contributor to the greenhouse effect. However, the greenhouse effect is a natural phenomenon involving the increase in global surface temperature due to the presence of greenhouse gases - water vapour, carbon dioxide, tropospheric ozone, methane and nitrous oxide - in the atmosphere. Without the greenhouse effect, the earths average surface temperature would be 33-35 degrees C lower, or -15 degrees C. Life on earth, as we know it today, would not be possible. There is concern that this natural phenomenon is being altered by a greater build-up of gases from human activity, perhaps giving rise to additional warming and changes in the earths climate. This additional build-up and its forecast outcome has been called the enhanced greenhouse effect. Considerable uncertainty exists, however, about the enhanced greenhouse effect, particularly in relation to the extent and timing of any future increases in global temperature. Greenhouse gases arise from a wide range of sources and their increasing concentration is largely related to the compound effects of increased population, improved living standards and changes in lifestyle. From a current base of 5 billion, the United Nations predicts that the global population may stabilise in the twenty-first century between 8 and 14 billion, with more than 90 per cent of the projected increase taking place in the worlds developing nations. The associated activities to support that growth, particularly to produce the required energy and food, will cause further increases in greenhouse gas emissions. The challenge, therefore, is to attain a sustainable balance between population, economic growth and the environment. The major greenhouse gas emissions from human activities are carbon dioxide (CO2), methane and nitrous oxide. Chlorofluorocarbons (CFCs) are the only major contributor to the greenhouse effect that does not occur naturally, coming from such sources as refrigeration, plastics and manufacture. Coals total contribution to greenhouse gas emissions is thought to be about 18 per cent, with about half of this coming from electricity generation. The world-wide coal industry allocates extensive resources to researching and developing new technologies and ways of capturing greenhouse gases. Efficiencies are likely to be improved dramatically, and hence CO2 emissions reduced, through combustion and gasification techniques which are now at pilot and demonstration stages. Clean coal is another avenue for improving fuel conversion efficiency. Investigations are under way into superclean coal (3-5 per cent ash) and ultraclean coal (less than 1 per cent ash). 
Superclean coal has the potential to enhance the combustion efficiency of conventional pulverised fuel power plants. Ultraclean coal will enable coal to be used in advanced power systems such as coal-fired gas turbines which, when operated in combined cycle, have the potential to achieve much greater efficiencies. Defendants of mining point out that, environmentally, coal mining has two important factors in its favour. It makes only temporary use of the land and produces no toxic chemical wastes. By carefully pre-planning projects, implementing pollution control measures, monitoring the effects of mining and rehabilitating mined areas, the coal industry minimises the impact on the neighbouring community, the immediate environment and long-term land capability. Dust levels are controlled by spraying roads and stockpiles, and water pollution is controlled by carefully separating clean water runoff from runoff which contains sediments or salt from mine workings. The latter is treated and re-used for dust suppression. Noise is controlled by modifying equipment and by using insulation and sound enclosures around machinery. Since mining activities represent only a temporary use of the land, extensive rehabilitation measures are adopted to ensure that land capability after mining meets agreed and appropriate standards which, in some cases, are superior to the lands pre-mining condition. Where the mining is underground, the surface area can be simultaneously used for forests, cattle grazing and crop raising, or even reservoirs and urban development, with little or no disruption to the existing land use. In all cases, mining is subject to stringent controls and approvals processes. In open-cut operations, however, the land is used exclusively for mining but land rehabilitation measures generally progress with the mines development. As core samples are extracted to assess the quality and quantity of coal at a site, they are also analysed to assess the ability of the soil or subsoil material to support vegetation. Topsoils are stripped and stockpiled prior to mining for subsequent dispersal over rehabilitated areas. As mining ceases in one section of the open-cut, the disturbed area is reshaped. Drainage within and off the site is carefully designed to make the new land surface as stable as the local environment allows: often dams are built to protect the area from soil erosion and to serve as permanent sources of water. Based on the soil requirements, the land is suitably fertilised and revegetated.
The coal industry should be abandoned in favour of alternative energy sources because of the environmental damage it causes.
contradiction
id_37
A Coal is expected to continue to account for almost 27 per cent of the worlds energy needs. However, with growing international awareness of pressures on the environment and the need to achieve sustainable development of energy resources, the way in which the resource is extracted, transported and used is critical. A wide range of pollution control devices and practices is in place at most modern mines and significant resources are spent on rehabilitating mined land. In addition, major research and development programmes are being devoted to lifting efficiencies and reducing emissions of greenhouse gases during coal consumption. Such measures are helping coal to maintain its status as a major supplier of the worlds energy needs. The coal industry has been targeted by its critics as a significant contributor to the greenhouse effect. However, the greenhouse effect is a natural phenomenon involving the increase in global surface temperature due to the presence of greenhouse gases - water vapour, carbon dioxide, tropospheric ozone, methane and nitrous oxide - in the atmosphere. Without the greenhouse effect, the earths average surface temperature would be 33-35 degrees C lower, or -15 degrees C. Life on earth, as we know it today, would not be possible. There is concern that this natural phenomenon is being altered by a greater build-up of gases from human activity, perhaps giving rise to additional warming and changes in the earths climate. This additional build-up and its forecast outcome has been called the enhanced greenhouse effect. Considerable uncertainty exists, however, about the enhanced greenhouse effect, particularly in relation to the extent and timing of any future increases in global temperature. Greenhouse gases arise from a wide range of sources and their increasing concentration is largely related to the compound effects of increased population, improved living standards and changes in lifestyle. From a current base of 5 billion, the United Nations predicts that the global population may stabilise in the twenty-first century between 8 and 14 billion, with more than 90 per cent of the projected increase taking place in the worlds developing nations. The associated activities to support that growth, particularly to produce the required energy and food, will cause further increases in greenhouse gas emissions. The challenge, therefore, is to attain a sustainable balance between population, economic growth and the environment. The major greenhouse gas emissions from human activities are carbon dioxide (CO2), methane and nitrous oxide. Chlorofluorocarbons (CFCs) are the only major contributor to the greenhouse effect that does not occur naturally, coming from such sources as refrigeration, plastics and manufacture. Coals total contribution to greenhouse gas emissions is thought to be about 18 per cent, with about half of this coming from electricity generation. The world-wide coal industry allocates extensive resources to researching and developing new technologies and ways of capturing greenhouse gases. Efficiencies are likely to be improved dramatically, and hence CO2 emissions reduced, through combustion and gasification techniques which are now at pilot and demonstration stages. Clean coal is another avenue for improving fuel conversion efficiency. Investigations are under way into superclean coal (3-5 per cent ash) and ultraclean coal (less than 1 per cent ash). 
Superclean coal has the potential to enhance the combustion efficiency of conventional pulverised fuel power plants. Ultraclean coal will enable coal to be used in advanced power systems such as coal-fired gas turbines which, when operated in combined cycle, have the potential to achieve much greater efficiencies. Defendants of mining point out that, environmentally, coal mining has two important factors in its favour. It makes only temporary use of the land and produces no toxic chemical wastes. By carefully pre-planning projects, implementing pollution control measures, monitoring the effects of mining and rehabilitating mined areas, the coal industry minimises the impact on the neighbouring community, the immediate environment and long-term land capability. Dust levels are controlled by spraying roads and stockpiles, and water pollution is controlled by carefully separating clean water runoff from runoff which contains sediments or salt from mine workings. The latter is treated and re-used for dust suppression. Noise is controlled by modifying equipment and by using insulation and sound enclosures around machinery. Since mining activities represent only a temporary use of the land, extensive rehabilitation measures are adopted to ensure that land capability after mining meets agreed and appropriate standards which, in some cases, are superior to the lands pre-mining condition. Where the mining is underground, the surface area can be simultaneously used for forests, cattle grazing and crop raising, or even reservoirs and urban development, with little or no disruption to the existing land use. In all cases, mining is subject to stringent controls and approvals processes. In open-cut operations, however, the land is used exclusively for mining but land rehabilitation measures generally progress with the mines development. As core samples are extracted to assess the quality and quantity of coal at a site, they are also analysed to assess the ability of the soil or subsoil material to support vegetation. Topsoils are stripped and stockpiled prior to mining for subsequent dispersal over rehabilitated areas. As mining ceases in one section of the open-cut, the disturbed area is reshaped. Drainage within and off the site is carefully designed to make the new land surface as stable as the local environment allows: often dams are built to protect the area from soil erosion and to serve as permanent sources of water. Based on the soil requirements, the land is suitably fertilised and revegetated.
CFC emissions have been substantially reduced in recent years.
neutral
id_38
A Coal is expected to continue to account for almost 27 per cent of the worlds energy needs. However, with growing international awareness of pressures on the environment and the need to achieve sustainable development of energy resources, the way in which the resource is extracted, transported and used is critical. A wide range of pollution control devices and practices is in place at most modern mines and significant resources are spent on rehabilitating mined land. In addition, major research and development programmes are being devoted to lifting efficiencies and reducing emissions of greenhouse gases during coal consumption. Such measures are helping coal to maintain its status as a major supplier of the worlds energy needs. The coal industry has been targeted by its critics as a significant contributor to the greenhouse effect. However, the greenhouse effect is a natural phenomenon involving the increase in global surface temperature due to the presence of greenhouse gases - water vapour, carbon dioxide, tropospheric ozone, methane and nitrous oxide - in the atmosphere. Without the greenhouse effect, the earths average surface temperature would be 33-35 degrees C lower, or -15 degrees C. Life on earth, as we know it today, would not be possible. There is concern that this natural phenomenon is being altered by a greater build-up of gases from human activity, perhaps giving rise to additional warming and changes in the earths climate. This additional build-up and its forecast outcome has been called the enhanced greenhouse effect. Considerable uncertainty exists, however, about the enhanced greenhouse effect, particularly in relation to the extent and timing of any future increases in global temperature. Greenhouse gases arise from a wide range of sources and their increasing concentration is largely related to the compound effects of increased population, improved living standards and changes in lifestyle. From a current base of 5 billion, the United Nations predicts that the global population may stabilise in the twenty-first century between 8 and 14 billion, with more than 90 per cent of the projected increase taking place in the worlds developing nations. The associated activities to support that growth, particularly to produce the required energy and food, will cause further increases in greenhouse gas emissions. The challenge, therefore, is to attain a sustainable balance between population, economic growth and the environment. The major greenhouse gas emissions from human activities are carbon dioxide (CO2), methane and nitrous oxide. Chlorofluorocarbons (CFCs) are the only major contributor to the greenhouse effect that does not occur naturally, coming from such sources as refrigeration, plastics and manufacture. Coals total contribution to greenhouse gas emissions is thought to be about 18 per cent, with about half of this coming from electricity generation. The world-wide coal industry allocates extensive resources to researching and developing new technologies and ways of capturing greenhouse gases. Efficiencies are likely to be improved dramatically, and hence CO2 emissions reduced, through combustion and gasification techniques which are now at pilot and demonstration stages. Clean coal is another avenue for improving fuel conversion efficiency. Investigations are under way into superclean coal (3-5 per cent ash) and ultraclean coal (less than 1 per cent ash). 
Superclean coal has the potential to enhance the combustion efficiency of conventional pulverised fuel power plants. Ultraclean coal will enable coal to be used in advanced power systems such as coal-fired gas turbines which, when operated in combined cycle, have the potential to achieve much greater efficiencies. Defendants of mining point out that, environmentally, coal mining has two important factors in its favour. It makes only temporary use of the land and produces no toxic chemical wastes. By carefully pre-planning projects, implementing pollution control measures, monitoring the effects of mining and rehabilitating mined areas, the coal industry minimises the impact on the neighbouring community, the immediate environment and long-term land capability. Dust levels are controlled by spraying roads and stockpiles, and water pollution is controlled by carefully separating clean water runoff from runoff which contains sediments or salt from mine workings. The latter is treated and re-used for dust suppression. Noise is controlled by modifying equipment and by using insulation and sound enclosures around machinery. Since mining activities represent only a temporary use of the land, extensive rehabilitation measures are adopted to ensure that land capability after mining meets agreed and appropriate standards which, in some cases, are superior to the lands pre-mining condition. Where the mining is underground, the surface area can be simultaneously used for forests, cattle grazing and crop raising, or even reservoirs and urban development, with little or no disruption to the existing land use. In all cases, mining is subject to stringent controls and approvals processes. In open-cut operations, however, the land is used exclusively for mining but land rehabilitation measures generally progress with the mines development. As core samples are extracted to assess the quality and quantity of coal at a site, they are also analysed to assess the ability of the soil or subsoil material to support vegetation. Topsoils are stripped and stockpiled prior to mining for subsequent dispersal over rehabilitated areas. As mining ceases in one section of the open-cut, the disturbed area is reshaped. Drainage within and off the site is carefully designed to make the new land surface as stable as the local environment allows: often dams are built to protect the area from soil erosion and to serve as permanent sources of water. Based on the soil requirements, the land is suitably fertilised and revegetated.
World population in the twenty-first century will probably exceed 8 billion.
entailment
id_39
A Disaster of Titanic Proportions At 11:39 p.m. on the evening of Sunday, 14 April 1912, lookouts Frederick Fleet and Reginald Lee on the forward mast of the Titanic sighted an eerie, black mass coming into view directly in front of the ship. Fleet picked up the phone to the helm, waited for Sixth Officer Moody to answer, and yelled Iceberg, right ahead! The greatest disaster in maritime history was about to be set in motion. Thirty-seven seconds later, despite the efforts of officers in the bridge and engine room to steer around the iceberg, the Titanic struck a piece of submerged ice, bursting rivets in the ships hull and flooding the first five watertight compartments. The ships designer, Thomas Andrews, carried out a visual inspection of the ships damage and informed Captain Smith at midnight that the ship would sink in less than two hours. By 12:30 a.m., the lifeboats were being filled with women and children, after Smith had given the command for them to be uncovered and swung out 15 minutes earlier. The first lifeboat was successfully lowered 15 minutes later, with only 28 of its 65 seats occupied. By 1:15 a.m., the waterline was beginning to reach the Titanics name on the ships bow, and over the next hour, every lifeboat would be released as officers struggled to maintain order amongst the growing panic on board. The closing moments of the Titanics sinking began shortly after 2 a.m., as the last lifeboat was lowered and the ships propellers lifted out of the water, leaving the 1,500 passengers still on board to surge towards the stern. At 2:17 a.m., Harold Bride and Jack Philips tapped out their last wireless message after being relieved of duty as the ships wireless operators, and the ships band stopped playing. Less than a minute later, occupants of the lifeboats witnessed the ships lights flash once, then go black, and a huge roar signalled the Titanics contents plunging towards the bow, causing the front half of the ship to break off and go under. The Titanics stern bobbed up momentarily, and at 2:20 a.m., the ship finally disappeared beneath the frigid waters. What or who was responsible for the scale of this catastrophe? Explanations abound, some of which focus on very small details. Due to a last-minute change in the ships officer line-up, iceberg lookouts Frederick Fleet and Reginald Lee were making do without a pair of binoculars that an officer transferred off the ship in Southampton had left in a cupboard onboard, unbeknownst to any of the ships crew. Fleet, who survived the sinking, insisted at a subsequent inquiry that he could have identified the iceberg in time to avert disaster if he had been in possession of the binoculars. Less than an hour before the Titanic struck the iceberg, wireless operator Cyril Evans on the California, located just 20 miles to the north, tried to contact operator Jack Philips on the Titanic to warn him of pack ice in the area. Shut up, shut up, youre jamming my signal, Philips replied. Im busy. The Titanics wireless system had broken down for several hours earlier that day, and Philips was clearing a backlog of personal messages that passengers had requested to be sent to family and friends in the USA. Nevertheless, Captain Smith had maintained the ships speed of 22 knots despite multiple earlier warnings of ice ahead. 
It has been suggested that Smith was under pressure to make headlines by arriving early in New York, but maritime historians such as Richard Howell have countered this perception, noting that Smith was simply following common procedure at the time, and not behaving recklessly. One of the strongest explanations for the severe loss of life has been the fact that the Titanic did not carry enough lifeboats for everyone on board. Maritime regulations at the time tied lifeboat capacity to the ship size, not to the number of passengers on board. This meant that the Titanic, with room for 1,178 of its 2,222 passengers, actually surpassed the Board of Trades requirement that it carry lifeboats for 1,060 of its passengers. Nevertheless, with lifeboats being lowered less than half full in many cases, and only 712 passengers surviving despite a two-and-a-half-hour window of opportunity, more lifeboats would not have guaranteed more survivors in the absence of better training and preparation. Many passengers were confused about where to go after the order to launch lifeboats was given; a lifeboat drill scheduled for earlier on the same day that the Titanic struck the iceberg was cancelled by Captain Smith in order to allow passengers to attend church.
Philips missed notification about the ice from Evans because the Titanics wireless system was not functioning at the time.
contradiction
id_40
A Disaster of Titanic Proportions At 11:39 p.m. on the evening of Sunday, 14 April 1912, lookouts Frederick Fleet and Reginald Lee on the forward mast of the Titanic sighted an eerie, black mass coming into view directly in front of the ship. Fleet picked up the phone to the helm, waited for Sixth Officer Moody to answer, and yelled Iceberg, right ahead! The greatest disaster in maritime history was about to be set in motion. Thirty-seven seconds later, despite the efforts of officers in the bridge and engine room to steer around the iceberg, the Titanic struck a piece of submerged ice, bursting rivets in the ships hull and flooding the first five watertight compartments. The ships designer, Thomas Andrews, carried out a visual inspection of the ships damage and informed Captain Smith at midnight that the ship would sink in less than two hours. By 12:30 a.m., the lifeboats were being filled with women and children, after Smith had given the command for them to be uncovered and swung out 15 minutes earlier. The first lifeboat was successfully lowered 15 minutes later, with only 28 of its 65 seats occupied. By 1:15 a.m., the waterline was beginning to reach the Titanics name on the ships bow, and over the next hour, every lifeboat would be released as officers struggled to maintain order amongst the growing panic on board. The closing moments of the Titanics sinking began shortly after 2 a.m., as the last lifeboat was lowered and the ships propellers lifted out of the water, leaving the 1,500 passengers still on board to surge towards the stern. At 2:17 a.m., Harold Bride and Jack Philips tapped out their last wireless message after being relieved of duty as the ships wireless operators, and the ships band stopped playing. Less than a minute later, occupants of the lifeboats witnessed the ships lights flash once, then go black, and a huge roar signalled the Titanics contents plunging towards the bow, causing the front half of the ship to break off and go under. The Titanics stern bobbed up momentarily, and at 2:20 a.m., the ship finally disappeared beneath the frigid waters. What or who was responsible for the scale of this catastrophe? Explanations abound, some of which focus on very small details. Due to a last-minute change in the ships officer line-up, iceberg lookouts Frederick Fleet and Reginald Lee were making do without a pair of binoculars that an officer transferred off the ship in Southampton had left in a cupboard onboard, unbeknownst to any of the ships crew. Fleet, who survived the sinking, insisted at a subsequent inquiry that he could have identified the iceberg in time to avert disaster if he had been in possession of the binoculars. Less than an hour before the Titanic struck the iceberg, wireless operator Cyril Evans on the California, located just 20 miles to the north, tried to contact operator Jack Philips on the Titanic to warn him of pack ice in the area. Shut up, shut up, youre jamming my signal, Philips replied. Im busy. The Titanics wireless system had broken down for several hours earlier that day, and Philips was clearing a backlog of personal messages that passengers had requested to be sent to family and friends in the USA. Nevertheless, Captain Smith had maintained the ships speed of 22 knots despite multiple earlier warnings of ice ahead. 
It has been suggested that Smith was under pressure to make headlines by arriving early in New York, but maritime historians such as Richard Howell have countered this perception, noting that Smith was simply following common procedure at the time, and not behaving recklessly. One of the strongest explanations for the severe loss of life has been the fact that the Titanic did not carry enough lifeboats for everyone on board. Maritime regulations at the time tied lifeboat capacity to the ship size, not to the number of passengers on board. This meant that the Titanic, with room for 1,178 of its 2,222 passengers, actually surpassed the Board of Trades requirement that it carry lifeboats for 1,060 of its passengers. Nevertheless, with lifeboats being lowered less than half full in many cases, and only 712 passengers surviving despite a two-and-a-half-hour window of opportunity, more lifeboats would not have guaranteed more survivors in the absence of better training and preparation. Many passengers were confused about where to go after the order to launch lifeboats was given; a lifeboat drill scheduled for earlier on the same day that the Titanic struck the iceberg was cancelled by Captain Smith in order to allow passengers to attend church.
Howell believed the captains failure to reduce speed was an irresponsible action.
contradiction
id_41
A Disaster of Titanic Proportions At 11:39 p.m. on the evening of Sunday, 14 April 1912, lookouts Frederick Fleet and Reginald Lee on the forward mast of the Titanic sighted an eerie, black mass coming into view directly in front of the ship. Fleet picked up the phone to the helm, waited for Sixth Officer Moody to answer, and yelled Iceberg, right ahead! The greatest disaster in maritime history was about to be set in motion. Thirty-seven seconds later, despite the efforts of officers in the bridge and engine room to steer around the iceberg, the Titanic struck a piece of submerged ice, bursting rivets in the ships hull and flooding the first five watertight compartments. The ships designer, Thomas Andrews, carried out a visual inspection of the ships damage and informed Captain Smith at midnight that the ship would sink in less than two hours. By 12:30 a.m., the lifeboats were being filled with women and children, after Smith had given the command for them to be uncovered and swung out 15 minutes earlier. The first lifeboat was successfully lowered 15 minutes later, with only 28 of its 65 seats occupied. By 1:15 a.m., the waterline was beginning to reach the Titanics name on the ships bow, and over the next hour, every lifeboat would be released as officers struggled to maintain order amongst the growing panic on board. The closing moments of the Titanics sinking began shortly after 2 a.m., as the last lifeboat was lowered and the ships propellers lifted out of the water, leaving the 1,500 passengers still on board to surge towards the stern. At 2:17 a.m., Harold Bride and Jack Philips tapped out their last wireless message after being relieved of duty as the ships wireless operators, and the ships band stopped playing. Less than a minute later, occupants of the lifeboats witnessed the ships lights flash once, then go black, and a huge roar signalled the Titanics contents plunging towards the bow, causing the front half of the ship to break off and go under. The Titanics stern bobbed up momentarily, and at 2:20 a.m., the ship finally disappeared beneath the frigid waters. What or who was responsible for the scale of this catastrophe? Explanations abound, some of which focus on very small details. Due to a last-minute change in the ships officer line-up, iceberg lookouts Frederick Fleet and Reginald Lee were making do without a pair of binoculars that an officer transferred off the ship in Southampton had left in a cupboard onboard, unbeknownst to any of the ships crew. Fleet, who survived the sinking, insisted at a subsequent inquiry that he could have identified the iceberg in time to avert disaster if he had been in possession of the binoculars. Less than an hour before the Titanic struck the iceberg, wireless operator Cyril Evans on the California, located just 20 miles to the north, tried to contact operator Jack Philips on the Titanic to warn him of pack ice in the area. Shut up, shut up, youre jamming my signal, Philips replied. Im busy. The Titanics wireless system had broken down for several hours earlier that day, and Philips was clearing a backlog of personal messages that passengers had requested to be sent to family and friends in the USA. Nevertheless, Captain Smith had maintained the ships speed of 22 knots despite multiple earlier warnings of ice ahead. 
It has been suggested that Smith was under pressure to make headlines by arriving early in New York, but maritime historians such as Richard Howell have countered this perception, noting that Smith was simply following common procedure at the time, and not behaving recklessly. One of the strongest explanations for the severe loss of life has been the fact that the Titanic did not carry enough lifeboats for everyone on board. Maritime regulations at the time tied lifeboat capacity to the ship size, not to the number of passengers on board. This meant that the Titanic, with room for 1,178 of its 2,222 passengers, actually surpassed the Board of Trades requirement that it carry lifeboats for 1,060 of its passengers. Nevertheless, with lifeboats being lowered less than half full in many cases, and only 712 passengers surviving despite a two-and-a-half-hour window of opportunity, more lifeboats would not have guaranteed more survivors in the absence of better training and preparation. Many passengers were confused about where to go after the order to launch lifeboats was given; a lifeboat drill scheduled for earlier on the same day that the Titanic struck the iceberg was cancelled by Captain Smith in order to allow passengers to attend church.
A lifeboat drill would have saved more lives.
neutral
id_42
A Disaster of Titanic Proportions At 11:39 p.m. on the evening of Sunday, 14 April 1912, lookouts Frederick Fleet and Reginald Lee on the forward mast of the Titanic sighted an eerie, black mass coming into view directly in front of the ship. Fleet picked up the phone to the helm, waited for Sixth Officer Moody to answer, and yelled Iceberg, right ahead! The greatest disaster in maritime history was about to be set in motion. Thirty-seven seconds later, despite the efforts of officers in the bridge and engine room to steer around the iceberg, the Titanic struck a piece of submerged ice, bursting rivets in the ships hull and flooding the first five watertight compartments. The ships designer, Thomas Andrews, carried out a visual inspection of the ships damage and informed Captain Smith at midnight that the ship would sink in less than two hours. By 12:30 a.m., the lifeboats were being filled with women and children, after Smith had given the command for them to be uncovered and swung out 15 minutes earlier. The first lifeboat was successfully lowered 15 minutes later, with only 28 of its 65 seats occupied. By 1:15 a.m., the waterline was beginning to reach the Titanics name on the ships bow, and over the next hour, every lifeboat would be released as officers struggled to maintain order amongst the growing panic on board. The closing moments of the Titanics sinking began shortly after 2 a.m., as the last lifeboat was lowered and the ships propellers lifted out of the water, leaving the 1,500 passengers still on board to surge towards the stern. At 2:17 a.m., Harold Bride and Jack Philips tapped out their last wireless message after being relieved of duty as the ships wireless operators, and the ships band stopped playing. Less than a minute later, occupants of the lifeboats witnessed the ships lights flash once, then go black, and a huge roar signalled the Titanics contents plunging towards the bow, causing the front half of the ship to break off and go under. The Titanics stern bobbed up momentarily, and at 2:20 a.m., the ship finally disappeared beneath the frigid waters. What or who was responsible for the scale of this catastrophe? Explanations abound, some of which focus on very small details. Due to a last-minute change in the ships officer line-up, iceberg lookouts Frederick Fleet and Reginald Lee were making do without a pair of binoculars that an officer transferred off the ship in Southampton had left in a cupboard onboard, unbeknownst to any of the ships crew. Fleet, who survived the sinking, insisted at a subsequent inquiry that he could have identified the iceberg in time to avert disaster if he had been in possession of the binoculars. Less than an hour before the Titanic struck the iceberg, wireless operator Cyril Evans on the California, located just 20 miles to the north, tried to contact operator Jack Philips on the Titanic to warn him of pack ice in the area. Shut up, shut up, youre jamming my signal, Philips replied. Im busy. The Titanics wireless system had broken down for several hours earlier that day, and Philips was clearing a backlog of personal messages that passengers had requested to be sent to family and friends in the USA. Nevertheless, Captain Smith had maintained the ships speed of 22 knots despite multiple earlier warnings of ice ahead. 
It has been suggested that Smith was under pressure to make headlines by arriving early in New York, but maritime historians such as Richard Howell have countered this perception, noting that Smith was simply following common procedure at the time, and not behaving recklessly. One of the strongest explanations for the severe loss of life has been the fact that the Titanic did not carry enough lifeboats for everyone on board. Maritime regulations at the time tied lifeboat capacity to the ship size, not to the number of passengers on board. This meant that the Titanic, with room for 1,178 of its 2,222 passengers, actually surpassed the Board of Trades requirement that it carry lifeboats for 1,060 of its passengers. Nevertheless, with lifeboats being lowered less than half full in many cases, and only 712 passengers surviving despite a two-and-a-half-hour window of opportunity, more lifeboats would not have guaranteed more survivors in the absence of better training and preparation. Many passengers were confused about where to go after the order to launch lifeboats was given; a lifeboat drill scheduled for earlier on the same day that the Titanic struck the iceberg was cancelled by Captain Smith in order to allow passengers to attend church.
Captain Smith knew there was ice in the area.
entailment
id_43
A Disaster of Titanic Proportions At 11:39 p.m. on the evening of Sunday, 14 April 1912, lookouts Frederick Fleet and Reginald Lee on the forward mast of the Titanic sighted an eerie, black mass coming into view directly in front of the ship. Fleet picked up the phone to the helm, waited for Sixth Officer Moody to answer, and yelled Iceberg, right ahead! The greatest disaster in maritime history was about to be set in motion. Thirty-seven seconds later, despite the efforts of officers in the bridge and engine room to steer around the iceberg, the Titanic struck a piece of submerged ice, bursting rivets in the ships hull and flooding the first five watertight compartments. The ships designer, Thomas Andrews, carried out a visual inspection of the ships damage and informed Captain Smith at midnight that the ship would sink in less than two hours. By 12:30 a.m., the lifeboats were being filled with women and children, after Smith had given the command for them to be uncovered and swung out 15 minutes earlier. The first lifeboat was successfully lowered 15 minutes later, with only 28 of its 65 seats occupied. By 1:15 a.m., the waterline was beginning to reach the Titanics name on the ships bow, and over the next hour, every lifeboat would be released as officers struggled to maintain order amongst the growing panic on board. The closing moments of the Titanics sinking began shortly after 2 a.m., as the last lifeboat was lowered and the ships propellers lifted out of the water, leaving the 1,500 passengers still on board to surge towards the stern. At 2:17 a.m., Harold Bride and Jack Philips tapped out their last wireless message after being relieved of duty as the ships wireless operators, and the ships band stopped playing. Less than a minute later, occupants of the lifeboats witnessed the ships lights flash once, then go black, and a huge roar signalled the Titanics contents plunging towards the bow, causing the front half of the ship to break off and go under. The Titanics stern bobbed up momentarily, and at 2:20 a.m., the ship finally disappeared beneath the frigid waters. What or who was responsible for the scale of this catastrophe? Explanations abound, some of which focus on very small details. Due to a last-minute change in the ships officer line-up, iceberg lookouts Frederick Fleet and Reginald Lee were making do without a pair of binoculars that an officer transferred off the ship in Southampton had left in a cupboard onboard, unbeknownst to any of the ships crew. Fleet, who survived the sinking, insisted at a subsequent inquiry that he could have identified the iceberg in time to avert disaster if he had been in possession of the binoculars. Less than an hour before the Titanic struck the iceberg, wireless operator Cyril Evans on the California, located just 20 miles to the north, tried to contact operator Jack Philips on the Titanic to warn him of pack ice in the area. Shut up, shut up, youre jamming my signal, Philips replied. Im busy. The Titanics wireless system had broken down for several hours earlier that day, and Philips was clearing a backlog of personal messages that passengers had requested to be sent to family and friends in the USA. Nevertheless, Captain Smith had maintained the ships speed of 22 knots despite multiple earlier warnings of ice ahead. 
It has been suggested that Smith was under pressure to make headlines by arriving early in New York, but maritime historians such as Richard Howell have countered this perception, noting that Smith was simply following common procedure at the time, and not behaving recklessly. One of the strongest explanations for the severe loss of life has been the fact that the Titanic did not carry enough lifeboats for everyone on board. Maritime regulations at the time tied lifeboat capacity to the ship size, not to the number of passengers on board. This meant that the Titanic, with room for 1,178 of its 2,222 passengers, actually surpassed the Board of Trades requirement that it carry lifeboats for 1,060 of its passengers. Nevertheless, with lifeboats being lowered less than half full in many cases, and only 712 passengers surviving despite a two-and-a-half-hour window of opportunity, more lifeboats would not have guaranteed more survivors in the absence of better training and preparation. Many passengers were confused about where to go after the order to launch lifeboats was given; a lifeboat drill scheduled for earlier on the same day that the Titanic struck the iceberg was cancelled by Captain Smith in order to allow passengers to attend church.
The Titanic was able to seat more passengers in lifeboats than the Board of Trade required.
entailment
id_44
A Disaster of Titanic Proportions At 11:39 p.m. on the evening of Sunday, 14 April 1912, lookouts Frederick Fleet and Reginald Lee on the forward mast of the Titanic sighted an eerie, black mass coming into view directly in front of the ship. Fleet picked up the phone to the helm, waited for Sixth Officer Moody to answer, and yelled Iceberg, right ahead! The greatest disaster in maritime history was about to be set in motion. Thirty-seven seconds later, despite the efforts of officers in the bridge and engine room to steer around the iceberg, the Titanic struck a piece of submerged ice, bursting rivets in the ships hull and flooding the first five watertight compartments. The ships designer, Thomas Andrews, carried out a visual inspection of the ships damage and informed Captain Smith at midnight that the ship would sink in less than two hours. By 12:30 a.m., the lifeboats were being filled with women and children, after Smith had given the command for them to be uncovered and swung out 15 minutes earlier. The first lifeboat was successfully lowered 15 minutes later, with only 28 of its 65 seats occupied. By 1:15 a.m., the waterline was beginning to reach the Titanics name on the ships bow, and over the next hour, every lifeboat would be released as officers struggled to maintain order amongst the growing panic on board. The closing moments of the Titanics sinking began shortly after 2 a.m., as the last lifeboat was lowered and the ships propellers lifted out of the water, leaving the 1,500 passengers still on board to surge towards the stern. At 2:17 a.m., Harold Bride and Jack Philips tapped out their last wireless message after being relieved of duty as the ships wireless operators, and the ships band stopped playing. Less than a minute later, occupants of the lifeboats witnessed the ships lights flash once, then go black, and a huge roar signalled the Titanics contents plunging towards the bow, causing the front half of the ship to break off and go under. The Titanics stern bobbed up momentarily, and at 2:20 a.m., the ship finally disappeared beneath the frigid waters. What or who was responsible for the scale of this catastrophe? Explanations abound, some that focus on very small details. Due to a last-minute change in the ships officer line-up, iceberg lookouts Frederick Fleet and Reginald Lee were making do without a pair of binoculars that an officer transferred off the ship in Southampton had left in a cupboard onboard, unbeknownst to any of the ships crew. Fleet, who survived the sinking, insisted at a subsequent inquiry that he could have identified the iceberg in time to avert disaster if he had been in possession of the binoculars. Less than an hour before the Titanic struck the iceberg, wireless operator Cyril Evans on the California, located just 20 miles to the north, tried to contact operator Jack Philips on the Titanic to warn him of pack ice in the area. Shut up, shut up, youre jamming my signal, Philips replied. Im busy. The Titanics wireless system had broken down for several hours earlier that day, and Philips was clearing a backlog of personal messages that passengers had requested to be sent to family and friends in the USA. Nevertheless, Captain Smith had maintained the ships speed of 22 knots despite multiple earlier warnings of ice ahead.
It has been suggested that Smith was under pressure to make headlines by arriving early in New York, but maritime historians such as Richard Howell have countered this perception, noting that Smith was simply following common procedure at the time, and not behaving recklessly. One of the strongest explanations for the severe loss of life has been the fact that the Titanic did not carry enough lifeboats for everyone on board. Maritime regulations at the time tied lifeboat capacity to the ship size, not to the number of passengers on board. This meant that the Titanic, with room for 1,178 of its 2,222 passengers, actually surpassed the Board of Trades requirement that it carry lifeboats for 1,060 of its passengers. Nevertheless, with lifeboats being lowered less than half full in many cases, and only 712 passengers surviving despite a two-and-a-half-hour window of opportunity, more lifeboats would not have guaranteed more survivors in the absence of better training and preparation. Many passengers were confused about where to go after the order to launch lifeboats was given; a lifeboat drill scheduled for earlier on the same day that the Titanic struck the iceberg was cancelled by Captain Smith in order to allow passengers to attend church.
The binoculars for the men on watch had been left in a crew locker in Southampton.
contradiction
id_45
A Disaster of Titanic Proportions At 11:39 p.m. on the evening of Sunday, 14 April 1912, lookouts Frederick Fleet and Reginald Lee on the forward mast of the Titanic sighted an eerie, black mass coming into view directly in front of the ship. Fleet picked up the phone to the helm, waited for Sixth Officer Moody to answer, and yelled Iceberg, right ahead! The greatest disaster in maritime history was about to be set in motion. Thirty-seven seconds later, despite the efforts of officers in the bridge and engine room to steer around the iceberg, the Titanic struck a piece of submerged ice, bursting rivets in the ships hull and flooding the first five watertight compartments. The ships designer, Thomas Andrews, carried out a visual inspection of the ships damage and informed Captain Smith at midnight that the ship would sink in less than two hours. By 12:30 a.m., the lifeboats were being filled with women and children, after Smith had given the command for them to be uncovered and swung out 15 minutes earlier. The first lifeboat was successfully lowered 15 minutes later, with only 28 of its 65 seats occupied. By 1:15 a.m., the waterline was beginning to reach the Titanics name on the ships bow, and over the next hour, every lifeboat would be released as officers struggled to maintain order amongst the growing panic on board. The closing moments of the Titanics sinking began shortly after 2 a.m., as the last lifeboat was lowered and the ships propellers lifted out of the water, leaving the 1,500 passengers still on board to surge towards the stern. At 2:17 a.m., Harold Bride and Jack Philips tapped out their last wireless message after being relieved of duty as the ships wireless operators, and the ships band stopped playing. Less than a minute later, occupants of the lifeboats witnessed the ships lights flash once, then go black, and a huge roar signalled the Titanics contents plunging towards the bow, causing the front half of the ship to break off and go under. The Titanics stern bobbed up momentarily, and at 2:20 a.m., the ship finally disappeared beneath the frigid waters. What or who was responsible for the scale of this catastrophe? Explanations abound, some that focus on very small details. Due to a last-minute change in the ships officer line-up, iceberg lookouts Frederick Fleet and Reginald Lee were making do without a pair of binoculars that an officer transferred off the ship in Southampton had left in a cupboard onboard, unbeknownst to any of the ships crew. Fleet, who survived the sinking, insisted at a subsequent inquiry that he could have identified the iceberg in time to avert disaster if he had been in possession of the binoculars. Less than an hour before the Titanic struck the iceberg, wireless operator Cyril Evans on the California, located just 20 miles to the north, tried to contact operator Jack Philips on the Titanic to warn him of pack ice in the area. Shut up, shut up, youre jamming my signal, Philips replied. Im busy. The Titanics wireless system had broken down for several hours earlier that day, and Philips was clearing a backlog of personal messages that passengers had requested to be sent to family and friends in the USA. Nevertheless, Captain Smith had maintained the ships speed of 22 knots despite multiple earlier warnings of ice ahead.
It has been suggested that Smith was under pressure to make headlines by arriving early in New York, but maritime historians such as Richard Howell have countered this perception, noting that Smith was simply following common procedure at the time, and not behaving recklessly. One of the strongest explanations for the severe loss of life has been the fact that the Titanic did not carry enough lifeboats for everyone on board. Maritime regulations at the time tied lifeboat capacity to the ship size, not to the number of passengers on board. This meant that the Titanic, with room for 1,178 of its 2,222 passengers, actually surpassed the Board of Trades requirement that it carry lifeboats for 1,060 of its passengers. Nevertheless, with lifeboats being lowered less than half full in many cases, and only 712 passengers surviving despite a two-and-a-half-hour window of opportunity, more lifeboats would not have guaranteed more survivors in the absence of better training and preparation. Many passengers were confused about where to go after the order to launch lifeboats was given; a lifeboat drill scheduled for earlier on the same day that the Titanic struck the iceberg was cancelled by Captain Smith in order to allow passengers to attend church.
The missing binoculars were the major factor leading to the collision with the iceberg.
neutral
id_46
A European spacecraft took off today to spearhead the search for another "Earth" among the stars. The Corot space telescope blasted off aboard a Russian Soyuz rocket from the Baikonur cosmodrome in Kazakhstan shortly after 2.20pm. Corot, short for convection rotation and planetary transits, is the first instrument capable of finding small rocky planets beyond the solar system. Any such planet situated in the right orbit stands a good chance of having liquid water on its surface, and quite possibly life, although a leading scientist involved in the project said it was unlikely to find "any little green men". Developed by the French space agency, CNES, and partnered by the European Space Agency (ESA), Austria, Belgium, Germany, Brazil and Spain, Corot will monitor around 120,000 stars with its 27cm telescope from a polar orbit 514 miles above the Earth. Over two and a half years, it will focus on five to six different areas of the sky, measuring the brightness of about 10,000 stars every 512 seconds. "At the present moment we are hoping to find out more about the nature of planets around stars which are potential habitats. We are looking at habitable planets, not inhabited planets. We are not going to find any little green men, " Professor Ian Roxburgh, an ESA scientist who has been involved with Corot since its inception, told the BBC Radio 4 Today programme. Prof Roxburgh said it was hoped Corot would find "rocky planets that could develop an atmosphere and, if they are the right distance from their parent star, they could have water". To search for planets, the telescope will look for the dimming of starlight caused when an object passes in front of a star, known as a "transit". Although it will take more sophisticated space telescopes planned in the next 10 years to confirm the presence of an Earth-like planet with oxygen and liquid water, Corot will let scientists know where to point their lenses. Measurements of minute changes in brightness will enable scientists to detect giant Jupiter-like gas planets as well as small rocky ones. It is the rocky planets - that could be no bigger than about twice the size of the Earth - which will cause the most excitement. Scientists expect to find between 10 and 40 of these smaller planets. Corot will also probe into stellar interiors by studying the acoustic waves that ripple across the surface of stars, a technique called "asteroseismology". The nature of the ripples allows astronomers to calculate a stars precise mass, age and chemical composition. "A planet passing in front of a star can be detected by the fall in light from that star. Small oscillations of the star also produce changes in the light emitted, which reveal what the star is made of and how they are structured internally. This data will provide a major boost to our understanding of how stars form and evolve, " Prof Roxburgh said. Since the discovery in 1995 of the first "exoplanet" - a planet orbiting a star other than the Sun - more than 200 others have been found by ground-based observatories. Until now the usual method of finding exoplanets has been to detect the "wobble" their gravity imparts on parent stars. But only giant gaseous planets bigger than Jupiter can be found this way, and they are unlikely to harbour life. In the 2010s, ESA plans to launch Darwin, a fleet of four or five interlinked space telescopes that will not only spot small rocky planets, but analyse their atmospheres for signs of biological activity. 
At around the same time, the US space agency, Nasa, will launch Terrestrial Planet Finder, another space telescope designed to locate Earth-like planets.
Corot can tell whether there is another Earth-like planet.
contradiction
id_47
A European spacecraft took off today to spearhead the search for another "Earth" among the stars. The Corot space telescope blasted off aboard a Russian Soyuz rocket from the Baikonur cosmodrome in Kazakhstan shortly after 2.20pm. Corot, short for convection rotation and planetary transits, is the first instrument capable of finding small rocky planets beyond the solar system. Any such planet situated in the right orbit stands a good chance of having liquid water on its surface, and quite possibly life, although a leading scientist involved in the project said it was unlikely to find "any little green men". Developed by the French space agency, CNES, and partnered by the European Space Agency (ESA), Austria, Belgium, Germany, Brazil and Spain, Corot will monitor around 120,000 stars with its 27cm telescope from a polar orbit 514 miles above the Earth. Over two and a half years, it will focus on five to six different areas of the sky, measuring the brightness of about 10,000 stars every 512 seconds. "At the present moment we are hoping to find out more about the nature of planets around stars which are potential habitats. We are looking at habitable planets, not inhabited planets. We are not going to find any little green men, " Professor Ian Roxburgh, an ESA scientist who has been involved with Corot since its inception, told the BBC Radio 4 Today programme. Prof Roxburgh said it was hoped Corot would find "rocky planets that could develop an atmosphere and, if they are the right distance from their parent star, they could have water". To search for planets, the telescope will look for the dimming of starlight caused when an object passes in front of a star, known as a "transit". Although it will take more sophisticated space telescopes planned in the next 10 years to confirm the presence of an Earth-like planet with oxygen and liquid water, Corot will let scientists know where to point their lenses. Measurements of minute changes in brightness will enable scientists to detect giant Jupiter-like gas planets as well as small rocky ones. It is the rocky planets - that could be no bigger than about twice the size of the Earth - which will cause the most excitement. Scientists expect to find between 10 and 40 of these smaller planets. Corot will also probe into stellar interiors by studying the acoustic waves that ripple across the surface of stars, a technique called "asteroseismology". The nature of the ripples allows astronomers to calculate a stars precise mass, age and chemical composition. "A planet passing in front of a star can be detected by the fall in light from that star. Small oscillations of the star also produce changes in the light emitted, which reveal what the star is made of and how they are structured internally. This data will provide a major boost to our understanding of how stars form and evolve, " Prof Roxburgh said. Since the discovery in 1995 of the first "exoplanet" - a planet orbiting a star other than the Sun - more than 200 others have been found by ground-based observatories. Until now the usual method of finding exoplanets has been to detect the "wobble" their gravity imparts on parent stars. But only giant gaseous planets bigger than Jupiter can be found this way, and they are unlikely to harbour life. In the 2010s, ESA plans to launch Darwin, a fleet of four or five interlinked space telescopes that will not only spot small rocky planets, but analyse their atmospheres for signs of biological activity. 
At around the same time, the US space agency, Nasa, will launch Terrestrial Planet Finder, another space telescope designed to locate Earth-like planets.
Passing objects might cause a fall in light.
entailment
id_48
A European spacecraft took off today to spearhead the search for another "Earth" among the stars. The Corot space telescope blasted off aboard a Russian Soyuz rocket from the Baikonur cosmodrome in Kazakhstan shortly after 2.20pm. Corot, short for convection rotation and planetary transits, is the first instrument capable of finding small rocky planets beyond the solar system. Any such planet situated in the right orbit stands a good chance of having liquid water on its surface, and quite possibly life, although a leading scientist involved in the project said it was unlikely to find "any little green men". Developed by the French space agency, CNES, and partnered by the European Space Agency (ESA), Austria, Belgium, Germany, Brazil and Spain, Corot will monitor around 120,000 stars with its 27cm telescope from a polar orbit 514 miles above the Earth. Over two and a half years, it will focus on five to six different areas of the sky, measuring the brightness of about 10,000 stars every 512 seconds. "At the present moment we are hoping to find out more about the nature of planets around stars which are potential habitats. We are looking at habitable planets, not inhabited planets. We are not going to find any little green men, " Professor Ian Roxburgh, an ESA scientist who has been involved with Corot since its inception, told the BBC Radio 4 Today programme. Prof Roxburgh said it was hoped Corot would find "rocky planets that could develop an atmosphere and, if they are the right distance from their parent star, they could have water". To search for planets, the telescope will look for the dimming of starlight caused when an object passes in front of a star, known as a "transit". Although it will take more sophisticated space telescopes planned in the next 10 years to confirm the presence of an Earth-like planet with oxygen and liquid water, Corot will let scientists know where to point their lenses. Measurements of minute changes in brightness will enable scientists to detect giant Jupiter-like gas planets as well as small rocky ones. It is the rocky planets - that could be no bigger than about twice the size of the Earth - which will cause the most excitement. Scientists expect to find between 10 and 40 of these smaller planets. Corot will also probe into stellar interiors by studying the acoustic waves that ripple across the surface of stars, a technique called "asteroseismology". The nature of the ripples allows astronomers to calculate a stars precise mass, age and chemical composition. "A planet passing in front of a star can be detected by the fall in light from that star. Small oscillations of the star also produce changes in the light emitted, which reveal what the star is made of and how they are structured internally. This data will provide a major boost to our understanding of how stars form and evolve, " Prof Roxburgh said. Since the discovery in 1995 of the first "exoplanet" - a planet orbiting a star other than the Sun - more than 200 others have been found by ground-based observatories. Until now the usual method of finding exoplanets has been to detect the "wobble" their gravity imparts on parent stars. But only giant gaseous planets bigger than Jupiter can be found this way, and they are unlikely to harbour life. In the 2010s, ESA plans to launch Darwin, a fleet of four or five interlinked space telescopes that will not only spot small rocky planets, but analyse their atmospheres for signs of biological activity. 
At around the same time, the US space agency, Nasa, will launch Terrestrial Planet Finder, another space telescope designed to locate Earth-like planets.
BBC Radio 4 has recently focused its broadcasting on Corot.
neutral
id_49
A European spacecraft took off today to spearhead the search for another "Earth" among the stars. The Corot space telescope blasted off aboard a Russian Soyuz rocket from the Baikonur cosmodrome in Kazakhstan shortly after 2.20pm. Corot, short for convection rotation and planetary transits, is the first instrument capable of finding small rocky planets beyond the solar system. Any such planet situated in the right orbit stands a good chance of having liquid water on its surface, and quite possibly life, although a leading scientist involved in the project said it was unlikely to find "any little green men". Developed by the French space agency, CNES, and partnered by the European Space Agency (ESA), Austria, Belgium, Germany, Brazil and Spain, Corot will monitor around 120,000 stars with its 27cm telescope from a polar orbit 514 miles above the Earth. Over two and a half years, it will focus on five to six different areas of the sky, measuring the brightness of about 10,000 stars every 512 seconds. "At the present moment we are hoping to find out more about the nature of planets around stars which are potential habitats. We are looking at habitable planets, not inhabited planets. We are not going to find any little green men, " Professor Ian Roxburgh, an ESA scientist who has been involved with Corot since its inception, told the BBC Radio 4 Today programme. Prof Roxburgh said it was hoped Corot would find "rocky planets that could develop an atmosphere and, if they are the right distance from their parent star, they could have water". To search for planets, the telescope will look for the dimming of starlight caused when an object passes in front of a star, known as a "transit". Although it will take more sophisticated space telescopes planned in the next 10 years to confirm the presence of an Earth-like planet with oxygen and liquid water, Corot will let scientists know where to point their lenses. Measurements of minute changes in brightness will enable scientists to detect giant Jupiter-like gas planets as well as small rocky ones. It is the rocky planets - that could be no bigger than about twice the size of the Earth - which will cause the most excitement. Scientists expect to find between 10 and 40 of these smaller planets. Corot will also probe into stellar interiors by studying the acoustic waves that ripple across the surface of stars, a technique called "asteroseismology". The nature of the ripples allows astronomers to calculate a stars precise mass, age and chemical composition. "A planet passing in front of a star can be detected by the fall in light from that star. Small oscillations of the star also produce changes in the light emitted, which reveal what the star is made of and how they are structured internally. This data will provide a major boost to our understanding of how stars form and evolve, " Prof Roxburgh said. Since the discovery in 1995 of the first "exoplanet" - a planet orbiting a star other than the Sun - more than 200 others have been found by ground-based observatories. Until now the usual method of finding exoplanets has been to detect the "wobble" their gravity imparts on parent stars. But only giant gaseous planets bigger than Jupiter can be found this way, and they are unlikely to harbour life. In the 2010s, ESA plans to launch Darwin, a fleet of four or five interlinked space telescopes that will not only spot small rocky planets, but analyse their atmospheres for signs of biological activity. 
At around the same time, the US space agency, Nasa, will launch Terrestrial Planet Finder, another space telescope designed to locate Earth-like planets.
Scientists are trying to find out about the planets that can be inhabited.
entailment
id_50
A History of Bread Although bread is not a staple food in all countries around the world, it is in many, and in others it is of great importance. As an example, the UK bakery market is worth £3.6 billion annually and is one of the largest markets in the food industry. Total volume at present is just under 4 billion units, the equivalent of almost 11 million loaves and packs sold every single day. There are three principal sectors that make up the UK baking industry. The larger baking companies produce around 80% of bread sold in the UK. In-store bakeries within supermarkets produce about 17% and high street retail craft bakers produce the rest. In contrast to the UK, craft bakeries still dominate the market in many mainland European countries. This allows genuine craftspeople to keep alive and indeed develop skills that have been passed on for thousands of years. Recent evidence indicates that humans processed and consumed wild cereal grains as far back as 23,000 years ago. Archaeologists have discovered simple stone mechanisms that were used for smashing and grinding various cereals to remove the inedible outer husks and to make the resulting grains into palatable and versatile food. As humans evolved, they mixed the resulting cracked and ground grains with water to create a variety of foods, from thin gruel to a stiffer porridge. By simply leaving the paste to dry out in the sun, a bread-like crust would be formed. This early bread was particularly successful when wild yeast from the air combined with the flour and water. The early Egyptians were curious about the bread rising and attempted to isolate the yeast, so that they could introduce it directly into their bread. Bakers experimented with leavened doughs and through these experiments Egyptians were the first to uncover the secret of yeast usage. Hence, the future of bread was assured. As travellers took bread making techniques and moved out from Egyptian lands, the art began spreading to all parts of Europe. A key civilisation was the Romans, who took their advanced bread techniques with them around Europe. The Romans preferred whiter bread, which was possible with the milling processes that they had refined. This led to white bread being perceived as the most valuable bread of them all, a preference that seems to have stuck with many people. The Romans also invented the first mechanical dough-mixer, powered by horses and donkeys. Both simple, yet elusive, the art of controlling the various ingredients and developing the skills required to turn grain and water into palatable bread gave status to individuals and societies for thousands of years. The use of barley and wheat led man to live in communities and made the trade of baker one of the oldest crafts in the world. Before the Industrial Revolution, millers used windmills and watermills, depending on their locations, to turn the machinery that would grind wheat to flour. The Industrial Revolution really moved the process of bread making forwards. The first commercially successful engine did not appear until 1712, but it wasnt until the invention of the Boulton and Watt steam engine in 1786 that the process was advanced and refined. The first mill in London using the steam engines was so large and efficient that in one year it could produce more flour than the rest of the mills in London put together. In conjunction with steam power, a Swiss engineer in 1874 invented a new type of mill. He designed rollers made of steel that operated one above the other.
It was called the reduction roller milling system, and these machines soon became accepted all over Europe. Since Egyptian times, yeast has been an essential part of bread making around the world, but yeast was not really understood properly until the 19th century. It was only with the invention of the microscope, followed by the pioneering scientific work of Louis Pasteur in the late 1860s, that yeast was identified as a living organism and agent responsible for dough leavening. Shortly following these discoveries, it became possible to isolate yeast in pure culture form. With this newfound knowledge the stage was set for commercial production of bakers yeast and this began around the turn of the 20th century. Since that time, bakers, scientists and yeast manufacturers have been working to find and produce pure strains of yeast that meet the exacting and specified needs of the baking industry. The basics of any bread dough are flour, water and of course yeast. As soon as these ingredients are stirred together, enzymes in the yeast and the flour cause large starch molecules to break down into simple sugars. The yeast metabolises these simple sugars and exudes a liquid that releases carbon dioxide into the doughs minute cells. As more and more tiny cells are filled, the dough rises and leavened bread is the result.
Pasteurs work in the 19th century allowed bread to be manufactured more cheaply.
neutral
id_51
A History of Bread Although bread is not a staple food in all countries around the world, it is in many, and in others it is of great importance. As an example, the UK bakery market is worth £3.6 billion annually and is one of the largest markets in the food industry. Total volume at present is just under 4 billion units, the equivalent of almost 11 million loaves and packs sold every single day. There are three principal sectors that make up the UK baking industry. The larger baking companies produce around 80% of bread sold in the UK. In-store bakeries within supermarkets produce about 17% and high street retail craft bakers produce the rest. In contrast to the UK, craft bakeries still dominate the market in many mainland European countries. This allows genuine craftspeople to keep alive and indeed develop skills that have been passed on for thousands of years. Recent evidence indicates that humans processed and consumed wild cereal grains as far back as 23,000 years ago. Archaeologists have discovered simple stone mechanisms that were used for smashing and grinding various cereals to remove the inedible outer husks and to make the resulting grains into palatable and versatile food. As humans evolved, they mixed the resulting cracked and ground grains with water to create a variety of foods, from thin gruel to a stiffer porridge. By simply leaving the paste to dry out in the sun, a bread-like crust would be formed. This early bread was particularly successful when wild yeast from the air combined with the flour and water. The early Egyptians were curious about the bread rising and attempted to isolate the yeast, so that they could introduce it directly into their bread. Bakers experimented with leavened doughs and through these experiments Egyptians were the first to uncover the secret of yeast usage. Hence, the future of bread was assured. As travellers took bread making techniques and moved out from Egyptian lands, the art began spreading to all parts of Europe. A key civilisation was the Romans, who took their advanced bread techniques with them around Europe. The Romans preferred whiter bread, which was possible with the milling processes that they had refined. This led to white bread being perceived as the most valuable bread of them all, a preference that seems to have stuck with many people. The Romans also invented the first mechanical dough-mixer, powered by horses and donkeys. Both simple, yet elusive, the art of controlling the various ingredients and developing the skills required to turn grain and water into palatable bread gave status to individuals and societies for thousands of years. The use of barley and wheat led man to live in communities and made the trade of baker one of the oldest crafts in the world. Before the Industrial Revolution, millers used windmills and watermills, depending on their locations, to turn the machinery that would grind wheat to flour. The Industrial Revolution really moved the process of bread making forwards. The first commercially successful engine did not appear until 1712, but it wasnt until the invention of the Boulton and Watt steam engine in 1786 that the process was advanced and refined. The first mill in London using the steam engines was so large and efficient that in one year it could produce more flour than the rest of the mills in London put together. In conjunction with steam power, a Swiss engineer in 1874 invented a new type of mill. He designed rollers made of steel that operated one above the other.
It was called the reduction roller milling system, and these machines soon became accepted all over Europe. Since Egyptian times, yeast has been an essential part of bread making around the world, but yeast was not really understood properly until the 19th century. It was only with the invention of the microscope, followed by the pioneering scientific work of Louis Pasteur in the late 1860s, that yeast was identified as a living organism and agent responsible for dough leavening. Shortly following these discoveries, it became possible to isolate yeast in pure culture form. With this newfound knowledge the stage was set for commercial production of bakers yeast and this began around the turn of the 20th century. Since that time, bakers, scientists and yeast manufacturers have been working to find and produce pure strains of yeast that meet the exacting and specified needs of the baking industry. The basics of any bread dough are flour, water and of course yeast. As soon as these ingredients are stirred together, enzymes in the yeast and the flour cause large starch molecules to break down into simple sugars. The yeast metabolises these simple sugars and exudes a liquid that releases carbon dioxide into the doughs minute cells. As more and more tiny cells are filled, the dough rises and leavened bread is the result.
The Romans were responsible for one of todays favoured types of bread.
entailment
id_52
A History of Bread Although bread is not a staple food in all countries around the world, it is in many, and in others it is of great importance. As an example, the UK bakery market is worth £3.6 billion annually and is one of the largest markets in the food industry. Total volume at present is just under 4 billion units, the equivalent of almost 11 million loaves and packs sold every single day. There are three principal sectors that make up the UK baking industry. The larger baking companies produce around 80% of bread sold in the UK. In-store bakeries within supermarkets produce about 17% and high street retail craft bakers produce the rest. In contrast to the UK, craft bakeries still dominate the market in many mainland European countries. This allows genuine craftspeople to keep alive and indeed develop skills that have been passed on for thousands of years. Recent evidence indicates that humans processed and consumed wild cereal grains as far back as 23,000 years ago. Archaeologists have discovered simple stone mechanisms that were used for smashing and grinding various cereals to remove the inedible outer husks and to make the resulting grains into palatable and versatile food. As humans evolved, they mixed the resulting cracked and ground grains with water to create a variety of foods, from thin gruel to a stiffer porridge. By simply leaving the paste to dry out in the sun, a bread-like crust would be formed. This early bread was particularly successful when wild yeast from the air combined with the flour and water. The early Egyptians were curious about the bread rising and attempted to isolate the yeast, so that they could introduce it directly into their bread. Bakers experimented with leavened doughs and through these experiments Egyptians were the first to uncover the secret of yeast usage. Hence, the future of bread was assured. As travellers took bread making techniques and moved out from Egyptian lands, the art began spreading to all parts of Europe. A key civilisation was the Romans, who took their advanced bread techniques with them around Europe. The Romans preferred whiter bread, which was possible with the milling processes that they had refined. This led to white bread being perceived as the most valuable bread of them all, a preference that seems to have stuck with many people. The Romans also invented the first mechanical dough-mixer, powered by horses and donkeys. Both simple, yet elusive, the art of controlling the various ingredients and developing the skills required to turn grain and water into palatable bread gave status to individuals and societies for thousands of years. The use of barley and wheat led man to live in communities and made the trade of baker one of the oldest crafts in the world. Before the Industrial Revolution, millers used windmills and watermills, depending on their locations, to turn the machinery that would grind wheat to flour. The Industrial Revolution really moved the process of bread making forwards. The first commercially successful engine did not appear until 1712, but it wasnt until the invention of the Boulton and Watt steam engine in 1786 that the process was advanced and refined. The first mill in London using the steam engines was so large and efficient that in one year it could produce more flour than the rest of the mills in London put together. In conjunction with steam power, a Swiss engineer in 1874 invented a new type of mill. He designed rollers made of steel that operated one above the other.
It was called the reduction roller milling system, and these machines soon became accepted all over Europe. Since Egyptian times, yeast has been an essential part of bread making around the world, but yeast was not really understood properly until the 19th century. It was only with the invention of the microscope, followed by the pioneering scientific work of Louis Pasteur in the late 1860s, that yeast was identified as a living organism and agent responsible for dough leavening. Shortly following these discoveries, it became possible to isolate yeast in pure culture form. With this newfound knowledge the stage was set for commercial production of bakers yeast and this began around the turn of the 20th century. Since that time, bakers, scientists and yeast manufacturers have been working to find and produce pure strains of yeast that meet the exacting and specified needs of the baking industry. The basics of any bread dough are flour, water and of course yeast. As soon as these ingredients are stirred together, enzymes in the yeast and the flour cause large starch molecules to break down into simple sugars. The yeast metabolises these simple sugars and exudes a liquid that releases carbon dioxide into the doughs minute cells. As more and more tiny cells are filled, the dough rises and leavened bread is the result.
The first leavening effects were done accidentally.
entailment
id_53
A History of Bread Although bread is not a staple food in all countries around the world, it is in many, and in others it is of great importance. As an example, the UK bakery market is worth £3.6 billion annually and is one of the largest markets in the food industry. Total volume at present is just under 4 billion units, the equivalent of almost 11 million loaves and packs sold every single day. There are three principal sectors that make up the UK baking industry. The larger baking companies produce around 80% of bread sold in the UK. In-store bakeries within supermarkets produce about 17% and high street retail craft bakers produce the rest. In contrast to the UK, craft bakeries still dominate the market in many mainland European countries. This allows genuine craftspeople to keep alive and indeed develop skills that have been passed on for thousands of years. Recent evidence indicates that humans processed and consumed wild cereal grains as far back as 23,000 years ago. Archaeologists have discovered simple stone mechanisms that were used for smashing and grinding various cereals to remove the inedible outer husks and to make the resulting grains into palatable and versatile food. As humans evolved, they mixed the resulting cracked and ground grains with water to create a variety of foods, from thin gruel to a stiffer porridge. By simply leaving the paste to dry out in the sun, a bread-like crust would be formed. This early bread was particularly successful when wild yeast from the air combined with the flour and water. The early Egyptians were curious about the bread rising and attempted to isolate the yeast, so that they could introduce it directly into their bread. Bakers experimented with leavened doughs and through these experiments Egyptians were the first to uncover the secret of yeast usage. Hence, the future of bread was assured. As travellers took bread making techniques and moved out from Egyptian lands, the art began spreading to all parts of Europe. A key civilisation was the Romans, who took their advanced bread techniques with them around Europe. The Romans preferred whiter bread, which was possible with the milling processes that they had refined. This led to white bread being perceived as the most valuable bread of them all, a preference that seems to have stuck with many people. The Romans also invented the first mechanical dough-mixer, powered by horses and donkeys. Both simple, yet elusive, the art of controlling the various ingredients and developing the skills required to turn grain and water into palatable bread gave status to individuals and societies for thousands of years. The use of barley and wheat led man to live in communities and made the trade of baker one of the oldest crafts in the world. Before the Industrial Revolution, millers used windmills and watermills, depending on their locations, to turn the machinery that would grind wheat to flour. The Industrial Revolution really moved the process of bread making forwards. The first commercially successful engine did not appear until 1712, but it wasnt until the invention of the Boulton and Watt steam engine in 1786 that the process was advanced and refined. The first mill in London using the steam engines was so large and efficient that in one year it could produce more flour than the rest of the mills in London put together. In conjunction with steam power, a Swiss engineer in 1874 invented a new type of mill. He designed rollers made of steel that operated one above the other.
It was called the reduction roller milling system, and these machines soon became accepted all over Europe. Since Egyptian times, yeast has been an essential part of bread making around the world, but yeast was not really understood properly until the 19th century. It was only with the invention of the microscope, followed by the pioneering scientific work of Louis Pasteur in the late 1860s, that yeast was identified as a living organism and agent responsible for dough leavening. Shortly following these discoveries, it became possible to isolate yeast in pure culture form. With this newfound knowledge the stage was set for commercial production of bakers yeast and this began around the turn of the 20th century. Since that time, bakers, scientists and yeast manufacturers have been working to find and produce pure strains of yeast that meet the exacting and specified needs of the baking industry. The basics of any bread dough are flour, water and of course yeast. As soon as these ingredients are stirred together, enzymes in the yeast and the flour cause large starch molecules to break down into simple sugars. The yeast metabolises these simple sugars and exudes a liquid that releases carbon dioxide into the doughs minute cells. As more and more tiny cells are filled, the dough rises and leavened bread is the result.
Few mainland European countries today favour the craft style bread made by independent bakeries.
contradiction
id_54
A History of Fingerprinting To detectives, the answers lie at the end of our fingers. Fingerprinting offers an accurate and infallible means of personal identification. The ability to identify a person from a mere fingerprint is a powerful tool in the fight against crime. It is the most commonly used forensic evidence, often outperforming other methods of identification. These days, older methods of ink fingerprinting, which could take weeks, have given way to newer, faster techniques like fingerprint laser scanning, but the principles stay the same. No matter which way you collect fingerprint evidence, every single persons print is unique. So, what makes our fingerprints different from our neighbours? A good place to start is to understand what fingerprints are and how they are created. A fingerprint is the arrangement of skin ridges and furrows on the tips of the fingers. This ridged skin develops fully during foetal development, as the skin cells grow in the mothers womb. These ridges are arranged into patterns and remain the same throughout the course of a persons life. Other visible human characteristics, like weight and height, change over time whereas fingerprints do not. The reason why every fingerprint is unique is that when a babys genes combine with environmental influences, such as temperature, it affects the way the ridges on the skin grow. It makes the ridges develop at different rates, buckling and bending into patterns. As a result, no two people end up having the same fingerprints. Even identical twins possess dissimilar fingerprints. It is not easy to map the journey of how the unique quality of the fingerprint came to be discovered. The moment in history it happened is not entirely clear. However, the use of fingerprinting can be traced back to some ancient civilisations, such as Babylon and China, where thumbprints were pressed onto clay tablets to confirm business transactions. Whether people at this time actually realised the full extent of how fingerprints were important for identification purposes is another matter altogether. One cannot be sure if the act was seen as a means to confirm identity or a symbolic gesture to bind a contract, where giving your fingerprint was like giving your word. Despite this uncertainty, there are those who made a significant contribution towards the analysis of fingerprinting. History tells us that a 14th century Persian doctor made an early statement that no two fingerprints are alike. Later, in the 17th century, Italian physician Marcello Malpighi studied the distinguishing shapes of loops and spirals in fingerprints. In his honour, the medical world later named a layer of skin after him. It was, however, an employee for the East India Company, William Herschel, who came to see the true potential of fingerprinting. He took fingerprints from the local people as a form of signature for contracts, in order to avoid fraud. His fascination with fingerprints propelled him to study them for the next twenty years. He developed the theory that fingerprints were unique to an individual and did not change at all over a lifetime. In 1880 Henry Faulds suggested that fingerprints could be used to identify convicted criminals. He wrote to Charles Darwin for advice, and the idea was referred on to Darwins cousin, Sir Francis Galton. Galton eventually published an in-depth study of fingerprint science in 1892.
Although the fact that each person has a totally unique fingerprint pattern had been well documented and accepted for a long time, this knowledge was not exploited for criminal identification until the early 20th century. In the past branding, tattooing and maiming had been used to mark the criminal for what he was. In some countries, thieves would have their hands cut off. France branded criminals with the fleur-de-lis symbol. The Romans tattooed mercenary soldiers to stop them from becoming deserters. For many years police agencies in the Western world were reluctant to use fingerprinting, much preferring the popular method of the time, the Bertillon System, where dimensions of certain body parts were recorded to identify a criminal. The turning point was in 1903 when a prisoner by the name of Will West was admitted into Leavenworth Federal Penitentiary. Amazingly, Will had almost the same Bertillon measurements as another prisoner residing at the very same prison, whose name happened to be William West. It was only their fingerprints that could tell them apart. From that point on, fingerprinting became the standard for criminal identification. Fingerprinting was useful in identifying people with a history of crime and who were listed on a database. However, in situations where the perpetrator was not on the database and a crime had no witnesses, the system fell short. Fingerprint chemistry is a new technology that can work alongside traditional fingerprinting to find more clues than ever before. From organic compounds left behind on a print, a scientist can tell if the person is a child, an adult, a mature person or a smoker, and much more. It seems, after all these years, fingers continue to point the way.
The ridges and patterns that make up fingerprints develop before birth.
entailment
id_55
A History of Fingerprinting To detectives, the answers lie at the end of our fingers. Fingerprinting offers an accurate and infallible means of personal identification. The ability to identify a person from a mere fingerprint is a powerful tool in the fight against crime. It is the most commonly used forensic evidence, often outperforming other methods of identification. These days, older methods of ink fingerprinting, which could take weeks, have given way to newer, faster techniques like fingerprint laser scanning, but the principles stay the same. No matter which way you collect fingerprint evidence, every single persons print is unique. So, what makes our fingerprints different from our neighbours? A good place to start is to understand what fingerprints are and how they are created. A fingerprint is the arrangement of skin ridges and furrows on the tips of the fingers. This ridged skin develops fully during foetal development, as the skin cells grow in the mothers womb. These ridges are arranged into patterns and remain the same throughout the course of a persons life. Other visible human characteristics, like weight and height, change over time whereas fingerprints do not. The reason why every fingerprint is unique is that when a babys genes combine with environmental influences, such as temperature, it affects the way the ridges on the skin grow. It makes the ridges develop at different rates, buckling and bending into patterns. As a result, no two people end up having the same fingerprints. Even identical twins possess dissimilar fingerprints. It is not easy to map the journey of how the unique quality of the fingerprint came to be discovered. The moment in history it happened is not entirely clear. However, the use of fingerprinting can be traced back to some ancient civilisations, such as Babylon and China, where thumbprints were pressed onto clay tablets to confirm business transactions. Whether people at this time actually realised the full extent of how fingerprints were important for identification purposes is another matter altogether. One cannot be sure if the act was seen as a means to confirm identity or a symbolic gesture to bind a contract, where giving your fingerprint was like giving your word. Despite this uncertainty, there are those who made a significant contribution towards the analysis of fingerprinting. History tells us that a 14th century Persian doctor made an early statement that no two fingerprints are alike. Later, in the 17th century, Italian physician Marcello Malpighi studied the distinguishing shapes of loops and spirals in fingerprints. In his honour, the medical world later named a layer of skin after him. It was, however, an employee for the East India Company, William Herschel, who came to see the true potential of fingerprinting. He took fingerprints from the local people as a form of signature for contracts, in order to avoid fraud. His fascination with fingerprints propelled him to study them for the next twenty years. He developed the theory that fingerprints were unique to an individual and did not change at all over a lifetime. In 1880 Henry Faulds suggested that fingerprints could be used to identify convicted criminals. He wrote to Charles Darwin for advice, and the idea was referred on to Darwins cousin, Sir Francis Galton. Galton eventually published an in-depth study of fingerprint science in 1892.
Although the fact that each person has a totally unique fingerprint pattern had been well documented and accepted for a long time, this knowledge was not exploited for criminal identification until the early 20th century. In the past branding, tattooing and maiming had been used to mark the criminal for what he was. In some countries, thieves would have their hands cut off. France branded criminals with the fleur-de-lis symbol. The Romans tattooed mercenary soldiers to stop them from becoming deserters. For many years police agencies in the Western world were reluctant to use fingerprinting, much preferring the popular method of the time, the Bertillon System, where dimensions of certain body parts were recorded to identify a criminal. The turning point was in 1903 when a prisoner by the name of Will West was admitted into Leavenworth Federal Penitentiary. Amazingly, Will had almost the same Bertillon measurements as another prisoner residing at the very same prison, whose name happened to be William West. It was only their fingerprints that could tell them apart. From that point on, fingerprinting became the standard for criminal identification. Fingerprinting was useful in identifying people with a history of crime and who were listed on a database. However, in situations where the perpetrator was not on the database and a crime had no witnesses, the system fell short. Fingerprint chemistry is a new technology that can work alongside traditional fingerprinting to find more clues than ever before. From organic compounds left behind on a print, a scientist can tell if the person is a child, an adult, a mature person or a smoker, and much more. It seems, after all these years, fingers continue to point the way.
Fingerprinting is the only effective method for identifying criminals.
contradiction
id_56
A History of Fingerprinting To detectives, the answers lie at the end of our fingers. Fingerprinting offers an accurate and infallible means of personal identification. The ability to identify a person from a mere fingerprint is a powerful tool in the fight against crime. It is the most commonly used forensic evidence, often outperforming other methods of identification. These days, older methods of ink fingerprinting, which could take weeks, have given way to newer, faster techniques like fingerprint laser scanning, but the principles stay the same. No matter which way you collect fingerprint evidence, every single persons print is unique. So, what makes our fingerprints different from our neighbours? A good place to start is to understand what fingerprints are and how they are created. A fingerprint is the arrangement of skin ridges and furrows on the tips of the fingers. This ridged skin develops fully during foetal development, as the skin cells grow in the mothers womb. These ridges are arranged into patterns and remain the same throughout the course of a persons life. Other visible human characteristics, like weight and height, change over time whereas fingerprints do not. The reason why every fingerprint is unique is that when a babys genes combine with environmental influences, such as temperature, it affects the way the ridges on the skin grow. It makes the ridges develop at different rates, buckling and bending into patterns. As a result, no two people end up having the same fingerprints. Even identical twins possess dissimilar fingerprints. It is not easy to map the journey of how the unique quality of the fingerprint came to be discovered. The moment in history it happened is not entirely clear. However, the use of fingerprinting can be traced back to some ancient civilisations, such as Babylon and China, where thumbprints were pressed onto clay tablets to confirm business transactions. Whether people at this time actually realised the full extent of how fingerprints were important for identification purposes is another matter altogether. One cannot be sure if the act was seen as a means to confirm identity or a symbolic gesture to bind a contract, where giving your fingerprint was like giving your word. Despite this uncertainty, there are those who made a significant contribution towards the analysis of fingerprinting. History tells us that a 14th century Persian doctor made an early statement that no two fingerprints are alike. Later, in the 17th century, Italian physician Marcello Malpighi studied the distinguishing shapes of loops and spirals in fingerprints. In his honour, the medical world later named a layer of skin after him. It was, however, an employee for the East India Company, William Herschel, who came to see the true potential of fingerprinting. He took fingerprints from the local people as a form of signature for contracts, in order to avoid fraud. His fascination with fingerprints propelled him to study them for the next twenty years. He developed the theory that fingerprints were unique to an individual and did not change at all over a lifetime. In 1880 Henry Faulds suggested that fingerprints could be used to identify convicted criminals. He wrote to Charles Darwin for advice, and the idea was referred on to Darwins cousin, Sir Francis Galton. Galton eventually published an in-depth study of fingerprint science in 1892.
Although the fact that each person has a totally unique fingerprint pattern had been well documented and accepted for a long time, this knowledge was not exploited for criminal identification until the early 20th century. In the past branding, tattooing and maiming had been used to mark the criminal for what he was. In some countries, thieves would have their hands cut off. France branded criminals with the fleur-de-lis symbol. The Romans tattooed mercenary soldiers to stop them from becoming deserters. For many years police agencies in the Western world were reluctant to use fingerprinting, much preferring the popular method of the time, the Bertillon System, where dimensions of certain body parts were recorded to identify a criminal. The turning point was in 1903 when a prisoner by the name of Will West was admitted into Leavenworth Federal Penitentiary. Amazingly, Will had almost the same Bertillon measurements as another prisoner residing at the very same prison, whose name happened to be William West. It was only their fingerprints that could tell them apart. From that point on, fingerprinting became the standard for criminal identification. Fingerprinting was useful in identifying people with a history of crime and who were listed on a database. However, in situations where the perpetrator was not on the database and a crime had no witnesses, the system fell short. Fingerprint chemistry is a new technology that can work alongside traditional fingerprinting to find more clues than ever before. From organic compounds left behind on a print, a scientist can tell if the person is a child, an adult, a mature person or a smoker, and much more. It seems, after all these years, fingers continue to point the way.
Malpighi conducted his studies in Italy.
neutral
id_57
A History of Fingerprinting To detectives, the answers lie at the end of our fingers. Fingerprinting offers an accurate and infallible means of personal identification. The ability to identify a person from a mere fingerprint is a powerful tool in the fight against crime. It is the most commonly used forensic evidence, often outperforming other methods of identification. These days, older methods of ink fingerprinting, which could take weeks, have given way to newer, faster techniques like fingerprint laser scanning, but the principles stay the same. No matter which way you collect fingerprint evidence, every single persons print is unique. So, what makes our fingerprints different from our neighbours? A good place to start is to understand what fingerprints are and how they are created. A fingerprint is the arrangement of skin ridges and furrows on the tips of the fingers. This ridged skin develops fully during foetal development, as the skin cells grow in the mothers womb. These ridges are arranged into patterns and remain the same throughout the course of a persons life. Other visible human characteristics, like weight and height, change over time whereas fingerprints do not. The reason why every fingerprint is unique is that when a babys genes combine with environmental influences, such as temperature, it affects the way the ridges on the skin grow. It makes the ridges develop at different rates, buckling and bending into patterns. As a result, no two people end up having the same fingerprints. Even identical twins possess dissimilar fingerprints. It is not easy to map the journey of how the unique quality of the fingerprint came to be discovered. The moment in history it happened is not entirely dear. However, the use of fingerprinting can be traced back to some ancient civilisations, such as Babylon and China, where thumbprints were pressed onto clay tablets to confirm business transactions. Whether people at this time actually realised the full extent of how fingerprints were important for identification purposes is another matter altogether. One cannot be sure if the act was seen as a means to confirm identity or a symbolic gesture to bind a contract, where giving your fingerprint was like giving your word. Despite this uncertainty, there are those who made a significant contribution towards the analysis of fingerprinting. History tells us that a 14th century Persian doctor made an early statement that no two fingerprints are alike. Later, in the 17th century, Italian physician Marcello Malpighi studied the distinguishing shapes of loops and spirals in fingerprints. In his honour, the medical world later named a layer of skin after him. It was, however, an employee for the East India Company, William Herschel, who came to see the true potential of fingerprinting. He took fingerprints from the local people as a form of signature for contracts, in order to avoid fraud. His fascination with fingerprints propelled him to study them for the next twenty years. He developed the theory that fingerprints were unique to an individual and did not change at all over a lifetime. In 1880 Henry Faulds suggested that fingerprints could be used to identify convicted criminals. He wrote to Charles Darwin for advice, and the idea was referred on to Darwins cousin, Sir Francis Galton. Galton eventually published an in-depth study of fingerprint science in 1892. 
Although the fact that each person has a totally unique fingerprint pattern had been well documented and accepted for a long time, this knowledge was not exploited for criminal identification until the early 20th century. In the past branding, tattooing and maiming had been used to mark the criminal for what he was. In some countries, thieves would have their hands cut off. France branded criminals with the fleur-de-lis symbol. The Romans tattooed mercenary soldiers to stop them from becoming deserters. For many years police agencies in the Western world were reluctant to use fingerprinting, much preferring the popular method of the time, the Bertillon System, where dimensions of certain body parts were recorded to identify a criminal. The turning point was in 1903 when a prisoner by the name of Will West was admitted into Leavenworth Federal Penitentiary. Amazingly, Will had almost the same Bertillon measurements as another prisoner residing at the very same prison, whose name happened to be William West. It was only their fingerprints that could tell them apart. From that point on, fingerprinting became the standard for criminal identification. Fingerprinting was useful in identifying people with a history of crime and who were listed on a database. However, in situations where the perpetrator was not on the database and a crime had no witnesses, the system fell short. Fingerprint chemistry is a new technology that can work alongside traditional fingerprinting to find more clues than ever before. From organic compounds left behind on a print, a scientist can tell if the person is a child, an adult, a mature person or a smoker, and much more. It seems, after all these years, fingers continue to point the way.
Roman soldiers were tattooed to prevent them from committing violent crimes.
contradiction
id_58
A History of Fingerprinting To detectives, the answers lie at the end of our fingers. Fingerprinting offers an accurate and infallible means of personal identification. The ability to identify a person from a mere fingerprint is a powerful tool in the fight against crime. It is the most commonly used forensic evidence, often outperforming other methods of identification. These days, older methods of ink fingerprinting, which could take weeks, have given way to newer, faster techniques like fingerprint laser scanning, but the principles stay the same. No matter which way you collect fingerprint evidence, every single persons print is unique. So, what makes our fingerprints different from our neighbours? A good place to start is to understand what fingerprints are and how they are created. A fingerprint is the arrangement of skin ridges and furrows on the tips of the fingers. This ridged skin develops fully during foetal development, as the skin cells grow in the mothers womb. These ridges are arranged into patterns and remain the same throughout the course of a persons life. Other visible human characteristics, like weight and height, change over time whereas fingerprints do not. The reason why every fingerprint is unique is that when a babys genes combine with environmental influences, such as temperature, it affects the way the ridges on the skin grow. It makes the ridges develop at different rates, buckling and bending into patterns. As a result, no two people end up having the same fingerprints. Even identical twins possess dissimilar fingerprints. It is not easy to map the journey of how the unique quality of the fingerprint came to be discovered. The moment in history it happened is not entirely dear. However, the use of fingerprinting can be traced back to some ancient civilisations, such as Babylon and China, where thumbprints were pressed onto clay tablets to confirm business transactions. Whether people at this time actually realised the full extent of how fingerprints were important for identification purposes is another matter altogether. One cannot be sure if the act was seen as a means to confirm identity or a symbolic gesture to bind a contract, where giving your fingerprint was like giving your word. Despite this uncertainty, there are those who made a significant contribution towards the analysis of fingerprinting. History tells us that a 14th century Persian doctor made an early statement that no two fingerprints are alike. Later, in the 17th century, Italian physician Marcello Malpighi studied the distinguishing shapes of loops and spirals in fingerprints. In his honour, the medical world later named a layer of skin after him. It was, however, an employee for the East India Company, William Herschel, who came to see the true potential of fingerprinting. He took fingerprints from the local people as a form of signature for contracts, in order to avoid fraud. His fascination with fingerprints propelled him to study them for the next twenty years. He developed the theory that fingerprints were unique to an individual and did not change at all over a lifetime. In 1880 Henry Faulds suggested that fingerprints could be used to identify convicted criminals. He wrote to Charles Darwin for advice, and the idea was referred on to Darwins cousin, Sir Francis Galton. Galton eventually published an in-depth study of fingerprint science in 1892. 
Although the fact that each person has a totally unique fingerprint pattern had been well documented and accepted for a long time, this knowledge was not exploited for criminal identification until the early 20th century. In the past branding, tattooing and maiming had been used to mark the criminal for what he was. In some countries, thieves would have their hands cut off. France branded criminals with the fleur-de-lis symbol. The Romans tattooed mercenary soldiers to stop them from becoming deserters. For many years police agencies in the Western world were reluctant to use fingerprinting, much preferring the popular method of the time, the Bertillon System, where dimensions of certain body parts were recorded to identify a criminal. The turning point was in 1903 when a prisoner by the name of Will West was admitted into Leavenworth Federal Penitentiary. Amazingly, Will had almost the same Bertillon measurements as another prisoner residing at the very same prison, whose name happened to be William West. It was only their fingerprints that could tell them apart. From that point on, fingerprinting became the standard for criminal identification. Fingerprinting was useful in identifying people with a history of crime and who were listed on a database. However, in situations where the perpetrator was not on the database and a crime had no witnesses, the system fell short. Fingerprint chemistry is a new technology that can work alongside traditional fingerprinting to find more clues than ever before. From organic compounds left behind on a print, a scientist can tell if the person is a child, an adult, a mature person or a smoker, and much more. It seems, after all these years, fingers continue to point the way.
Fingerprint chemistry can identify if a fingerprint belongs to an elderly person.
entailment
id_59
A History of the Watch A The earliest dated evidence of a timepiece is a fragment of a Chinese sundial from circa 1500 BC, which suggests there were rudimentary attempts to keep time during this period. Later, wealthy Romans were known to carry around pocket-sized sundials, though these cannot be regarded as predecessors of the modern watch. It would take developments in measuring hours without the sun, such as water clocks, sand glasses, and candles uniformly burning away the hours, to begin to measure time in the increments understood today. All of these ways of tracking time were utilized in the East, particularly in China during the Middle Ages. However, despite its more advanced culture, it appears that China had less use for the kind of accurate timekeeping that came to rule the West, because of its unique understanding of the Earth's rhythm and its different relationship with nature. B The first mechanical clock probably emerged out of monasteries, developed by monks as an alarm mechanism to ring the bells according to the regular and regimented hours of their religious rhythms. Once the twenty-four equal-hour day was developed, the chiming of the bells gradually fell in line with the clock. Early clocks, both large tower and turret clocks and the smaller models that they were based on, were propelled by weight mechanisms. By the fifteenth century, however, the mainspring was developed, employing the stored power of a tightly coiled spring. This was soon followed by a device called the fusee, which evened out the force of the spring as it uncoiled. Smaller versions of this mechanism led to the invention of the watch. C Early watches were bulky and ornate and, like the early spring-powered clocks, kept time with only an hour hand, though still rather inaccurately due to errors from friction. These early watches were made in many places around the world, but the earliest manufacturing dominance in the watch industry belonged to the British. The British factory systems emerging out of the industrial revolution and the development of the railroad combined to give birth to a strong and profitable business. The small-scale manufacture of watches in the early 18th century was a dual system of production in which craftspeople in the metalworking industry put out product from their workshops to be acquired and assembled in factory systems. The strategy, however, proved to be short-lived in light of more integrated approaches to manufacturing. This, together with poor transportation and communication among participants in the British watch industry, led to them losing their market dominance. D The defining factor of the 20th-century technological evolution in watchmaking was precision. Watches have always evolved with respect to trends in fashion, but the mechanics of the standard spring-powered device itself had undergone few changes in 300 years, until the advent of electronics in the middle of the 20th century. Since precision in watchmaking was the driving force behind innovation, it is easy to understand how an accurate watch that could be made inexpensively would come to dominate the market. Gradually, improvements in battery technology, the miniaturisation of batteries, and additional components, combined with quartz technology and integrated circuit technology, came together to produce the most accurate timepieces ever assembled. E The Japanese correctly identified quartz analog as the future of watchmaking and were particularly adept at developing it.
Building upon early knowledge gained in part from American industries, they developed large vertically integrated factories for their watchmaking companies. These firms quickly controlled their protected domestic market and built solid foundations in manufacturing based in Hong Kong that helped them prosper until they dominated internationally. All the major watch producers utilized Hong Kong as a cheap source of labor for assembling products as well as for purchasing watch components, but the Japanese were the best at controlling their distribution channels. F Watches are not limited to mere timekeeping, and the measurement of seconds, minutes and hours is potentially only one function of a watch. Anything else has come to be called complications in watchmaking. As an example, perpetual calendars have been built into watches for more than two centuries. Such calendars have included everything from days and months to phases of the moon and adjustments for leap years. Modern technology, especially inexpensive batteries and microchips, allows for such minor complications in even cheaper watches. Meanwhile, time continues to be measured in an increasingly precise manner, and so the evolution of the personal timepiece seems destined to continue into eternity.
Friction was used in early watches to help with accuracy.
contradiction
id_60
A History of the Watch A The earliest dated evidence of a timepiece is a fragment of a Chinese sundial from circa 1500 BC. Which suggests there were rudimentary attempts to keep time during this period. Later wealth Romans were known to carry around pocket-sized sundials, though these cannot be regarded as predecessors of the modern watch. It would take developments in measuring hours without the sun such as water clocks, sand glasses, and candles uniformly burning away the hours to begin to measure time in the increments understood today. All of these ways for tracking time were utilized in the East, particularly in China during the Middle Ages. However, despite its more advanced culture, it appears that China had less use for the kind of accurate timekeeping that came to rule the West, because of their unique understanding of the Earths rhythm and their different relationship with nature. B The first mechanical clock probably emerged out of monasteries, developed by monks as alarm mechanism to ring the bells according to the regular and regimented hours of their religious rhythms. Once the twenty-four equal-hour day was developed the chiming of the bells gradually fell In line with the clock. Early clocks both large tower as well as turret clocks and the smaller models that they were based on, were propelled by weight mechanisms. By the fifteenth century, however, the mainspring was developed, employing the stored power of a tightly coiled spring. This was soon followed by a device called the fusee which equalled the momentum of a spring as it uncoiled. Smaller versions of this mechanism led to the invention of the watch. C Early watches were bulky and ornate, like the early spring-powered clocks, kept time with only an hour hand, though still rather inaccurately due to errors from friction. These early watches were made in many places around the world, but the earliest manufacturing dominance in the watch industry was by the British. The British factory systems emerging out of the industrial revolution and the development of the railroad combined to give birth to a strong and profitable business. The small scale manufacture of watches in the early 18th century was a dual system of production that combined craftspeople in the metalworking industry putting out product from their workshops to be acquired and assembled in factory systems. The strategy, however, proved to be short-lived in light of more integrated approaches to manufacturing. This, poor transportation and communication among participants in the British watch industry led to them losing their market dominance. D The defining factor of the 20th century technological evolution in watchmaking was precision. Watches have always evolved with respect to trends in fashion, but the mechanics of the standard spring-powered device itself had undergone few in changes in 300 years, until the advent of electronics in the middle of the 20th century. Since precision in watchmaking was the driving force behind innovation, it is easy to understand how an accurate watch that could be made inexpensively would come to dominate the market. Gradually, improvements in battery technology, the miniaturisation of batteries, additional components combined with quartz technology and integrated circuit technology combined to produce the most accurate timepieces ever assembled. E The Japanese correctly identified quartz analog as the future of watchmaking and were particularly adept at developing it. 
Building upon early knowledge gained in part from American industries, they developed large vertically integrated factories for their watchmaking companies. These firms quickly controlled their protected domestic market and build solid foundations in manufacturing based in Hong Kong that have helped them prosper until they dominated internationally. All the major watch producers utilized Hong Kong as a cheap source of labor for assembling products as well as purchasing components for watches, but the Japanese were the best at controlling their distribution channels. F Watches are not limited to mere time keeping and the measurement of seconds, minutes and hours are potentially only one function of a watch. Anything else has come to be called complications in watchmaking. As an example, perpetual calendars have been built into watches for more than two centuries. Such calendars have included everything from days and months to phases of the moon and adjustments for leap years. Modern technology, especially inexpensive batteries and microchips allow for such minor complications in even cheaper watches. Meanwhile, time continues to be measured in increasingly precise manner, and so the evolution of the personal timepiece seems destined to continue into eternity.
China in the Middle Ages did not share the West's obsession with precise time.
entailment
id_61
A History of the Watch A The earliest dated evidence of a timepiece is a fragment of a Chinese sundial from circa 1500 BC. Which suggests there were rudimentary attempts to keep time during this period. Later wealth Romans were known to carry around pocket-sized sundials, though these cannot be regarded as predecessors of the modern watch. It would take developments in measuring hours without the sun such as water clocks, sand glasses, and candles uniformly burning away the hours to begin to measure time in the increments understood today. All of these ways for tracking time were utilized in the East, particularly in China during the Middle Ages. However, despite its more advanced culture, it appears that China had less use for the kind of accurate timekeeping that came to rule the West, because of their unique understanding of the Earths rhythm and their different relationship with nature. B The first mechanical clock probably emerged out of monasteries, developed by monks as alarm mechanism to ring the bells according to the regular and regimented hours of their religious rhythms. Once the twenty-four equal-hour day was developed the chiming of the bells gradually fell In line with the clock. Early clocks both large tower as well as turret clocks and the smaller models that they were based on, were propelled by weight mechanisms. By the fifteenth century, however, the mainspring was developed, employing the stored power of a tightly coiled spring. This was soon followed by a device called the fusee which equalled the momentum of a spring as it uncoiled. Smaller versions of this mechanism led to the invention of the watch. C Early watches were bulky and ornate, like the early spring-powered clocks, kept time with only an hour hand, though still rather inaccurately due to errors from friction. These early watches were made in many places around the world, but the earliest manufacturing dominance in the watch industry was by the British. The British factory systems emerging out of the industrial revolution and the development of the railroad combined to give birth to a strong and profitable business. The small scale manufacture of watches in the early 18th century was a dual system of production that combined craftspeople in the metalworking industry putting out product from their workshops to be acquired and assembled in factory systems. The strategy, however, proved to be short-lived in light of more integrated approaches to manufacturing. This, poor transportation and communication among participants in the British watch industry led to them losing their market dominance. D The defining factor of the 20th century technological evolution in watchmaking was precision. Watches have always evolved with respect to trends in fashion, but the mechanics of the standard spring-powered device itself had undergone few in changes in 300 years, until the advent of electronics in the middle of the 20th century. Since precision in watchmaking was the driving force behind innovation, it is easy to understand how an accurate watch that could be made inexpensively would come to dominate the market. Gradually, improvements in battery technology, the miniaturisation of batteries, additional components combined with quartz technology and integrated circuit technology combined to produce the most accurate timepieces ever assembled. E The Japanese correctly identified quartz analog as the future of watchmaking and were particularly adept at developing it. 
Building upon early knowledge gained in part from American industries, they developed large vertically integrated factories for their watchmaking companies. These firms quickly controlled their protected domestic market and build solid foundations in manufacturing based in Hong Kong that have helped them prosper until they dominated internationally. All the major watch producers utilized Hong Kong as a cheap source of labor for assembling products as well as purchasing components for watches, but the Japanese were the best at controlling their distribution channels. F Watches are not limited to mere time keeping and the measurement of seconds, minutes and hours are potentially only one function of a watch. Anything else has come to be called complications in watchmaking. As an example, perpetual calendars have been built into watches for more than two centuries. Such calendars have included everything from days and months to phases of the moon and adjustments for leap years. Modern technology, especially inexpensive batteries and microchips allow for such minor complications in even cheaper watches. Meanwhile, time continues to be measured in increasingly precise manner, and so the evolution of the personal timepiece seems destined to continue into eternity.
The early British watch industry exported their product around the world.
neutral
id_62
A History of the Watch A The earliest dated evidence of a timepiece is a fragment of a Chinese sundial from circa 1500 BC. Which suggests there were rudimentary attempts to keep time during this period. Later wealth Romans were known to carry around pocket-sized sundials, though these cannot be regarded as predecessors of the modern watch. It would take developments in measuring hours without the sun such as water clocks, sand glasses, and candles uniformly burning away the hours to begin to measure time in the increments understood today. All of these ways for tracking time were utilized in the East, particularly in China during the Middle Ages. However, despite its more advanced culture, it appears that China had less use for the kind of accurate timekeeping that came to rule the West, because of their unique understanding of the Earths rhythm and their different relationship with nature. B The first mechanical clock probably emerged out of monasteries, developed by monks as alarm mechanism to ring the bells according to the regular and regimented hours of their religious rhythms. Once the twenty-four equal-hour day was developed the chiming of the bells gradually fell In line with the clock. Early clocks both large tower as well as turret clocks and the smaller models that they were based on, were propelled by weight mechanisms. By the fifteenth century, however, the mainspring was developed, employing the stored power of a tightly coiled spring. This was soon followed by a device called the fusee which equalled the momentum of a spring as it uncoiled. Smaller versions of this mechanism led to the invention of the watch. C Early watches were bulky and ornate, like the early spring-powered clocks, kept time with only an hour hand, though still rather inaccurately due to errors from friction. These early watches were made in many places around the world, but the earliest manufacturing dominance in the watch industry was by the British. The British factory systems emerging out of the industrial revolution and the development of the railroad combined to give birth to a strong and profitable business. The small scale manufacture of watches in the early 18th century was a dual system of production that combined craftspeople in the metalworking industry putting out product from their workshops to be acquired and assembled in factory systems. The strategy, however, proved to be short-lived in light of more integrated approaches to manufacturing. This, poor transportation and communication among participants in the British watch industry led to them losing their market dominance. D The defining factor of the 20th century technological evolution in watchmaking was precision. Watches have always evolved with respect to trends in fashion, but the mechanics of the standard spring-powered device itself had undergone few in changes in 300 years, until the advent of electronics in the middle of the 20th century. Since precision in watchmaking was the driving force behind innovation, it is easy to understand how an accurate watch that could be made inexpensively would come to dominate the market. Gradually, improvements in battery technology, the miniaturisation of batteries, additional components combined with quartz technology and integrated circuit technology combined to produce the most accurate timepieces ever assembled. E The Japanese correctly identified quartz analog as the future of watchmaking and were particularly adept at developing it. 
Building upon early knowledge gained in part from American industries, they developed large vertically integrated factories for their watchmaking companies. These firms quickly controlled their protected domestic market and build solid foundations in manufacturing based in Hong Kong that have helped them prosper until they dominated internationally. All the major watch producers utilized Hong Kong as a cheap source of labor for assembling products as well as purchasing components for watches, but the Japanese were the best at controlling their distribution channels. F Watches are not limited to mere time keeping and the measurement of seconds, minutes and hours are potentially only one function of a watch. Anything else has come to be called complications in watchmaking. As an example, perpetual calendars have been built into watches for more than two centuries. Such calendars have included everything from days and months to phases of the moon and adjustments for leap years. Modern technology, especially inexpensive batteries and microchips allow for such minor complications in even cheaper watches. Meanwhile, time continues to be measured in increasingly precise manner, and so the evolution of the personal timepiece seems destined to continue into eternity.
Religious worship times probably led to the development of the first mechanical clock.
entailment
id_63
A LIBRARY AT YOUR FINGERTIPS A few years ago, at the height of the dotcom boom, it was widely assumed that a publishing revolution, in which the printed word would be supplanted by the computer screen, was just around the corner. It wasnt: for many, there is still little to match the joy of cracking the spine of a good book and settling down for an hour or two of reading. A recent flurry of activity by big technology companies including Google, Amazon, Microsoft and Yahoo! suggests that the dream of bringing books online is still very much alive. The digitising of thousands of volumes of print is not without controversy. On Thursday, November 3, Google, the worlds most popular search engine, posted a first instalment of books on Google Print, an initiative first mooted a year ago. This collaborative effort between Google and several of the worlds leading research libraries aims to make many thousands of books available to be searched and read online free of charge. Although the books included so far are not covered by copyright, the plan has attracted the ire of publishers. Five large book firms are suing Google for violating copyright on material that it has scanned and, although out of print, is still protected by law. Google has said that it will only publish short extracts from material under copyright unless given express permission to publish more, but publishers are unconvinced. Ironically, many publishers are collaborating with Google on a separate venture, Google Print Publisher, which aims to give readers an online taste of books that are commercially available. The searchable collection of extracts and book information is intended to tempt readers to buy the complete books online or in print form. Not to be outdone, Amazon, the worlds largest online retailer, has unveiled plans for its own foray into the mass e-book market. The firm, which began ten years ago as an online book retailer, now sells a vast array of goods. No doubt piqued that Google, a relative newcomer, should impinge upon its central territory, Amazon revealed on Thursday that it would introduce two new services. Amazon Pages will allow customers to search for key terms in selected books and then buy and read online whatever part they wish, from individual pages to chapters or complete works. Amazon Upgrade will give customers online access to books they have already purchased as hard copies. Customers are likely to have to pay around five cents a page, with the bulk going to the publisher. Microsoft, too, has joined the online-book bandwagon. At the end of October, the software giant said it would spend around $200 million to digitise texts, starting with 150,000 that are in the public domain, to avoid legal problems. It will do so in collaboration with the Open Content Alliance, a consortium of libraries and universities. (Yahoo! has pledged to make 18,000 books available online in conjunction with the same organisation. ) On Thursday, coincidentally the same day as Google and Amazon announced their initiatives, Microsoft released details of a deal with the British Library, the countrys main reference library, to digitise some 25 million pages; these will be made available through MSN Book Search, which will be launched next year. These companies are hoping for a return to the levels of interest in e-books seen when Stephen King, a best-selling horror writer, published Riding the Bullet exclusively on the Internet in 2000. Half a million copies were downloaded in the first 48 hours after publication. 
This proved to be a high-water mark rather than a taste of things to come. While buyers were reluctant to sit in front of a computer screen to read the latest novels, dedicated e-book reading gadgets failed to catch on. Barnes and Noble, a leading American bookshop chain, began selling e-books with fanfare in 2000 but quietly pulled the plug in 2003 as interest faded. The market for e-books is growing again, though from a tiny base. According to the International Digital Publishing Forum, which collates figures from many of the world's top publishers, in the third quarter of 2004, worldwide sales were 25% higher than the year before. Unfortunately, this only amounted to a paltry $3.2 million split between 23 publishers in an industry that made sales worth over $100 billion that year. Both retailers and publishers reckon they will eventually be able to persuade consumers to do a lot more of their reading on the web. Some even hope they can become to online books what Apple's iTunes is to online music. There are crucial differences between downloading fiction and downloading funk. Online music was driven from the bottom up: illegal filesharing services became wildly popular, and legal firms later took over when the pirates were forced (by a wave of lawsuits) to retreat; the legal providers are confident that more and more consumers will pay small sums for music rather than remain beyond the law. The iPod music player and its like have proved a fashionable and popular new way to listen to songs. The book world has no equivalent. So the commercial prospects for sellers of online books do not yet look very bright. They may get a lift from some novel innovations. The ability to download mere parts of books could help, for instance: sections of manuals, textbooks or cookery books may tempt some customers; students may wish to download the relevant sections of course books; or readers may want a taste of a book that they subsequently buy in hard copy. The ability to download reading matter onto increasingly ubiquitous hand-held electronic devices and 3G phones may further encourage uptake. In Japan, the value of e-books (mainly manga comic books) delivered to mobile phones has jumped, though it will be worth only around ¥6 billion ($51 million) in 2005, according to estimates.
The ability to sample a book online before buying it might help sales.
entailment
id_64
A LIBRARY AT YOUR FINGERTIPS A few years ago, at the height of the dotcom boom, it was widely assumed that a publishing revolution, in which the printed word would be supplanted by the computer screen, was just around the corner. It wasnt: for many, there is still little to match the joy of cracking the spine of a good book and settling down for an hour or two of reading. A recent flurry of activity by big technology companies including Google, Amazon, Microsoft and Yahoo! suggests that the dream of bringing books online is still very much alive. The digitising of thousands of volumes of print is not without controversy. On Thursday, November 3, Google, the worlds most popular search engine, posted a first instalment of books on Google Print, an initiative first mooted a year ago. This collaborative effort between Google and several of the worlds leading research libraries aims to make many thousands of books available to be searched and read online free of charge. Although the books included so far are not covered by copyright, the plan has attracted the ire of publishers. Five large book firms are suing Google for violating copyright on material that it has scanned and, although out of print, is still protected by law. Google has said that it will only publish short extracts from material under copyright unless given express permission to publish more, but publishers are unconvinced. Ironically, many publishers are collaborating with Google on a separate venture, Google Print Publisher, which aims to give readers an online taste of books that are commercially available. The searchable collection of extracts and book information is intended to tempt readers to buy the complete books online or in print form. Not to be outdone, Amazon, the worlds largest online retailer, has unveiled plans for its own foray into the mass e-book market. The firm, which began ten years ago as an online book retailer, now sells a vast array of goods. No doubt piqued that Google, a relative newcomer, should impinge upon its central territory, Amazon revealed on Thursday that it would introduce two new services. Amazon Pages will allow customers to search for key terms in selected books and then buy and read online whatever part they wish, from individual pages to chapters or complete works. Amazon Upgrade will give customers online access to books they have already purchased as hard copies. Customers are likely to have to pay around five cents a page, with the bulk going to the publisher. Microsoft, too, has joined the online-book bandwagon. At the end of October, the software giant said it would spend around $200 million to digitise texts, starting with 150,000 that are in the public domain, to avoid legal problems. It will do so in collaboration with the Open Content Alliance, a consortium of libraries and universities. (Yahoo! has pledged to make 18,000 books available online in conjunction with the same organisation. ) On Thursday, coincidentally the same day as Google and Amazon announced their initiatives, Microsoft released details of a deal with the British Library, the countrys main reference library, to digitise some 25 million pages; these will be made available through MSN Book Search, which will be launched next year. These companies are hoping for a return to the levels of interest in e-books seen when Stephen King, a best-selling horror writer, published Riding the Bullet exclusively on the Internet in 2000. Half a million copies were downloaded in the first 48 hours after publication. 
This proved to be a high-water mark rather than a taste of things to come. While buyers were reluctant to sit in front of a computer screen to read the latest novels, dedicated e-book reading gadgets failed to catch on. Barnes and Noble, a leading American bookshop chain, began selling e-books with fanfare in 2000 but quietly pulled the plug in 2003 as interest faded. The market for e-books is growing again, though from a tiny base. According to the International Digital Publishing Forum, which collates figures from many of the worlds top publishers, in the third quarter of 2004, worldwide sales were 25% higher than the year before. Unfortunately, this only amounted to a paltry $3.2 million split between 23 publishers in an industry that made sales worth over $100 billion that year. Both retailers and publishers reckon they will eventually be able to persuade consumers to do a lot more of their reading on the web. Some even hope they can become to online books what Apples iTunes is to online music. There are crucial differences between downloading fiction and downloading funk. Online music was driven from the bottom up: illegal filesharing services became wildly popular, and legal firms later took over when the pirates were forced (by a wave of lawsuits) to retreat; the legal providers are confident that more and more consumers will pay small sums for music rather than remain beyond the law. The iPod music player and its like have proved a fashionable and popular new way to listen to songs. The book world has no equivalent. So the commercial prospects for sellers of online books do not yet look very bright. They may get a lift from some novel innovations. The ability to download mere parts of books could help, for instance: sections of manuals, textbooks or cookery books may tempt some customers; students may wish to download the relevant sections of course books; or readers may want a taste of a book that they subsequently buy in hard copy. The ability to download reading matter onto increasingly ubiquitous hand-held electronic devices and 3G phones may further encourage uptake. In Japan, the value of e-books (mainly manga comic books) delivered to mobile phones has jumped, though it will be worth only around 6 billion ($51 million) in 2005, according to estimates
Barnes and Noble published Riding the Bullet online.
neutral
id_65
A LIBRARY AT YOUR FINGERTIPS A few years ago, at the height of the dotcom boom, it was widely assumed that a publishing revolution, in which the printed word would be supplanted by the computer screen, was just around the corner. It wasnt: for many, there is still little to match the joy of cracking the spine of a good book and settling down for an hour or two of reading. A recent flurry of activity by big technology companies including Google, Amazon, Microsoft and Yahoo! suggests that the dream of bringing books online is still very much alive. The digitising of thousands of volumes of print is not without controversy. On Thursday, November 3, Google, the worlds most popular search engine, posted a first instalment of books on Google Print, an initiative first mooted a year ago. This collaborative effort between Google and several of the worlds leading research libraries aims to make many thousands of books available to be searched and read online free of charge. Although the books included so far are not covered by copyright, the plan has attracted the ire of publishers. Five large book firms are suing Google for violating copyright on material that it has scanned and, although out of print, is still protected by law. Google has said that it will only publish short extracts from material under copyright unless given express permission to publish more, but publishers are unconvinced. Ironically, many publishers are collaborating with Google on a separate venture, Google Print Publisher, which aims to give readers an online taste of books that are commercially available. The searchable collection of extracts and book information is intended to tempt readers to buy the complete books online or in print form. Not to be outdone, Amazon, the worlds largest online retailer, has unveiled plans for its own foray into the mass e-book market. The firm, which began ten years ago as an online book retailer, now sells a vast array of goods. No doubt piqued that Google, a relative newcomer, should impinge upon its central territory, Amazon revealed on Thursday that it would introduce two new services. Amazon Pages will allow customers to search for key terms in selected books and then buy and read online whatever part they wish, from individual pages to chapters or complete works. Amazon Upgrade will give customers online access to books they have already purchased as hard copies. Customers are likely to have to pay around five cents a page, with the bulk going to the publisher. Microsoft, too, has joined the online-book bandwagon. At the end of October, the software giant said it would spend around $200 million to digitise texts, starting with 150,000 that are in the public domain, to avoid legal problems. It will do so in collaboration with the Open Content Alliance, a consortium of libraries and universities. (Yahoo! has pledged to make 18,000 books available online in conjunction with the same organisation. ) On Thursday, coincidentally the same day as Google and Amazon announced their initiatives, Microsoft released details of a deal with the British Library, the countrys main reference library, to digitise some 25 million pages; these will be made available through MSN Book Search, which will be launched next year. These companies are hoping for a return to the levels of interest in e-books seen when Stephen King, a best-selling horror writer, published Riding the Bullet exclusively on the Internet in 2000. Half a million copies were downloaded in the first 48 hours after publication. 
This proved to be a high-water mark rather than a taste of things to come. While buyers were reluctant to sit in front of a computer screen to read the latest novels, dedicated e-book reading gadgets failed to catch on. Barnes and Noble, a leading American bookshop chain, began selling e-books with fanfare in 2000 but quietly pulled the plug in 2003 as interest faded. The market for e-books is growing again, though from a tiny base. According to the International Digital Publishing Forum, which collates figures from many of the worlds top publishers, in the third quarter of 2004, worldwide sales were 25% higher than the year before. Unfortunately, this only amounted to a paltry $3.2 million split between 23 publishers in an industry that made sales worth over $100 billion that year. Both retailers and publishers reckon they will eventually be able to persuade consumers to do a lot more of their reading on the web. Some even hope they can become to online books what Apples iTunes is to online music. There are crucial differences between downloading fiction and downloading funk. Online music was driven from the bottom up: illegal filesharing services became wildly popular, and legal firms later took over when the pirates were forced (by a wave of lawsuits) to retreat; the legal providers are confident that more and more consumers will pay small sums for music rather than remain beyond the law. The iPod music player and its like have proved a fashionable and popular new way to listen to songs. The book world has no equivalent. So the commercial prospects for sellers of online books do not yet look very bright. They may get a lift from some novel innovations. The ability to download mere parts of books could help, for instance: sections of manuals, textbooks or cookery books may tempt some customers; students may wish to download the relevant sections of course books; or readers may want a taste of a book that they subsequently buy in hard copy. The ability to download reading matter onto increasingly ubiquitous hand-held electronic devices and 3G phones may further encourage uptake. In Japan, the value of e-books (mainly manga comic books) delivered to mobile phones has jumped, though it will be worth only around 6 billion ($51 million) in 2005, according to estimates
Books that are out of print are not covered by copyright law.
contradiction
id_66
A LIBRARY AT YOUR FINGERTIPS A few years ago, at the height of the dotcom boom, it was widely assumed that a publishing revolution, in which the printed word would be supplanted by the computer screen, was just around the corner. It wasnt: for many, there is still little to match the joy of cracking the spine of a good book and settling down for an hour or two of reading. A recent flurry of activity by big technology companies including Google, Amazon, Microsoft and Yahoo! suggests that the dream of bringing books online is still very much alive. The digitising of thousands of volumes of print is not without controversy. On Thursday, November 3, Google, the worlds most popular search engine, posted a first instalment of books on Google Print, an initiative first mooted a year ago. This collaborative effort between Google and several of the worlds leading research libraries aims to make many thousands of books available to be searched and read online free of charge. Although the books included so far are not covered by copyright, the plan has attracted the ire of publishers. Five large book firms are suing Google for violating copyright on material that it has scanned and, although out of print, is still protected by law. Google has said that it will only publish short extracts from material under copyright unless given express permission to publish more, but publishers are unconvinced. Ironically, many publishers are collaborating with Google on a separate venture, Google Print Publisher, which aims to give readers an online taste of books that are commercially available. The searchable collection of extracts and book information is intended to tempt readers to buy the complete books online or in print form. Not to be outdone, Amazon, the worlds largest online retailer, has unveiled plans for its own foray into the mass e-book market. The firm, which began ten years ago as an online book retailer, now sells a vast array of goods. No doubt piqued that Google, a relative newcomer, should impinge upon its central territory, Amazon revealed on Thursday that it would introduce two new services. Amazon Pages will allow customers to search for key terms in selected books and then buy and read online whatever part they wish, from individual pages to chapters or complete works. Amazon Upgrade will give customers online access to books they have already purchased as hard copies. Customers are likely to have to pay around five cents a page, with the bulk going to the publisher. Microsoft, too, has joined the online-book bandwagon. At the end of October, the software giant said it would spend around $200 million to digitise texts, starting with 150,000 that are in the public domain, to avoid legal problems. It will do so in collaboration with the Open Content Alliance, a consortium of libraries and universities. (Yahoo! has pledged to make 18,000 books available online in conjunction with the same organisation. ) On Thursday, coincidentally the same day as Google and Amazon announced their initiatives, Microsoft released details of a deal with the British Library, the countrys main reference library, to digitise some 25 million pages; these will be made available through MSN Book Search, which will be launched next year. These companies are hoping for a return to the levels of interest in e-books seen when Stephen King, a best-selling horror writer, published Riding the Bullet exclusively on the Internet in 2000. Half a million copies were downloaded in the first 48 hours after publication. 
This proved to be a high-water mark rather than a taste of things to come. While buyers were reluctant to sit in front of a computer screen to read the latest novels, dedicated e-book reading gadgets failed to catch on. Barnes and Noble, a leading American bookshop chain, began selling e-books with fanfare in 2000 but quietly pulled the plug in 2003 as interest faded. The market for e-books is growing again, though from a tiny base. According to the International Digital Publishing Forum, which collates figures from many of the worlds top publishers, in the third quarter of 2004, worldwide sales were 25% higher than the year before. Unfortunately, this only amounted to a paltry $3.2 million split between 23 publishers in an industry that made sales worth over $100 billion that year. Both retailers and publishers reckon they will eventually be able to persuade consumers to do a lot more of their reading on the web. Some even hope they can become to online books what Apples iTunes is to online music. There are crucial differences between downloading fiction and downloading funk. Online music was driven from the bottom up: illegal filesharing services became wildly popular, and legal firms later took over when the pirates were forced (by a wave of lawsuits) to retreat; the legal providers are confident that more and more consumers will pay small sums for music rather than remain beyond the law. The iPod music player and its like have proved a fashionable and popular new way to listen to songs. The book world has no equivalent. So the commercial prospects for sellers of online books do not yet look very bright. They may get a lift from some novel innovations. The ability to download mere parts of books could help, for instance: sections of manuals, textbooks or cookery books may tempt some customers; students may wish to download the relevant sections of course books; or readers may want a taste of a book that they subsequently buy in hard copy. The ability to download reading matter onto increasingly ubiquitous hand-held electronic devices and 3G phones may further encourage uptake. In Japan, the value of e-books (mainly manga comic books) delivered to mobile phones has jumped, though it will be worth only around 6 billion ($51 million) in 2005, according to estimates
Microsoft signed a deal with the British Library on the same day as Google and Amazon made their announcements.
contradiction
id_67
A LIBRARY AT YOUR FINGERTIPS A few years ago, at the height of the dotcom boom, it was widely assumed that a publishing revolution, in which the printed word would be supplanted by the computer screen, was just around the corner. It wasnt: for many, there is still little to match the joy of cracking the spine of a good book and settling down for an hour or two of reading. A recent flurry of activity by big technology companies including Google, Amazon, Microsoft and Yahoo! suggests that the dream of bringing books online is still very much alive. The digitising of thousands of volumes of print is not without controversy. On Thursday, November 3, Google, the worlds most popular search engine, posted a first instalment of books on Google Print, an initiative first mooted a year ago. This collaborative effort between Google and several of the worlds leading research libraries aims to make many thousands of books available to be searched and read online free of charge. Although the books included so far are not covered by copyright, the plan has attracted the ire of publishers. Five large book firms are suing Google for violating copyright on material that it has scanned and, although out of print, is still protected by law. Google has said that it will only publish short extracts from material under copyright unless given express permission to publish more, but publishers are unconvinced. Ironically, many publishers are collaborating with Google on a separate venture, Google Print Publisher, which aims to give readers an online taste of books that are commercially available. The searchable collection of extracts and book information is intended to tempt readers to buy the complete books online or in print form. Not to be outdone, Amazon, the worlds largest online retailer, has unveiled plans for its own foray into the mass e-book market. The firm, which began ten years ago as an online book retailer, now sells a vast array of goods. No doubt piqued that Google, a relative newcomer, should impinge upon its central territory, Amazon revealed on Thursday that it would introduce two new services. Amazon Pages will allow customers to search for key terms in selected books and then buy and read online whatever part they wish, from individual pages to chapters or complete works. Amazon Upgrade will give customers online access to books they have already purchased as hard copies. Customers are likely to have to pay around five cents a page, with the bulk going to the publisher. Microsoft, too, has joined the online-book bandwagon. At the end of October, the software giant said it would spend around $200 million to digitise texts, starting with 150,000 that are in the public domain, to avoid legal problems. It will do so in collaboration with the Open Content Alliance, a consortium of libraries and universities. (Yahoo! has pledged to make 18,000 books available online in conjunction with the same organisation. ) On Thursday, coincidentally the same day as Google and Amazon announced their initiatives, Microsoft released details of a deal with the British Library, the countrys main reference library, to digitise some 25 million pages; these will be made available through MSN Book Search, which will be launched next year. These companies are hoping for a return to the levels of interest in e-books seen when Stephen King, a best-selling horror writer, published Riding the Bullet exclusively on the Internet in 2000. Half a million copies were downloaded in the first 48 hours after publication. 
This proved to be a high-water mark rather than a taste of things to come. Buyers were reluctant to sit in front of a computer screen to read the latest novels, and dedicated e-book reading gadgets failed to catch on. Barnes and Noble, a leading American bookshop chain, began selling e-books with fanfare in 2000 but quietly pulled the plug in 2003 as interest faded. The market for e-books is growing again, though from a tiny base. According to the International Digital Publishing Forum, which collates figures from many of the world's top publishers, in the third quarter of 2004, worldwide sales were 25% higher than the year before. Unfortunately, this only amounted to a paltry $3.2 million split between 23 publishers in an industry that made sales worth over $100 billion that year. Both retailers and publishers reckon they will eventually be able to persuade consumers to do a lot more of their reading on the web. Some even hope they can become to online books what Apple's iTunes is to online music. There are crucial differences between downloading fiction and downloading funk. Online music was driven from the bottom up: illegal filesharing services became wildly popular, and legal firms later took over when the pirates were forced (by a wave of lawsuits) to retreat; the legal providers are confident that more and more consumers will pay small sums for music rather than remain beyond the law. The iPod music player and its like have proved a fashionable and popular new way to listen to songs. The book world has no equivalent. So the commercial prospects for sellers of online books do not yet look very bright. They may get a lift from some novel innovations. The ability to download mere parts of books could help, for instance: sections of manuals, textbooks or cookery books may tempt some customers; students may wish to download the relevant sections of course books; or readers may want a taste of a book that they subsequently buy in hard copy. The ability to download reading matter onto increasingly ubiquitous hand-held electronic devices and 3G phones may further encourage uptake. In Japan, the value of e-books (mainly manga comic books) delivered to mobile phones has jumped, though it will be worth only around ¥6 billion ($51 million) in 2005, according to estimates.
Amazon began by selling books online.
entailment
id_68
A Neuroscientist Reveals How To Think Differently In the last decade a revolution has occurred in the way that scientists think about the brain. We now know that the decisions humans make can be traced to the firing patterns of neurons in specific parts of the brain. These discoveries have led to the field known as neuroeconomics, which studies the brains secrets to success in an economic environment that demands innovation and being able to do things differently from competitors. A brain that can do this is an iconoclastic one. Briefly, an iconoclast is a person who does something that others say cant be done. This definition implies that iconoclasts are different from other people, but more precisely, it is their brains that are different in three distinct ways: perception, fear response, and social intelligence. Each of these three functions utilizes a different circuit in the brain. Naysayers might suggest that the brain is irrelevant, that thinking in an original, even revolutionary, way is more a matter of personality than brain function. But the field of neuroeconomics was born out of the realization that the physical workings of the brain place limitations on the way we make decisions. By understanding these constraints, we begin to understand why some people march to a different drumbeat. The first thing to realize is that the brain suffers from limited resources. It has a fixed energy budget, about the same as a 40 watt light bulb, so it has evolved to work as efficiently as possible. This is where most people are impeded from being an iconoclast. For example, when confronted with information streaming from the eyes, the brain will interpret this information in the quickest way possible. Thus it will draw on both past experience and any other source of information, such as what other people say, to make sense of what it is seeing. This happens all the time. The brain takes shortcuts that work so well we are hardly ever aware of them. We think our perceptions of the world are real, but they are only biological and electrical rumblings. Perception is not simply a product of what your eyes or ears transmit to your brain. More than the physical reality of photons or sound waves, perception is a product of the brain. Perception is central to iconoclasm. Iconoclasts see things differently to other people. Their brains do not fall into efficiency pitfalls as much as the average persons brain. Iconoclasts, either because they were born that way or through learning, have found ways to work around the perceptual shortcuts that plague most people. Perception is not something that is hardwired into the brain. It is a learned process, which is both a curse and an opportunity for change. The brain faces the fundamental problem of interpreting physical stimuli from the senses. Everything the brain sees, hears, or touches has multiple interpretations. The one that is ultimately chosen is simply the brains best theory. In technical terms, these conjectures have their basis in the statistical likelihood of one interpretation over another and are heavily influenced by past experience and, importantly for potential iconoclasts, what other people say. The best way to see things differently to other people is to bombard the brain with things it has never encountered before. Novelty releases the perceptual process from the chains of past experience and forces the brain to make new judgments. Successful iconoclasts have an extraordinary willingness to be exposed to what is fresh and different. 
Observation of iconoclasts shows that they embrace novelty while most people avoid things that are different. The problem with novelty, however, is that it tends to trigger the brains fear system. Fear is a major impediment to thinking like an iconoclast and stops the average person in his tracks. There are many types of fear, but the two that inhibit iconoclastic thinking and people generally find difficult to deal with are fear of uncertainty and fear of public ridicule. These may seem like trivial phobias. But fear of public speaking, which everyone must do from time to time, afflicts one-third of the population. This makes it too common to be considered a mental disorder. It is simply a common variant of human nature, one which iconoclasts do not let inhibit their reactions. Finally, to be successful iconoclasts, individuals must sell their ideas to other people. This is where social intelligence comes in. Social intelligence is the ability to understand and manage people in a business setting. In the last decade there has been an explosion of knowledge about the social brain and how the brain works when groups coordinate decision making. Neuroscience has revealed which brain circuits are responsible for functions like understanding what other people think, empathy, fairness, and social identity. These brain regions play key roles in whether people convince others of their ideas. Perception is important in social cognition too. The perception of someones enthusiasm, or reputation, can make or break a deal. Understanding how perception becomes intertwined with social decision making shows why successful iconoclasts are so rare. Iconoclasts create new opportunities in every area from artistic expression to technology to business. They supply creativity and innovation not easily accomplished by committees. Rules arent important to them. Iconoclasts face alienation and failure, but can also be a major asset to any organization. It is crucial for success in any field to understand how the iconoclastic mind works.
Most people are too shy to try different things.
neutral
id_69
A Neuroscientist Reveals How To Think Differently In the last decade a revolution has occurred in the way that scientists think about the brain. We now know that the decisions humans make can be traced to the firing patterns of neurons in specific parts of the brain. These discoveries have led to the field known as neuroeconomics, which studies the brains secrets to success in an economic environment that demands innovation and being able to do things differently from competitors. A brain that can do this is an iconoclastic one. Briefly, an iconoclast is a person who does something that others say cant be done. This definition implies that iconoclasts are different from other people, but more precisely, it is their brains that are different in three distinct ways: perception, fear response, and social intelligence. Each of these three functions utilizes a different circuit in the brain. Naysayers might suggest that the brain is irrelevant, that thinking in an original, even revolutionary, way is more a matter of personality than brain function. But the field of neuroeconomics was born out of the realization that the physical workings of the brain place limitations on the way we make decisions. By understanding these constraints, we begin to understand why some people march to a different drumbeat. The first thing to realize is that the brain suffers from limited resources. It has a fixed energy budget, about the same as a 40 watt light bulb, so it has evolved to work as efficiently as possible. This is where most people are impeded from being an iconoclast. For example, when confronted with information streaming from the eyes, the brain will interpret this information in the quickest way possible. Thus it will draw on both past experience and any other source of information, such as what other people say, to make sense of what it is seeing. This happens all the time. The brain takes shortcuts that work so well we are hardly ever aware of them. We think our perceptions of the world are real, but they are only biological and electrical rumblings. Perception is not simply a product of what your eyes or ears transmit to your brain. More than the physical reality of photons or sound waves, perception is a product of the brain. Perception is central to iconoclasm. Iconoclasts see things differently to other people. Their brains do not fall into efficiency pitfalls as much as the average persons brain. Iconoclasts, either because they were born that way or through learning, have found ways to work around the perceptual shortcuts that plague most people. Perception is not something that is hardwired into the brain. It is a learned process, which is both a curse and an opportunity for change. The brain faces the fundamental problem of interpreting physical stimuli from the senses. Everything the brain sees, hears, or touches has multiple interpretations. The one that is ultimately chosen is simply the brains best theory. In technical terms, these conjectures have their basis in the statistical likelihood of one interpretation over another and are heavily influenced by past experience and, importantly for potential iconoclasts, what other people say. The best way to see things differently to other people is to bombard the brain with things it has never encountered before. Novelty releases the perceptual process from the chains of past experience and forces the brain to make new judgments. Successful iconoclasts have an extraordinary willingness to be exposed to what is fresh and different. 
Observation of iconoclasts shows that they embrace novelty while most people avoid things that are different. The problem with novelty, however, is that it tends to trigger the brains fear system. Fear is a major impediment to thinking like an iconoclast and stops the average person in his tracks. There are many types of fear, but the two that inhibit iconoclastic thinking and people generally find difficult to deal with are fear of uncertainty and fear of public ridicule. These may seem like trivial phobias. But fear of public speaking, which everyone must do from time to time, afflicts one-third of the population. This makes it too common to be considered a mental disorder. It is simply a common variant of human nature, one which iconoclasts do not let inhibit their reactions. Finally, to be successful iconoclasts, individuals must sell their ideas to other people. This is where social intelligence comes in. Social intelligence is the ability to understand and manage people in a business setting. In the last decade there has been an explosion of knowledge about the social brain and how the brain works when groups coordinate decision making. Neuroscience has revealed which brain circuits are responsible for functions like understanding what other people think, empathy, fairness, and social identity. These brain regions play key roles in whether people convince others of their ideas. Perception is important in social cognition too. The perception of someones enthusiasm, or reputation, can make or break a deal. Understanding how perception becomes intertwined with social decision making shows why successful iconoclasts are so rare. Iconoclasts create new opportunities in every area from artistic expression to technology to business. They supply creativity and innovation not easily accomplished by committees. Rules arent important to them. Iconoclasts face alienation and failure, but can also be a major asset to any organization. It is crucial for success in any field to understand how the iconoclastic mind works.
If you think in an iconoclastic way, you can easily overcome fear.
contradiction
id_70
A Neuroscientist Reveals How To Think Differently In the last decade a revolution has occurred in the way that scientists think about the brain. We now know that the decisions humans make can be traced to the firing patterns of neurons in specific parts of the brain. These discoveries have led to the field known as neuroeconomics, which studies the brains secrets to success in an economic environment that demands innovation and being able to do things differently from competitors. A brain that can do this is an iconoclastic one. Briefly, an iconoclast is a person who does something that others say cant be done. This definition implies that iconoclasts are different from other people, but more precisely, it is their brains that are different in three distinct ways: perception, fear response, and social intelligence. Each of these three functions utilizes a different circuit in the brain. Naysayers might suggest that the brain is irrelevant, that thinking in an original, even revolutionary, way is more a matter of personality than brain function. But the field of neuroeconomics was born out of the realization that the physical workings of the brain place limitations on the way we make decisions. By understanding these constraints, we begin to understand why some people march to a different drumbeat. The first thing to realize is that the brain suffers from limited resources. It has a fixed energy budget, about the same as a 40 watt light bulb, so it has evolved to work as efficiently as possible. This is where most people are impeded from being an iconoclast. For example, when confronted with information streaming from the eyes, the brain will interpret this information in the quickest way possible. Thus it will draw on both past experience and any other source of information, such as what other people say, to make sense of what it is seeing. This happens all the time. The brain takes shortcuts that work so well we are hardly ever aware of them. We think our perceptions of the world are real, but they are only biological and electrical rumblings. Perception is not simply a product of what your eyes or ears transmit to your brain. More than the physical reality of photons or sound waves, perception is a product of the brain. Perception is central to iconoclasm. Iconoclasts see things differently to other people. Their brains do not fall into efficiency pitfalls as much as the average persons brain. Iconoclasts, either because they were born that way or through learning, have found ways to work around the perceptual shortcuts that plague most people. Perception is not something that is hardwired into the brain. It is a learned process, which is both a curse and an opportunity for change. The brain faces the fundamental problem of interpreting physical stimuli from the senses. Everything the brain sees, hears, or touches has multiple interpretations. The one that is ultimately chosen is simply the brains best theory. In technical terms, these conjectures have their basis in the statistical likelihood of one interpretation over another and are heavily influenced by past experience and, importantly for potential iconoclasts, what other people say. The best way to see things differently to other people is to bombard the brain with things it has never encountered before. Novelty releases the perceptual process from the chains of past experience and forces the brain to make new judgments. Successful iconoclasts have an extraordinary willingness to be exposed to what is fresh and different. 
Observation of iconoclasts shows that they embrace novelty while most people avoid things that are different. The problem with novelty, however, is that it tends to trigger the brains fear system. Fear is a major impediment to thinking like an iconoclast and stops the average person in his tracks. There are many types of fear, but the two that inhibit iconoclastic thinking and people generally find difficult to deal with are fear of uncertainty and fear of public ridicule. These may seem like trivial phobias. But fear of public speaking, which everyone must do from time to time, afflicts one-third of the population. This makes it too common to be considered a mental disorder. It is simply a common variant of human nature, one which iconoclasts do not let inhibit their reactions. Finally, to be successful iconoclasts, individuals must sell their ideas to other people. This is where social intelligence comes in. Social intelligence is the ability to understand and manage people in a business setting. In the last decade there has been an explosion of knowledge about the social brain and how the brain works when groups coordinate decision making. Neuroscience has revealed which brain circuits are responsible for functions like understanding what other people think, empathy, fairness, and social identity. These brain regions play key roles in whether people convince others of their ideas. Perception is important in social cognition too. The perception of someones enthusiasm, or reputation, can make or break a deal. Understanding how perception becomes intertwined with social decision making shows why successful iconoclasts are so rare. Iconoclasts create new opportunities in every area from artistic expression to technology to business. They supply creativity and innovation not easily accomplished by committees. Rules arent important to them. Iconoclasts face alienation and failure, but can also be a major asset to any organization. It is crucial for success in any field to understand how the iconoclastic mind works.
When concern about embarrassment matters less, other fears become irrelevant.
neutral
id_71
A Neuroscientist Reveals How To Think Differently In the last decade a revolution has occurred in the way that scientists think about the brain. We now know that the decisions humans make can be traced to the firing patterns of neurons in specific parts of the brain. These discoveries have led to the field known as neuroeconomics, which studies the brains secrets to success in an economic environment that demands innovation and being able to do things differently from competitors. A brain that can do this is an iconoclastic one. Briefly, an iconoclast is a person who does something that others say cant be done. This definition implies that iconoclasts are different from other people, but more precisely, it is their brains that are different in three distinct ways: perception, fear response, and social intelligence. Each of these three functions utilizes a different circuit in the brain. Naysayers might suggest that the brain is irrelevant, that thinking in an original, even revolutionary, way is more a matter of personality than brain function. But the field of neuroeconomics was born out of the realization that the physical workings of the brain place limitations on the way we make decisions. By understanding these constraints, we begin to understand why some people march to a different drumbeat. The first thing to realize is that the brain suffers from limited resources. It has a fixed energy budget, about the same as a 40 watt light bulb, so it has evolved to work as efficiently as possible. This is where most people are impeded from being an iconoclast. For example, when confronted with information streaming from the eyes, the brain will interpret this information in the quickest way possible. Thus it will draw on both past experience and any other source of information, such as what other people say, to make sense of what it is seeing. This happens all the time. The brain takes shortcuts that work so well we are hardly ever aware of them. We think our perceptions of the world are real, but they are only biological and electrical rumblings. Perception is not simply a product of what your eyes or ears transmit to your brain. More than the physical reality of photons or sound waves, perception is a product of the brain. Perception is central to iconoclasm. Iconoclasts see things differently to other people. Their brains do not fall into efficiency pitfalls as much as the average persons brain. Iconoclasts, either because they were born that way or through learning, have found ways to work around the perceptual shortcuts that plague most people. Perception is not something that is hardwired into the brain. It is a learned process, which is both a curse and an opportunity for change. The brain faces the fundamental problem of interpreting physical stimuli from the senses. Everything the brain sees, hears, or touches has multiple interpretations. The one that is ultimately chosen is simply the brains best theory. In technical terms, these conjectures have their basis in the statistical likelihood of one interpretation over another and are heavily influenced by past experience and, importantly for potential iconoclasts, what other people say. The best way to see things differently to other people is to bombard the brain with things it has never encountered before. Novelty releases the perceptual process from the chains of past experience and forces the brain to make new judgments. Successful iconoclasts have an extraordinary willingness to be exposed to what is fresh and different. 
Observation of iconoclasts shows that they embrace novelty while most people avoid things that are different. The problem with novelty, however, is that it tends to trigger the brains fear system. Fear is a major impediment to thinking like an iconoclast and stops the average person in his tracks. There are many types of fear, but the two that inhibit iconoclastic thinking and people generally find difficult to deal with are fear of uncertainty and fear of public ridicule. These may seem like trivial phobias. But fear of public speaking, which everyone must do from time to time, afflicts one-third of the population. This makes it too common to be considered a mental disorder. It is simply a common variant of human nature, one which iconoclasts do not let inhibit their reactions. Finally, to be successful iconoclasts, individuals must sell their ideas to other people. This is where social intelligence comes in. Social intelligence is the ability to understand and manage people in a business setting. In the last decade there has been an explosion of knowledge about the social brain and how the brain works when groups coordinate decision making. Neuroscience has revealed which brain circuits are responsible for functions like understanding what other people think, empathy, fairness, and social identity. These brain regions play key roles in whether people convince others of their ideas. Perception is important in social cognition too. The perception of someones enthusiasm, or reputation, can make or break a deal. Understanding how perception becomes intertwined with social decision making shows why successful iconoclasts are so rare. Iconoclasts create new opportunities in every area from artistic expression to technology to business. They supply creativity and innovation not easily accomplished by committees. Rules arent important to them. Iconoclasts face alienation and failure, but can also be a major asset to any organization. It is crucial for success in any field to understand how the iconoclastic mind works.
Fear of public speaking is a psychological illness.
contradiction
id_72
A Neuroscientist Reveals How To Think Differently In the last decade a revolution has occurred in the way that scientists think about the brain. We now know that the decisions humans make can be traced to the firing patterns of neurons in specific parts of the brain. These discoveries have led to the field known as neuroeconomics, which studies the brains secrets to success in an economic environment that demands innovation and being able to do things differently from competitors. A brain that can do this is an iconoclastic one. Briefly, an iconoclast is a person who does something that others say cant be done. This definition implies that iconoclasts are different from other people, but more precisely, it is their brains that are different in three distinct ways: perception, fear response, and social intelligence. Each of these three functions utilizes a different circuit in the brain. Naysayers might suggest that the brain is irrelevant, that thinking in an original, even revolutionary, way is more a matter of personality than brain function. But the field of neuroeconomics was born out of the realization that the physical workings of the brain place limitations on the way we make decisions. By understanding these constraints, we begin to understand why some people march to a different drumbeat. The first thing to realize is that the brain suffers from limited resources. It has a fixed energy budget, about the same as a 40 watt light bulb, so it has evolved to work as efficiently as possible. This is where most people are impeded from being an iconoclast. For example, when confronted with information streaming from the eyes, the brain will interpret this information in the quickest way possible. Thus it will draw on both past experience and any other source of information, such as what other people say, to make sense of what it is seeing. This happens all the time. The brain takes shortcuts that work so well we are hardly ever aware of them. We think our perceptions of the world are real, but they are only biological and electrical rumblings. Perception is not simply a product of what your eyes or ears transmit to your brain. More than the physical reality of photons or sound waves, perception is a product of the brain. Perception is central to iconoclasm. Iconoclasts see things differently to other people. Their brains do not fall into efficiency pitfalls as much as the average persons brain. Iconoclasts, either because they were born that way or through learning, have found ways to work around the perceptual shortcuts that plague most people. Perception is not something that is hardwired into the brain. It is a learned process, which is both a curse and an opportunity for change. The brain faces the fundamental problem of interpreting physical stimuli from the senses. Everything the brain sees, hears, or touches has multiple interpretations. The one that is ultimately chosen is simply the brains best theory. In technical terms, these conjectures have their basis in the statistical likelihood of one interpretation over another and are heavily influenced by past experience and, importantly for potential iconoclasts, what other people say. The best way to see things differently to other people is to bombard the brain with things it has never encountered before. Novelty releases the perceptual process from the chains of past experience and forces the brain to make new judgments. Successful iconoclasts have an extraordinary willingness to be exposed to what is fresh and different. 
Observation of iconoclasts shows that they embrace novelty while most people avoid things that are different. The problem with novelty, however, is that it tends to trigger the brains fear system. Fear is a major impediment to thinking like an iconoclast and stops the average person in his tracks. There are many types of fear, but the two that inhibit iconoclastic thinking and people generally find difficult to deal with are fear of uncertainty and fear of public ridicule. These may seem like trivial phobias. But fear of public speaking, which everyone must do from time to time, afflicts one-third of the population. This makes it too common to be considered a mental disorder. It is simply a common variant of human nature, one which iconoclasts do not let inhibit their reactions. Finally, to be successful iconoclasts, individuals must sell their ideas to other people. This is where social intelligence comes in. Social intelligence is the ability to understand and manage people in a business setting. In the last decade there has been an explosion of knowledge about the social brain and how the brain works when groups coordinate decision making. Neuroscience has revealed which brain circuits are responsible for functions like understanding what other people think, empathy, fairness, and social identity. These brain regions play key roles in whether people convince others of their ideas. Perception is important in social cognition too. The perception of someones enthusiasm, or reputation, can make or break a deal. Understanding how perception becomes intertwined with social decision making shows why successful iconoclasts are so rare. Iconoclasts create new opportunities in every area from artistic expression to technology to business. They supply creativity and innovation not easily accomplished by committees. Rules arent important to them. Iconoclasts face alienation and failure, but can also be a major asset to any organization. It is crucial for success in any field to understand how the iconoclastic mind works.
Exposure to different events forces the brain to think differently.
entailment
id_73
A Neuroscientist Reveals How To Think Differently In the last decade a revolution has occurred in the way that scientists think about the brain. We now know that the decisions humans make can be traced to the firing patterns of neurons in specific parts of the brain. These discoveries have led to the field known as neuroeconomics, which studies the brains secrets to success in an economic environment that demands innovation and being able to do things differently from competitors. A brain that can do this is an iconoclastic one. Briefly, an iconoclast is a person who does something that others say cant be done. This definition implies that iconoclasts are different from other people, but more precisely, it is their brains that are different in three distinct ways: perception, fear response, and social intelligence. Each of these three functions utilizes a different circuit in the brain. Naysayers might suggest that the brain is irrelevant, that thinking in an original, even revolutionary, way is more a matter of personality than brain function. But the field of neuroeconomics was born out of the realization that the physical workings of the brain place limitations on the way we make decisions. By understanding these constraints, we begin to understand why some people march to a different drumbeat. The first thing to realize is that the brain suffers from limited resources. It has a fixed energy budget, about the same as a 40 watt light bulb, so it has evolved to work as efficiently as possible. This is where most people are impeded from being an iconoclast. For example, when confronted with information streaming from the eyes, the brain will interpret this information in the quickest way possible. Thus it will draw on both past experience and any other source of information, such as what other people say, to make sense of what it is seeing. This happens all the time. The brain takes shortcuts that work so well we are hardly ever aware of them. We think our perceptions of the world are real, but they are only biological and electrical rumblings. Perception is not simply a product of what your eyes or ears transmit to your brain. More than the physical reality of photons or sound waves, perception is a product of the brain. Perception is central to iconoclasm. Iconoclasts see things differently to other people. Their brains do not fall into efficiency pitfalls as much as the average persons brain. Iconoclasts, either because they were born that way or through learning, have found ways to work around the perceptual shortcuts that plague most people. Perception is not something that is hardwired into the brain. It is a learned process, which is both a curse and an opportunity for change. The brain faces the fundamental problem of interpreting physical stimuli from the senses. Everything the brain sees, hears, or touches has multiple interpretations. The one that is ultimately chosen is simply the brains best theory. In technical terms, these conjectures have their basis in the statistical likelihood of one interpretation over another and are heavily influenced by past experience and, importantly for potential iconoclasts, what other people say. The best way to see things differently to other people is to bombard the brain with things it has never encountered before. Novelty releases the perceptual process from the chains of past experience and forces the brain to make new judgments. Successful iconoclasts have an extraordinary willingness to be exposed to what is fresh and different. 
Observation of iconoclasts shows that they embrace novelty while most people avoid things that are different. The problem with novelty, however, is that it tends to trigger the brains fear system. Fear is a major impediment to thinking like an iconoclast and stops the average person in his tracks. There are many types of fear, but the two that inhibit iconoclastic thinking and people generally find difficult to deal with are fear of uncertainty and fear of public ridicule. These may seem like trivial phobias. But fear of public speaking, which everyone must do from time to time, afflicts one-third of the population. This makes it too common to be considered a mental disorder. It is simply a common variant of human nature, one which iconoclasts do not let inhibit their reactions. Finally, to be successful iconoclasts, individuals must sell their ideas to other people. This is where social intelligence comes in. Social intelligence is the ability to understand and manage people in a business setting. In the last decade there has been an explosion of knowledge about the social brain and how the brain works when groups coordinate decision making. Neuroscience has revealed which brain circuits are responsible for functions like understanding what other people think, empathy, fairness, and social identity. These brain regions play key roles in whether people convince others of their ideas. Perception is important in social cognition too. The perception of someones enthusiasm, or reputation, can make or break a deal. Understanding how perception becomes intertwined with social decision making shows why successful iconoclasts are so rare. Iconoclasts create new opportunities in every area from artistic expression to technology to business. They supply creativity and innovation not easily accomplished by committees. Rules arent important to them. Iconoclasts face alienation and failure, but can also be a major asset to any organization. It is crucial for success in any field to understand how the iconoclastic mind works.
Iconoclasts are unusually receptive to new experiences.
entailment
id_74
A New Menace from an Old Enemy Malaria is the worlds second most common disease causing over 500 million infections and one million deaths every year. Worryingly it is one of those diseases which is beginning to increase as it develops resistance to treatments. Even in the UK, where malaria has been effectively eradicated, more than 2,000 people are infected as they return from trips abroad and the numbers are rising. It seems as though malaria has been in existence for millions of years and a similar disease may have infected dinosaurs. Malaria-type fevers are recorded among the ancient Greeks by writers such as Herodotus who also records the first prophylactic measures: fishermen sleeping under their own nets. Treatments up until the nineteenth century were as varied as they were ineffective. Live spiders in butter, purging and bleeding, and sleeping with a copy of the Iliad under the patients head are all recorded. The use of the first genuinely effective remedy, an infusion from the bark of the cinchona tree, was recorded in 1636 but it was only in 1820 that quinine, the active ingredient from the cinchona bark was extracted and modern prevention became possible. For a long time the treatment was regarded with suspicion since it was associated with the Jesuits. Oliver Cromwell, the Protestant English leader who executed King Charles I, died of malaria as a result of his doctors refusing to administer a Catholic remedy! Despite the presence of quinine, malaria was still a major cause of illness and death throughout the nineteenth century. Hundreds of thousands were dying in southern Europe even at the beginning of the last century. Malaria was eradicated from Rome only in the 1930s when Mussolini drained the Pontine marshes. Despite the fact that malaria has been around for so long, surprisingly little is known about how to cure or prevent it. Mosquitoes, who are the carriers of the disease, are attracted to heat, moisture, lactic acid and carbon dioxide but how they sort through this cocktail to repeatedly select one individual for attention over another is not understood. It is known that the malaria parasite, or plasmodium falciparum to give it its Latin name, has a life cycle which must pass through the anopheles mosquito and human hosts in order to live. It can only have attained its present form after mankind mastered agriculture and lived in groups for this to happen. With two such different hosts, the life cycle of the parasite is remarkable. There is the sporozoite stage which lives in the mosquito. When a human is bitten by an infected anopheles mosquito the parasite is passed to the human through the mosquitos saliva. As few as six such parasites may be enough to pass on the infection provided the humans immune system fails to kill the parasites before they reach the liver. There they transform into merozoites and multiply hugely to, perhaps, about 60,000 after 10 days and then spread throughout the bloodstream. Within minutes of this occurring, they attack the red blood cells to feed on the iron-rich haemoglobin which is inside. This is when the patient begins to feel ill. Within hours they can eat as much as 125 grams of haemoglobin which causes anaemia, lethargy, vulnerability to infection, and oxygen deficiency to areas such as the brain. Oxygen is carried to all organs by haemoglobin in the blood. 
The lack of oxygen leads to the cells blocking capillaries in the brain and the effects are very much like that of a stroke with one important difference: the damage is reversible and patients can come out of a malarial coma with no brain damage. Merozoites now change into gametocytes which can be male or female and it is this phase, with random mixing of genes that results, that can lead to malaria developing resistance to treatments. These resistant gametocytes, can be passed back to the mosquito if the patient is bitten, and they turn into zygotes. These zygotes divide and produce sporozoites and the cycle can begin again. The fight against malaria often seems to focus on the work of medical researchers who try to produce solutions such as vaccines. But funding is low because, it is said, malaria is a third world condition and scarcely troubles the rich, industrialised countries. It is true that malaria is, at root, a disease of poverty. The richer countries have managed to eradicate malaria by extending agriculture and so having proper drainage so mosquitoes cannot breed, and by living in solid houses with glass windows so the mosquitoes cannot bite the human host. Campaigns in Hunan province in China, making use of pesticide impregnated netting around beds reduced infection rates from over 1 million per year to around 65,000. But the search for medical cures goes on. Some 15 years ago there were high hopes for DNA based vaccines which worked well in trials on mice. Some still believe that this is where the answer lies and shortly too. Other researchers are not so confident and expect a wait of at least another 15 years before any significant development.
Treatments in the nineteenth century were ineffective.
contradiction
id_75
A New Menace from an Old Enemy Malaria is the worlds second most common disease causing over 500 million infections and one million deaths every year. Worryingly it is one of those diseases which is beginning to increase as it develops resistance to treatments. Even in the UK, where malaria has been effectively eradicated, more than 2,000 people are infected as they return from trips abroad and the numbers are rising. It seems as though malaria has been in existence for millions of years and a similar disease may have infected dinosaurs. Malaria-type fevers are recorded among the ancient Greeks by writers such as Herodotus who also records the first prophylactic measures: fishermen sleeping under their own nets. Treatments up until the nineteenth century were as varied as they were ineffective. Live spiders in butter, purging and bleeding, and sleeping with a copy of the Iliad under the patients head are all recorded. The use of the first genuinely effective remedy, an infusion from the bark of the cinchona tree, was recorded in 1636 but it was only in 1820 that quinine, the active ingredient from the cinchona bark was extracted and modern prevention became possible. For a long time the treatment was regarded with suspicion since it was associated with the Jesuits. Oliver Cromwell, the Protestant English leader who executed King Charles I, died of malaria as a result of his doctors refusing to administer a Catholic remedy! Despite the presence of quinine, malaria was still a major cause of illness and death throughout the nineteenth century. Hundreds of thousands were dying in southern Europe even at the beginning of the last century. Malaria was eradicated from Rome only in the 1930s when Mussolini drained the Pontine marshes. Despite the fact that malaria has been around for so long, surprisingly little is known about how to cure or prevent it. Mosquitoes, who are the carriers of the disease, are attracted to heat, moisture, lactic acid and carbon dioxide but how they sort through this cocktail to repeatedly select one individual for attention over another is not understood. It is known that the malaria parasite, or plasmodium falciparum to give it its Latin name, has a life cycle which must pass through the anopheles mosquito and human hosts in order to live. It can only have attained its present form after mankind mastered agriculture and lived in groups for this to happen. With two such different hosts, the life cycle of the parasite is remarkable. There is the sporozoite stage which lives in the mosquito. When a human is bitten by an infected anopheles mosquito the parasite is passed to the human through the mosquitos saliva. As few as six such parasites may be enough to pass on the infection provided the humans immune system fails to kill the parasites before they reach the liver. There they transform into merozoites and multiply hugely to, perhaps, about 60,000 after 10 days and then spread throughout the bloodstream. Within minutes of this occurring, they attack the red blood cells to feed on the iron-rich haemoglobin which is inside. This is when the patient begins to feel ill. Within hours they can eat as much as 125 grams of haemoglobin which causes anaemia, lethargy, vulnerability to infection, and oxygen deficiency to areas such as the brain. Oxygen is carried to all organs by haemoglobin in the blood. 
The lack of oxygen leads to the cells blocking capillaries in the brain and the effects are very much like that of a stroke with one important difference: the damage is reversible and patients can come out of a malarial coma with no brain damage. Merozoites now change into gametocytes which can be male or female and it is this phase, with random mixing of genes that results, that can lead to malaria developing resistance to treatments. These resistant gametocytes, can be passed back to the mosquito if the patient is bitten, and they turn into zygotes. These zygotes divide and produce sporozoites and the cycle can begin again. The fight against malaria often seems to focus on the work of medical researchers who try to produce solutions such as vaccines. But funding is low because, it is said, malaria is a third world condition and scarcely troubles the rich, industrialised countries. It is true that malaria is, at root, a disease of poverty. The richer countries have managed to eradicate malaria by extending agriculture and so having proper drainage so mosquitoes cannot breed, and by living in solid houses with glass windows so the mosquitoes cannot bite the human host. Campaigns in Hunan province in China, making use of pesticide impregnated netting around beds reduced infection rates from over 1 million per year to around 65,000. But the search for medical cures goes on. Some 15 years ago there were high hopes for DNA based vaccines which worked well in trials on mice. Some still believe that this is where the answer lies and shortly too. Other researchers are not so confident and expect a wait of at least another 15 years before any significant development.
Iron is a form of nourishment for malarial merozoites.
entailment
id_76
A New Menace from an Old Enemy Malaria is the worlds second most common disease causing over 500 million infections and one million deaths every year. Worryingly it is one of those diseases which is beginning to increase as it develops resistance to treatments. Even in the UK, where malaria has been effectively eradicated, more than 2,000 people are infected as they return from trips abroad and the numbers are rising. It seems as though malaria has been in existence for millions of years and a similar disease may have infected dinosaurs. Malaria-type fevers are recorded among the ancient Greeks by writers such as Herodotus who also records the first prophylactic measures: fishermen sleeping under their own nets. Treatments up until the nineteenth century were as varied as they were ineffective. Live spiders in butter, purging and bleeding, and sleeping with a copy of the Iliad under the patients head are all recorded. The use of the first genuinely effective remedy, an infusion from the bark of the cinchona tree, was recorded in 1636 but it was only in 1820 that quinine, the active ingredient from the cinchona bark was extracted and modern prevention became possible. For a long time the treatment was regarded with suspicion since it was associated with the Jesuits. Oliver Cromwell, the Protestant English leader who executed King Charles I, died of malaria as a result of his doctors refusing to administer a Catholic remedy! Despite the presence of quinine, malaria was still a major cause of illness and death throughout the nineteenth century. Hundreds of thousands were dying in southern Europe even at the beginning of the last century. Malaria was eradicated from Rome only in the 1930s when Mussolini drained the Pontine marshes. Despite the fact that malaria has been around for so long, surprisingly little is known about how to cure or prevent it. Mosquitoes, who are the carriers of the disease, are attracted to heat, moisture, lactic acid and carbon dioxide but how they sort through this cocktail to repeatedly select one individual for attention over another is not understood. It is known that the malaria parasite, or plasmodium falciparum to give it its Latin name, has a life cycle which must pass through the anopheles mosquito and human hosts in order to live. It can only have attained its present form after mankind mastered agriculture and lived in groups for this to happen. With two such different hosts, the life cycle of the parasite is remarkable. There is the sporozoite stage which lives in the mosquito. When a human is bitten by an infected anopheles mosquito the parasite is passed to the human through the mosquitos saliva. As few as six such parasites may be enough to pass on the infection provided the humans immune system fails to kill the parasites before they reach the liver. There they transform into merozoites and multiply hugely to, perhaps, about 60,000 after 10 days and then spread throughout the bloodstream. Within minutes of this occurring, they attack the red blood cells to feed on the iron-rich haemoglobin which is inside. This is when the patient begins to feel ill. Within hours they can eat as much as 125 grams of haemoglobin which causes anaemia, lethargy, vulnerability to infection, and oxygen deficiency to areas such as the brain. Oxygen is carried to all organs by haemoglobin in the blood. 
The lack of oxygen leads to the cells blocking capillaries in the brain and the effects are very much like that of a stroke with one important difference: the damage is reversible and patients can come out of a malarial coma with no brain damage. Merozoites now change into gametocytes which can be male or female and it is this phase, with random mixing of genes that results, that can lead to malaria developing resistance to treatments. These resistant gametocytes, can be passed back to the mosquito if the patient is bitten, and they turn into zygotes. These zygotes divide and produce sporozoites and the cycle can begin again. The fight against malaria often seems to focus on the work of medical researchers who try to produce solutions such as vaccines. But funding is low because, it is said, malaria is a third world condition and scarcely troubles the rich, industrialised countries. It is true that malaria is, at root, a disease of poverty. The richer countries have managed to eradicate malaria by extending agriculture and so having proper drainage so mosquitoes cannot breed, and by living in solid houses with glass windows so the mosquitoes cannot bite the human host. Campaigns in Hunan province in China, making use of pesticide impregnated netting around beds reduced infection rates from over 1 million per year to around 65,000. But the search for medical cures goes on. Some 15 years ago there were high hopes for DNA based vaccines which worked well in trials on mice. Some still believe that this is where the answer lies and shortly too. Other researchers are not so confident and expect a wait of at least another 15 years before any significant development.
A severe attack of malaria can be similar to a stroke.
contradiction
id_77
A New Menace from an Old Enemy Malaria is the worlds second most common disease causing over 500 million infections and one million deaths every year. Worryingly it is one of those diseases which is beginning to increase as it develops resistance to treatments. Even in the UK, where malaria has been effectively eradicated, more than 2,000 people are infected as they return from trips abroad and the numbers are rising. It seems as though malaria has been in existence for millions of years and a similar disease may have infected dinosaurs. Malaria-type fevers are recorded among the ancient Greeks by writers such as Herodotus who also records the first prophylactic measures: fishermen sleeping under their own nets. Treatments up until the nineteenth century were as varied as they were ineffective. Live spiders in butter, purging and bleeding, and sleeping with a copy of the Iliad under the patients head are all recorded. The use of the first genuinely effective remedy, an infusion from the bark of the cinchona tree, was recorded in 1636 but it was only in 1820 that quinine, the active ingredient from the cinchona bark was extracted and modern prevention became possible. For a long time the treatment was regarded with suspicion since it was associated with the Jesuits. Oliver Cromwell, the Protestant English leader who executed King Charles I, died of malaria as a result of his doctors refusing to administer a Catholic remedy! Despite the presence of quinine, malaria was still a major cause of illness and death throughout the nineteenth century. Hundreds of thousands were dying in southern Europe even at the beginning of the last century. Malaria was eradicated from Rome only in the 1930s when Mussolini drained the Pontine marshes. Despite the fact that malaria has been around for so long, surprisingly little is known about how to cure or prevent it. Mosquitoes, who are the carriers of the disease, are attracted to heat, moisture, lactic acid and carbon dioxide but how they sort through this cocktail to repeatedly select one individual for attention over another is not understood. It is known that the malaria parasite, or plasmodium falciparum to give it its Latin name, has a life cycle which must pass through the anopheles mosquito and human hosts in order to live. It can only have attained its present form after mankind mastered agriculture and lived in groups for this to happen. With two such different hosts, the life cycle of the parasite is remarkable. There is the sporozoite stage which lives in the mosquito. When a human is bitten by an infected anopheles mosquito the parasite is passed to the human through the mosquitos saliva. As few as six such parasites may be enough to pass on the infection provided the humans immune system fails to kill the parasites before they reach the liver. There they transform into merozoites and multiply hugely to, perhaps, about 60,000 after 10 days and then spread throughout the bloodstream. Within minutes of this occurring, they attack the red blood cells to feed on the iron-rich haemoglobin which is inside. This is when the patient begins to feel ill. Within hours they can eat as much as 125 grams of haemoglobin which causes anaemia, lethargy, vulnerability to infection, and oxygen deficiency to areas such as the brain. Oxygen is carried to all organs by haemoglobin in the blood. 
The lack of oxygen leads to the cells blocking capillaries in the brain and the effects are very much like that of a stroke with one important difference: the damage is reversible and patients can come out of a malarial coma with no brain damage. Merozoites now change into gametocytes which can be male or female and it is this phase, with random mixing of genes that results, that can lead to malaria developing resistance to treatments. These resistant gametocytes, can be passed back to the mosquito if the patient is bitten, and they turn into zygotes. These zygotes divide and produce sporozoites and the cycle can begin again. The fight against malaria often seems to focus on the work of medical researchers who try to produce solutions such as vaccines. But funding is low because, it is said, malaria is a third world condition and scarcely troubles the rich, industrialised countries. It is true that malaria is, at root, a disease of poverty. The richer countries have managed to eradicate malaria by extending agriculture and so having proper drainage so mosquitoes cannot breed, and by living in solid houses with glass windows so the mosquitoes cannot bite the human host. Campaigns in Hunan province in China, making use of pesticide impregnated netting around beds reduced infection rates from over 1 million per year to around 65,000. But the search for medical cures goes on. Some 15 years ago there were high hopes for DNA based vaccines which worked well in trials on mice. Some still believe that this is where the answer lies and shortly too. Other researchers are not so confident and expect a wait of at least another 15 years before any significant development.
Research into malaria is not considered a priority by the West.
entailment
id_78
A New Menace from an Old Enemy Malaria is the world's second most common disease, causing over 500 million infections and one million deaths every year. Worryingly, it is one of those diseases which is beginning to increase as it develops resistance to treatments. Even in the UK, where malaria has been effectively eradicated, more than 2,000 people are infected as they return from trips abroad, and the numbers are rising. It seems as though malaria has been in existence for millions of years, and a similar disease may have infected dinosaurs. Malaria-type fevers are recorded among the ancient Greeks by writers such as Herodotus, who also records the first prophylactic measures: fishermen sleeping under their own nets. Treatments up until the nineteenth century were as varied as they were ineffective. Live spiders in butter, purging and bleeding, and sleeping with a copy of the Iliad under the patient's head are all recorded. The use of the first genuinely effective remedy, an infusion from the bark of the cinchona tree, was recorded in 1636, but it was only in 1820 that quinine, the active ingredient from the cinchona bark, was extracted and modern prevention became possible. For a long time the treatment was regarded with suspicion since it was associated with the Jesuits. Oliver Cromwell, the Protestant English leader who executed King Charles I, died of malaria as a result of his doctors refusing to administer a Catholic remedy! Despite the presence of quinine, malaria was still a major cause of illness and death throughout the nineteenth century. Hundreds of thousands were dying in southern Europe even at the beginning of the last century. Malaria was eradicated from Rome only in the 1930s, when Mussolini drained the Pontine marshes. Despite the fact that malaria has been around for so long, surprisingly little is known about how to cure or prevent it. Mosquitoes, which are the carriers of the disease, are attracted to heat, moisture, lactic acid and carbon dioxide, but how they sort through this cocktail to repeatedly select one individual for attention over another is not understood. It is known that the malaria parasite, or Plasmodium falciparum to give it its Latin name, has a life cycle which must pass through the Anopheles mosquito and human hosts in order to live. It can only have attained its present form after mankind mastered agriculture and began living in groups. With two such different hosts, the life cycle of the parasite is remarkable. There is the sporozoite stage, which lives in the mosquito. When a human is bitten by an infected Anopheles mosquito, the parasite is passed to the human through the mosquito's saliva. As few as six such parasites may be enough to pass on the infection, provided the human's immune system fails to kill the parasites before they reach the liver. There they transform into merozoites and multiply hugely, to perhaps 60,000 after 10 days, and then spread throughout the bloodstream. Within minutes of this occurring, they attack the red blood cells to feed on the iron-rich haemoglobin inside. This is when the patient begins to feel ill. Within hours they can eat as much as 125 grams of haemoglobin, which causes anaemia, lethargy, vulnerability to infection, and oxygen deficiency in areas such as the brain. Oxygen is carried to all organs by haemoglobin in the blood.
The lack of oxygen leads to the cells blocking capillaries in the brain, and the effects are very much like those of a stroke, with one important difference: the damage is reversible, and patients can come out of a malarial coma with no brain damage. Merozoites now change into gametocytes, which can be male or female, and it is this phase, with the random mixing of genes that results, that can lead to malaria developing resistance to treatments. These resistant gametocytes can be passed back to the mosquito if the patient is bitten, where they turn into zygotes. These zygotes divide and produce sporozoites, and the cycle can begin again. The fight against malaria often seems to focus on the work of medical researchers who try to produce solutions such as vaccines. But funding is low because, it is said, malaria is a Third World condition and scarcely troubles the rich, industrialised countries. It is true that malaria is, at root, a disease of poverty. The richer countries have managed to eradicate malaria by extending agriculture, and so providing proper drainage so that mosquitoes cannot breed, and by living in solid houses with glass windows so that mosquitoes cannot bite the human host. Campaigns in Hunan province in China, making use of pesticide-impregnated netting around beds, reduced infection rates from over 1 million per year to around 65,000. But the search for medical cures goes on. Some 15 years ago there were high hopes for DNA-based vaccines, which worked well in trials on mice. Some still believe that this is where the answer lies, and that it will come soon. Other researchers are not so confident and expect a wait of at least another 15 years before any significant development.
Technological solutions are likely to be more effective than low-tech solutions.
neutral
id_79
A New Menace from an Old Enemy Malaria is the world's second most common disease, causing over 500 million infections and one million deaths every year. Worryingly, it is one of those diseases which is beginning to increase as it develops resistance to treatments. Even in the UK, where malaria has been effectively eradicated, more than 2,000 people are infected as they return from trips abroad, and the numbers are rising. It seems as though malaria has been in existence for millions of years, and a similar disease may have infected dinosaurs. Malaria-type fevers are recorded among the ancient Greeks by writers such as Herodotus, who also records the first prophylactic measures: fishermen sleeping under their own nets. Treatments up until the nineteenth century were as varied as they were ineffective. Live spiders in butter, purging and bleeding, and sleeping with a copy of the Iliad under the patient's head are all recorded. The use of the first genuinely effective remedy, an infusion from the bark of the cinchona tree, was recorded in 1636, but it was only in 1820 that quinine, the active ingredient from the cinchona bark, was extracted and modern prevention became possible. For a long time the treatment was regarded with suspicion since it was associated with the Jesuits. Oliver Cromwell, the Protestant English leader who executed King Charles I, died of malaria as a result of his doctors refusing to administer a Catholic remedy! Despite the presence of quinine, malaria was still a major cause of illness and death throughout the nineteenth century. Hundreds of thousands were dying in southern Europe even at the beginning of the last century. Malaria was eradicated from Rome only in the 1930s, when Mussolini drained the Pontine marshes. Despite the fact that malaria has been around for so long, surprisingly little is known about how to cure or prevent it. Mosquitoes, which are the carriers of the disease, are attracted to heat, moisture, lactic acid and carbon dioxide, but how they sort through this cocktail to repeatedly select one individual for attention over another is not understood. It is known that the malaria parasite, or Plasmodium falciparum to give it its Latin name, has a life cycle which must pass through the Anopheles mosquito and human hosts in order to live. It can only have attained its present form after mankind mastered agriculture and began living in groups. With two such different hosts, the life cycle of the parasite is remarkable. There is the sporozoite stage, which lives in the mosquito. When a human is bitten by an infected Anopheles mosquito, the parasite is passed to the human through the mosquito's saliva. As few as six such parasites may be enough to pass on the infection, provided the human's immune system fails to kill the parasites before they reach the liver. There they transform into merozoites and multiply hugely, to perhaps 60,000 after 10 days, and then spread throughout the bloodstream. Within minutes of this occurring, they attack the red blood cells to feed on the iron-rich haemoglobin inside. This is when the patient begins to feel ill. Within hours they can eat as much as 125 grams of haemoglobin, which causes anaemia, lethargy, vulnerability to infection, and oxygen deficiency in areas such as the brain. Oxygen is carried to all organs by haemoglobin in the blood.
The lack of oxygen leads to the cells blocking capillaries in the brain, and the effects are very much like those of a stroke, with one important difference: the damage is reversible, and patients can come out of a malarial coma with no brain damage. Merozoites now change into gametocytes, which can be male or female, and it is this phase, with the random mixing of genes that results, that can lead to malaria developing resistance to treatments. These resistant gametocytes can be passed back to the mosquito if the patient is bitten, where they turn into zygotes. These zygotes divide and produce sporozoites, and the cycle can begin again. The fight against malaria often seems to focus on the work of medical researchers who try to produce solutions such as vaccines. But funding is low because, it is said, malaria is a Third World condition and scarcely troubles the rich, industrialised countries. It is true that malaria is, at root, a disease of poverty. The richer countries have managed to eradicate malaria by extending agriculture, and so providing proper drainage so that mosquitoes cannot breed, and by living in solid houses with glass windows so that mosquitoes cannot bite the human host. Campaigns in Hunan province in China, making use of pesticide-impregnated netting around beds, reduced infection rates from over 1 million per year to around 65,000. But the search for medical cures goes on. Some 15 years ago there were high hopes for DNA-based vaccines, which worked well in trials on mice. Some still believe that this is where the answer lies, and that it will come soon. Other researchers are not so confident and expect a wait of at least another 15 years before any significant development.
Malaria has been eradicated in the wealthier parts of the world.
entailment
id_80
A New Menace from an Old Enemy Malaria is the world's second most common disease, causing over 500 million infections and one million deaths every year. Worryingly, it is one of those diseases which is beginning to increase as it develops resistance to treatments. Even in the UK, where malaria has been effectively eradicated, more than 2,000 people are infected as they return from trips abroad, and the numbers are rising. It seems as though malaria has been in existence for millions of years, and a similar disease may have infected dinosaurs. Malaria-type fevers are recorded among the ancient Greeks by writers such as Herodotus, who also records the first prophylactic measures: fishermen sleeping under their own nets. Treatments up until the nineteenth century were as varied as they were ineffective. Live spiders in butter, purging and bleeding, and sleeping with a copy of the Iliad under the patient's head are all recorded. The use of the first genuinely effective remedy, an infusion from the bark of the cinchona tree, was recorded in 1636, but it was only in 1820 that quinine, the active ingredient from the cinchona bark, was extracted and modern prevention became possible. For a long time the treatment was regarded with suspicion since it was associated with the Jesuits. Oliver Cromwell, the Protestant English leader who executed King Charles I, died of malaria as a result of his doctors refusing to administer a Catholic remedy! Despite the presence of quinine, malaria was still a major cause of illness and death throughout the nineteenth century. Hundreds of thousands were dying in southern Europe even at the beginning of the last century. Malaria was eradicated from Rome only in the 1930s, when Mussolini drained the Pontine marshes. Despite the fact that malaria has been around for so long, surprisingly little is known about how to cure or prevent it. Mosquitoes, which are the carriers of the disease, are attracted to heat, moisture, lactic acid and carbon dioxide, but how they sort through this cocktail to repeatedly select one individual for attention over another is not understood. It is known that the malaria parasite, or Plasmodium falciparum to give it its Latin name, has a life cycle which must pass through the Anopheles mosquito and human hosts in order to live. It can only have attained its present form after mankind mastered agriculture and began living in groups. With two such different hosts, the life cycle of the parasite is remarkable. There is the sporozoite stage, which lives in the mosquito. When a human is bitten by an infected Anopheles mosquito, the parasite is passed to the human through the mosquito's saliva. As few as six such parasites may be enough to pass on the infection, provided the human's immune system fails to kill the parasites before they reach the liver. There they transform into merozoites and multiply hugely, to perhaps 60,000 after 10 days, and then spread throughout the bloodstream. Within minutes of this occurring, they attack the red blood cells to feed on the iron-rich haemoglobin inside. This is when the patient begins to feel ill. Within hours they can eat as much as 125 grams of haemoglobin, which causes anaemia, lethargy, vulnerability to infection, and oxygen deficiency in areas such as the brain. Oxygen is carried to all organs by haemoglobin in the blood.
The lack of oxygen leads to the cells blocking capillaries in the brain, and the effects are very much like those of a stroke, with one important difference: the damage is reversible, and patients can come out of a malarial coma with no brain damage. Merozoites now change into gametocytes, which can be male or female, and it is this phase, with the random mixing of genes that results, that can lead to malaria developing resistance to treatments. These resistant gametocytes can be passed back to the mosquito if the patient is bitten, where they turn into zygotes. These zygotes divide and produce sporozoites, and the cycle can begin again. The fight against malaria often seems to focus on the work of medical researchers who try to produce solutions such as vaccines. But funding is low because, it is said, malaria is a Third World condition and scarcely troubles the rich, industrialised countries. It is true that malaria is, at root, a disease of poverty. The richer countries have managed to eradicate malaria by extending agriculture, and so providing proper drainage so that mosquitoes cannot breed, and by living in solid houses with glass windows so that mosquitoes cannot bite the human host. Campaigns in Hunan province in China, making use of pesticide-impregnated netting around beds, reduced infection rates from over 1 million per year to around 65,000. But the search for medical cures goes on. Some 15 years ago there were high hopes for DNA-based vaccines, which worked well in trials on mice. Some still believe that this is where the answer lies, and that it will come soon. Other researchers are not so confident and expect a wait of at least another 15 years before any significant development.
Malaria started among the ancient Greeks.
contradiction
id_81
A New Menace from an Old Enemy Malaria is the world's second most common disease, causing over 500 million infections and one million deaths every year. Worryingly, it is one of those diseases which is beginning to increase as it develops resistance to treatments. Even in the UK, where malaria has been effectively eradicated, more than 2,000 people are infected as they return from trips abroad, and the numbers are rising. It seems as though malaria has been in existence for millions of years, and a similar disease may have infected dinosaurs. Malaria-type fevers are recorded among the ancient Greeks by writers such as Herodotus, who also records the first prophylactic measures: fishermen sleeping under their own nets. Treatments up until the nineteenth century were as varied as they were ineffective. Live spiders in butter, purging and bleeding, and sleeping with a copy of the Iliad under the patient's head are all recorded. The use of the first genuinely effective remedy, an infusion from the bark of the cinchona tree, was recorded in 1636, but it was only in 1820 that quinine, the active ingredient from the cinchona bark, was extracted and modern prevention became possible. For a long time the treatment was regarded with suspicion since it was associated with the Jesuits. Oliver Cromwell, the Protestant English leader who executed King Charles I, died of malaria as a result of his doctors refusing to administer a Catholic remedy! Despite the presence of quinine, malaria was still a major cause of illness and death throughout the nineteenth century. Hundreds of thousands were dying in southern Europe even at the beginning of the last century. Malaria was eradicated from Rome only in the 1930s, when Mussolini drained the Pontine marshes. Despite the fact that malaria has been around for so long, surprisingly little is known about how to cure or prevent it. Mosquitoes, which are the carriers of the disease, are attracted to heat, moisture, lactic acid and carbon dioxide, but how they sort through this cocktail to repeatedly select one individual for attention over another is not understood. It is known that the malaria parasite, or Plasmodium falciparum to give it its Latin name, has a life cycle which must pass through the Anopheles mosquito and human hosts in order to live. It can only have attained its present form after mankind mastered agriculture and began living in groups. With two such different hosts, the life cycle of the parasite is remarkable. There is the sporozoite stage, which lives in the mosquito. When a human is bitten by an infected Anopheles mosquito, the parasite is passed to the human through the mosquito's saliva. As few as six such parasites may be enough to pass on the infection, provided the human's immune system fails to kill the parasites before they reach the liver. There they transform into merozoites and multiply hugely, to perhaps 60,000 after 10 days, and then spread throughout the bloodstream. Within minutes of this occurring, they attack the red blood cells to feed on the iron-rich haemoglobin inside. This is when the patient begins to feel ill. Within hours they can eat as much as 125 grams of haemoglobin, which causes anaemia, lethargy, vulnerability to infection, and oxygen deficiency in areas such as the brain. Oxygen is carried to all organs by haemoglobin in the blood.
The lack of oxygen leads to the cells blocking capillaries in the brain, and the effects are very much like those of a stroke, with one important difference: the damage is reversible, and patients can come out of a malarial coma with no brain damage. Merozoites now change into gametocytes, which can be male or female, and it is this phase, with the random mixing of genes that results, that can lead to malaria developing resistance to treatments. These resistant gametocytes can be passed back to the mosquito if the patient is bitten, where they turn into zygotes. These zygotes divide and produce sporozoites, and the cycle can begin again. The fight against malaria often seems to focus on the work of medical researchers who try to produce solutions such as vaccines. But funding is low because, it is said, malaria is a Third World condition and scarcely troubles the rich, industrialised countries. It is true that malaria is, at root, a disease of poverty. The richer countries have managed to eradicate malaria by extending agriculture, and so providing proper drainage so that mosquitoes cannot breed, and by living in solid houses with glass windows so that mosquitoes cannot bite the human host. Campaigns in Hunan province in China, making use of pesticide-impregnated netting around beds, reduced infection rates from over 1 million per year to around 65,000. But the search for medical cures goes on. Some 15 years ago there were high hopes for DNA-based vaccines, which worked well in trials on mice. Some still believe that this is where the answer lies, and that it will come soon. Other researchers are not so confident and expect a wait of at least another 15 years before any significant development.
Mosquitoes are discerning in their choice of victims.
entailment
id_82
A Remarkable Beetle. Some of the most remarkable beetles are the dung beetles, which spend almost their whole lives eating and breeding in dung. More than 4,000 species of these remarkable creatures have evolved and adapted to the world's different climates and the dung of its many animals. Australia's native dung beetles are scrub and woodland dwellers, specialising in coarse marsupial droppings and avoiding the soft cattle dung in which bush flies and buffalo flies breed. In the early 1960s George Bornemissza, then a scientist at the Australian Government's premier research organisation, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), suggested that dung beetles should be introduced to Australia to control dung-breeding flies. Between 1968 and 1982, the CSIRO imported insects from about 50 different species of dung beetle, from Asia, Europe and Africa, aiming to match them to different climatic zones in Australia. Of the 26 species that are known to have become successfully integrated into the local environment, only one, an African species released in northern Australia, has reached its natural boundary. Introducing dung beetles into a pasture is a simple process: approximately 1,500 beetles are released, a handful at a time, into fresh cow pats in the cow pasture. The beetles immediately disappear beneath the pats, digging and tunnelling, and, if they successfully adapt to their new environment, soon become a permanent, self-sustaining part of the local ecology. In time they multiply, and within three or four years the benefits to the pasture are obvious. Dung beetles work from the inside of the pat, so they are sheltered from predators such as birds and foxes. Most species burrow into the soil and bury dung in tunnels directly underneath the pats, which are hollowed out from within. Some large species originating from France excavate tunnels to a depth of approximately 30 cm below the dung pat. These beetles make sausage-shaped brood chambers along the tunnels. The shallowest tunnels belong to a much smaller Spanish species that buries dung in chambers that hang like fruit from the branches of a pear tree. South African beetles dig narrow tunnels approximately 20 cm below the surface of the pat. Some surface-dwelling beetles, including a South African species, cut perfectly shaped balls from the pat, which are rolled away and attached to the bases of plants. For maximum dung burial in spring, summer and autumn, farmers require a variety of species with overlapping periods of activity. In the cooler environments of the state of Victoria, the large French species (2.5 cm long) is matched with smaller (half this size), temperate-climate Spanish species. The former are slow to recover from the winter cold and produce only one or two generations of offspring from late spring until autumn. The latter, which multiply rapidly in early spring, produce two to five generations annually. The South African ball-rolling species, being a subtropical beetle, prefers the climate of northern and coastal New South Wales, where it commonly works with the South African tunnelling species. In warmer climates, many species are active for longer periods of the year. Dung beetles were initially introduced in the late 1960s with a view to controlling buffalo flies by removing the dung within a day or two and so preventing the flies from breeding. However, other benefits have become evident. Once the beetle larvae have finished pupation, the residue is a first-rate source of fertiliser.
The tunnels abandoned by the beetles provide excellent aeration and water channels for root systems. In addition, when the new generation of beetles has left the nest, the abandoned burrows are an attractive habitat for soil-enriching earthworms. The digested dung in these burrows is an excellent food supply for the earthworms, which decompose it further to provide essential soil nutrients. If it were not for the dung beetle, chemical fertiliser and dung would be washed by rain into streams and rivers before it could be absorbed into the hard earth, polluting water courses and causing blooms of blue-green algae. Without the beetles to dispose of the dung, cow pats would litter pastures, making grass inedible to cattle and depriving the soil of sunlight. Australia's 30 million cattle each produce 10-12 cow pats a day. This amounts to 1.7 billion tonnes a year, enough to smother about 110,000 sq km of pasture, half the area of Victoria. Dung beetles have become an integral part of the successful management of dairy farms in Australia over the past few decades. A number of species are available from the CSIRO or through a small number of private breeders, most of whom were entomologists with the CSIRO's dung beetle unit who have taken their specialised knowledge of the insect and opened small businesses in direct competition with their former employer.
The dung beetles cause an immediate improvement to the quality of a cow pasture.
contradiction
id_83
A Remarkable Beetle. Some of the most remarkable beetles are the dung beetles, which spend almost their whole lives eating and breeding in dung. More than 4,000 species of these remarkable creatures have evolved and adapted to the world's different climates and the dung of its many animals. Australia's native dung beetles are scrub and woodland dwellers, specialising in coarse marsupial droppings and avoiding the soft cattle dung in which bush flies and buffalo flies breed. In the early 1960s George Bornemissza, then a scientist at the Australian Government's premier research organisation, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), suggested that dung beetles should be introduced to Australia to control dung-breeding flies. Between 1968 and 1982, the CSIRO imported insects from about 50 different species of dung beetle, from Asia, Europe and Africa, aiming to match them to different climatic zones in Australia. Of the 26 species that are known to have become successfully integrated into the local environment, only one, an African species released in northern Australia, has reached its natural boundary. Introducing dung beetles into a pasture is a simple process: approximately 1,500 beetles are released, a handful at a time, into fresh cow pats in the cow pasture. The beetles immediately disappear beneath the pats, digging and tunnelling, and, if they successfully adapt to their new environment, soon become a permanent, self-sustaining part of the local ecology. In time they multiply, and within three or four years the benefits to the pasture are obvious. Dung beetles work from the inside of the pat, so they are sheltered from predators such as birds and foxes. Most species burrow into the soil and bury dung in tunnels directly underneath the pats, which are hollowed out from within. Some large species originating from France excavate tunnels to a depth of approximately 30 cm below the dung pat. These beetles make sausage-shaped brood chambers along the tunnels. The shallowest tunnels belong to a much smaller Spanish species that buries dung in chambers that hang like fruit from the branches of a pear tree. South African beetles dig narrow tunnels approximately 20 cm below the surface of the pat. Some surface-dwelling beetles, including a South African species, cut perfectly shaped balls from the pat, which are rolled away and attached to the bases of plants. For maximum dung burial in spring, summer and autumn, farmers require a variety of species with overlapping periods of activity. In the cooler environments of the state of Victoria, the large French species (2.5 cm long) is matched with smaller (half this size), temperate-climate Spanish species. The former are slow to recover from the winter cold and produce only one or two generations of offspring from late spring until autumn. The latter, which multiply rapidly in early spring, produce two to five generations annually. The South African ball-rolling species, being a subtropical beetle, prefers the climate of northern and coastal New South Wales, where it commonly works with the South African tunnelling species. In warmer climates, many species are active for longer periods of the year. Dung beetles were initially introduced in the late 1960s with a view to controlling buffalo flies by removing the dung within a day or two and so preventing the flies from breeding. However, other benefits have become evident. Once the beetle larvae have finished pupation, the residue is a first-rate source of fertiliser.
The tunnels abandoned by the beetles provide excellent aeration and water channels for root systems. In addition, when the new generation of beetles has left the nest, the abandoned burrows are an attractive habitat for soil-enriching earthworms. The digested dung in these burrows is an excellent food supply for the earthworms, which decompose it further to provide essential soil nutrients. If it were not for the dung beetle, chemical fertiliser and dung would be washed by rain into streams and rivers before it could be absorbed into the hard earth, polluting water courses and causing blooms of blue-green algae. Without the beetles to dispose of the dung, cow pats would litter pastures, making grass inedible to cattle and depriving the soil of sunlight. Australia's 30 million cattle each produce 10-12 cow pats a day. This amounts to 1.7 billion tonnes a year, enough to smother about 110,000 sq km of pasture, half the area of Victoria. Dung beetles have become an integral part of the successful management of dairy farms in Australia over the past few decades. A number of species are available from the CSIRO or through a small number of private breeders, most of whom were entomologists with the CSIRO's dung beetle unit who have taken their specialised knowledge of the insect and opened small businesses in direct competition with their former employer.
At least twenty-six of the introduced species have become established in Australia.
entailment
id_84
A Remarkable Beetle. Some of the most remarkable beetles are the dung beetles, which spend almost their whole lives eating and breeding in dung. More than 4,000 species of these remarkable creatures have evolved and adapted to the world's different climates and the dung of its many animals. Australia's native dung beetles are scrub and woodland dwellers, specialising in coarse marsupial droppings and avoiding the soft cattle dung in which bush flies and buffalo flies breed. In the early 1960s George Bornemissza, then a scientist at the Australian Government's premier research organisation, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), suggested that dung beetles should be introduced to Australia to control dung-breeding flies. Between 1968 and 1982, the CSIRO imported insects from about 50 different species of dung beetle, from Asia, Europe and Africa, aiming to match them to different climatic zones in Australia. Of the 26 species that are known to have become successfully integrated into the local environment, only one, an African species released in northern Australia, has reached its natural boundary. Introducing dung beetles into a pasture is a simple process: approximately 1,500 beetles are released, a handful at a time, into fresh cow pats in the cow pasture. The beetles immediately disappear beneath the pats, digging and tunnelling, and, if they successfully adapt to their new environment, soon become a permanent, self-sustaining part of the local ecology. In time they multiply, and within three or four years the benefits to the pasture are obvious. Dung beetles work from the inside of the pat, so they are sheltered from predators such as birds and foxes. Most species burrow into the soil and bury dung in tunnels directly underneath the pats, which are hollowed out from within. Some large species originating from France excavate tunnels to a depth of approximately 30 cm below the dung pat. These beetles make sausage-shaped brood chambers along the tunnels. The shallowest tunnels belong to a much smaller Spanish species that buries dung in chambers that hang like fruit from the branches of a pear tree. South African beetles dig narrow tunnels approximately 20 cm below the surface of the pat. Some surface-dwelling beetles, including a South African species, cut perfectly shaped balls from the pat, which are rolled away and attached to the bases of plants. For maximum dung burial in spring, summer and autumn, farmers require a variety of species with overlapping periods of activity. In the cooler environments of the state of Victoria, the large French species (2.5 cm long) is matched with smaller (half this size), temperate-climate Spanish species. The former are slow to recover from the winter cold and produce only one or two generations of offspring from late spring until autumn. The latter, which multiply rapidly in early spring, produce two to five generations annually. The South African ball-rolling species, being a subtropical beetle, prefers the climate of northern and coastal New South Wales, where it commonly works with the South African tunnelling species. In warmer climates, many species are active for longer periods of the year. Dung beetles were initially introduced in the late 1960s with a view to controlling buffalo flies by removing the dung within a day or two and so preventing the flies from breeding. However, other benefits have become evident. Once the beetle larvae have finished pupation, the residue is a first-rate source of fertiliser.
The tunnels abandoned by the beetles provide excellent aeration and water channels for root systems. In addition, when the new generation of beetles has left the nest, the abandoned burrows are an attractive habitat for soil-enriching earthworms. The digested dung in these burrows is an excellent food supply for the earthworms, which decompose it further to provide essential soil nutrients. If it were not for the dung beetle, chemical fertiliser and dung would be washed by rain into streams and rivers before it could be absorbed into the hard earth, polluting water courses and causing blooms of blue-green algae. Without the beetles to dispose of the dung, cow pats would litter pastures, making grass inedible to cattle and depriving the soil of sunlight. Australia's 30 million cattle each produce 10-12 cow pats a day. This amounts to 1.7 billion tonnes a year, enough to smother about 110,000 sq km of pasture, half the area of Victoria. Dung beetles have become an integral part of the successful management of dairy farms in Australia over the past few decades. A number of species are available from the CSIRO or through a small number of private breeders, most of whom were entomologists with the CSIRO's dung beetle unit who have taken their specialised knowledge of the insect and opened small businesses in direct competition with their former employer.
Dung beetles were brought to Australia by the CSIRO over a fourteen-year period.
entailment
id_85
A Remarkable Beetle. Some of the most remarkable beetles are the dung beetles, which spend almost their whole lives eating and breeding in dung. More than 4,000 species of these remarkable creatures have evolved and adapted to the world's different climates and the dung of its many animals. Australia's native dung beetles are scrub and woodland dwellers, specialising in coarse marsupial droppings and avoiding the soft cattle dung in which bush flies and buffalo flies breed. In the early 1960s George Bornemissza, then a scientist at the Australian Government's premier research organisation, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), suggested that dung beetles should be introduced to Australia to control dung-breeding flies. Between 1968 and 1982, the CSIRO imported insects from about 50 different species of dung beetle, from Asia, Europe and Africa, aiming to match them to different climatic zones in Australia. Of the 26 species that are known to have become successfully integrated into the local environment, only one, an African species released in northern Australia, has reached its natural boundary. Introducing dung beetles into a pasture is a simple process: approximately 1,500 beetles are released, a handful at a time, into fresh cow pats in the cow pasture. The beetles immediately disappear beneath the pats, digging and tunnelling, and, if they successfully adapt to their new environment, soon become a permanent, self-sustaining part of the local ecology. In time they multiply, and within three or four years the benefits to the pasture are obvious. Dung beetles work from the inside of the pat, so they are sheltered from predators such as birds and foxes. Most species burrow into the soil and bury dung in tunnels directly underneath the pats, which are hollowed out from within. Some large species originating from France excavate tunnels to a depth of approximately 30 cm below the dung pat. These beetles make sausage-shaped brood chambers along the tunnels. The shallowest tunnels belong to a much smaller Spanish species that buries dung in chambers that hang like fruit from the branches of a pear tree. South African beetles dig narrow tunnels approximately 20 cm below the surface of the pat. Some surface-dwelling beetles, including a South African species, cut perfectly shaped balls from the pat, which are rolled away and attached to the bases of plants. For maximum dung burial in spring, summer and autumn, farmers require a variety of species with overlapping periods of activity. In the cooler environments of the state of Victoria, the large French species (2.5 cm long) is matched with smaller (half this size), temperate-climate Spanish species. The former are slow to recover from the winter cold and produce only one or two generations of offspring from late spring until autumn. The latter, which multiply rapidly in early spring, produce two to five generations annually. The South African ball-rolling species, being a subtropical beetle, prefers the climate of northern and coastal New South Wales, where it commonly works with the South African tunnelling species. In warmer climates, many species are active for longer periods of the year. Dung beetles were initially introduced in the late 1960s with a view to controlling buffalo flies by removing the dung within a day or two and so preventing the flies from breeding. However, other benefits have become evident. Once the beetle larvae have finished pupation, the residue is a first-rate source of fertiliser.
The tunnels abandoned by the beetles provide excellent aeration and water channels for root systems. In addition, when the new generation of beetles has left the nest, the abandoned burrows are an attractive habitat for soil-enriching earthworms. The digested dung in these burrows is an excellent food supply for the earthworms, which decompose it further to provide essential soil nutrients. If it were not for the dung beetle, chemical fertiliser and dung would be washed by rain into streams and rivers before it could be absorbed into the hard earth, polluting water courses and causing blooms of blue-green algae. Without the beetles to dispose of the dung, cow pats would litter pastures, making grass inedible to cattle and depriving the soil of sunlight. Australia's 30 million cattle each produce 10-12 cow pats a day. This amounts to 1.7 billion tonnes a year, enough to smother about 110,000 sq km of pasture, half the area of Victoria. Dung beetles have become an integral part of the successful management of dairy farms in Australia over the past few decades. A number of species are available from the CSIRO or through a small number of private breeders, most of whom were entomologists with the CSIRO's dung beetle unit who have taken their specialised knowledge of the insect and opened small businesses in direct competition with their former employer.
Four thousand species of dung beetle were initially brought to Australia by the CSIRO.
contradiction
id_86
A Remarkable Beetle. Some of the most remarkable beetles are the dung beetles, which spend almost their whole lives eating and breeding in dung. More than 4,000 species of these remarkable creatures have evolved and adapted to the world's different climates and the dung of its many animals. Australia's native dung beetles are scrub and woodland dwellers, specialising in coarse marsupial droppings and avoiding the soft cattle dung in which bush flies and buffalo flies breed. In the early 1960s George Bornemissza, then a scientist at the Australian Government's premier research organisation, the Commonwealth Scientific and Industrial Research Organisation (CSIRO), suggested that dung beetles should be introduced to Australia to control dung-breeding flies. Between 1968 and 1982, the CSIRO imported insects from about 50 different species of dung beetle, from Asia, Europe and Africa, aiming to match them to different climatic zones in Australia. Of the 26 species that are known to have become successfully integrated into the local environment, only one, an African species released in northern Australia, has reached its natural boundary. Introducing dung beetles into a pasture is a simple process: approximately 1,500 beetles are released, a handful at a time, into fresh cow pats in the cow pasture. The beetles immediately disappear beneath the pats, digging and tunnelling, and, if they successfully adapt to their new environment, soon become a permanent, self-sustaining part of the local ecology. In time they multiply, and within three or four years the benefits to the pasture are obvious. Dung beetles work from the inside of the pat, so they are sheltered from predators such as birds and foxes. Most species burrow into the soil and bury dung in tunnels directly underneath the pats, which are hollowed out from within. Some large species originating from France excavate tunnels to a depth of approximately 30 cm below the dung pat. These beetles make sausage-shaped brood chambers along the tunnels. The shallowest tunnels belong to a much smaller Spanish species that buries dung in chambers that hang like fruit from the branches of a pear tree. South African beetles dig narrow tunnels approximately 20 cm below the surface of the pat. Some surface-dwelling beetles, including a South African species, cut perfectly shaped balls from the pat, which are rolled away and attached to the bases of plants. For maximum dung burial in spring, summer and autumn, farmers require a variety of species with overlapping periods of activity. In the cooler environments of the state of Victoria, the large French species (2.5 cm long) is matched with smaller (half this size), temperate-climate Spanish species. The former are slow to recover from the winter cold and produce only one or two generations of offspring from late spring until autumn. The latter, which multiply rapidly in early spring, produce two to five generations annually. The South African ball-rolling species, being a subtropical beetle, prefers the climate of northern and coastal New South Wales, where it commonly works with the South African tunnelling species. In warmer climates, many species are active for longer periods of the year. Dung beetles were initially introduced in the late 1960s with a view to controlling buffalo flies by removing the dung within a day or two and so preventing the flies from breeding. However, other benefits have become evident. Once the beetle larvae have finished pupation, the residue is a first-rate source of fertiliser.
The tunnels abandoned by the beetles provide excellent aeration and water channels for root systems. In addition, when the new generation of beetles has left the nest, the abandoned burrows are an attractive habitat for soil-enriching earthworms. The digested dung in these burrows is an excellent food supply for the earthworms, which decompose it further to provide essential soil nutrients. If it were not for the dung beetle, chemical fertiliser and dung would be washed by rain into streams and rivers before it could be absorbed into the hard earth, polluting water courses and causing blooms of blue-green algae. Without the beetles to dispose of the dung, cow pats would litter pastures, making grass inedible to cattle and depriving the soil of sunlight. Australia's 30 million cattle each produce 10-12 cow pats a day. This amounts to 1.7 billion tonnes a year, enough to smother about 110,000 sq km of pasture, half the area of Victoria. Dung beetles have become an integral part of the successful management of dairy farms in Australia over the past few decades. A number of species are available from the CSIRO or through a small number of private breeders, most of whom were entomologists with the CSIRO's dung beetle unit who have taken their specialised knowledge of the insect and opened small businesses in direct competition with their former employer.
Bush flies are easier to control than buffalo flies.
neutral
id_87
A SILENT FORCE There is a legend that St Augustine in the fourth century AD was the first individual to be seen reading silently rather than aloud, or semi-aloud, as had been the practice hitherto. Reading has come a long way since Augustine's day. There was a time when it was a menial job of scribes and priests, not the mark of civilization it became in Europe during the Renaissance, when it was seen as one of the attributes of the civilized individual. Modern nations are now seriously affected by their levels of literacy. While the Western world has seen a noticeable decline in these areas, other less developed countries have advanced and, in some cases, overtaken the West. India, for example, now has a large pool of educated workers. So European countries can no longer rest on their laurels as they have done for far too long; otherwise, they are in danger of falling even further behind economically. It is difficult in the modern world to do anything other than a basic job without being able to read. Reading as a skill is the key to an educated workforce, which in turn is the bedrock of economic advancement, particularly in the present technological age. Studies have shown that increasing the literacy and numeracy skills of primary school children in the UK would benefit the economy generally by billions of pounds. The skill of reading is now no longer just an intellectual or leisure activity, but rather a fully-fledged economic force. Part of the problem with reading is that it is a skill which is not appreciated in most developed societies. This is an attitude that has condemned large swathes of the population in most Western nations to illiteracy. It might surprise people in countries outside the West to learn that in the United Kingdom, and indeed in some other European countries, the literacy rate has fallen to below that of so-called less developed countries. There are also forces conspiring against reading in our modern society. It is not seen as cool among a younger generation more at home with computer screens or a Walkman. The solitude of reading is not very appealing. Students at school, college or university who read a lot are called bookworms. The term indicates the contempt in which reading and learning are held in certain circles or subcultures. It is a criticism, like all such attacks, driven by the insecurity of those who are not literate or are semi-literate. Criticism is also a means, like all bullying, of keeping peers in place so that they do not step out of line. Peer pressure among young people is so powerful that it often kills any attempts to change attitudes to habits like reading. But the negative connotations apart, is modern Western society standing Canute-like against an uncontrollable spiral of decline? I think not. How should people be encouraged to read more? It can easily be done by increasing basic reading skills at an early age and encouraging young people to borrow books from schools. Some schools have classroom libraries as well as school libraries. It is no good waiting until pupils are in their secondary school to encourage an interest in books; it needs to be pushed at an early age. Reading comics, magazines and lowbrow publications like Mills and Boon is frowned upon. But surely what people, whether they be adults or children, read is of little import. What is significant is the fact that they are reading. Someone who reads a comic today may have the courage to pick up a more substantial tome later on.
But perhaps the best idea would be to stop the negative attitudes to reading from forming in the first place. Taking children to local libraries brings them into contact with an environment where they can become relaxed among books. If primary school children were also taken in groups into bookshops, this might also entice them to want their own books. A local bookshop, like some local libraries, could perhaps arrange book readings for children which, being away from the classroom, would make the reading activity more of an adventure. On a more general note, most countries have writers of national importance. Raising the standing of national writers in the eyes of the public, through local and national writing competitions, would draw people more to the printed word. Catch them young and, perhaps, they just might then all become bookworms.
The literacy rate in less developed nations is considerably higher than in all European countries.
neutral
id_88
A SILENT FORCE There is a legend that St Augustine in the fourth century AD was the first individual to be seen reading silently rather than aloud, or semi-aloud, as had been the practice hitherto. Reading has come a long way since Augustine's day. There was a time when it was a menial job of scribes and priests, not the mark of civilization it became in Europe during the Renaissance, when it was seen as one of the attributes of the civilized individual. Modern nations are now seriously affected by their levels of literacy. While the Western world has seen a noticeable decline in these areas, other less developed countries have advanced and, in some cases, overtaken the West. India, for example, now has a large pool of educated workers. So European countries can no longer rest on their laurels as they have done for far too long; otherwise, they are in danger of falling even further behind economically. It is difficult in the modern world to do anything other than a basic job without being able to read. Reading as a skill is the key to an educated workforce, which in turn is the bedrock of economic advancement, particularly in the present technological age. Studies have shown that increasing the literacy and numeracy skills of primary school children in the UK would benefit the economy generally by billions of pounds. The skill of reading is now no longer just an intellectual or leisure activity, but rather a fully-fledged economic force. Part of the problem with reading is that it is a skill which is not appreciated in most developed societies. This is an attitude that has condemned large swathes of the population in most Western nations to illiteracy. It might surprise people in countries outside the West to learn that in the United Kingdom, and indeed in some other European countries, the literacy rate has fallen to below that of so-called less developed countries. There are also forces conspiring against reading in our modern society. It is not seen as cool among a younger generation more at home with computer screens or a Walkman. The solitude of reading is not very appealing. Students at school, college or university who read a lot are called bookworms. The term indicates the contempt in which reading and learning are held in certain circles or subcultures. It is a criticism, like all such attacks, driven by the insecurity of those who are not literate or are semi-literate. Criticism is also a means, like all bullying, of keeping peers in place so that they do not step out of line. Peer pressure among young people is so powerful that it often kills any attempts to change attitudes to habits like reading. But the negative connotations apart, is modern Western society standing Canute-like against an uncontrollable spiral of decline? I think not. How should people be encouraged to read more? It can easily be done by increasing basic reading skills at an early age and encouraging young people to borrow books from schools. Some schools have classroom libraries as well as school libraries. It is no good waiting until pupils are in their secondary school to encourage an interest in books; it needs to be pushed at an early age. Reading comics, magazines and lowbrow publications like Mills and Boon is frowned upon. But surely what people, whether they be adults or children, read is of little import. What is significant is the fact that they are reading. Someone who reads a comic today may have the courage to pick up a more substantial tome later on.
But perhaps the best idea would be to stop the negative attitudes to reading from forming in the first place. Taking children to local libraries brings them into contact with an environment where they can become relaxed among books. If primary school children were also taken in groups into bookshops, this might also entice them to want their own books. A local bookshop, like some local libraries, could perhaps arrange book readings for children which, being away from the classroom, would make the reading activity more of an adventure. On a more general note, most countries have writers of national importance. Raising the standing of national writers in the eyes of the public, through local and national writing competitions, would draw people more to the printed word. Catch them young and, perhaps, they just might then all become bookworms.
If you encourage children to read when they are young the negative attitude to reading that grows in some subcultures will be eliminated.
entailment
id_89
A SILENT FORCE There is a legend that St Augustine in the fourth century AD was the first individual to be seen reading silently rather than aloud, or semi-aloud, as had been the practice hitherto. Reading has come a long way since Augustine's day. There was a time when it was a menial job of scribes and priests, not the mark of civilization it became in Europe during the Renaissance, when it was seen as one of the attributes of the civilized individual. Modern nations are now seriously affected by their levels of literacy. While the Western world has seen a noticeable decline in these areas, other less developed countries have advanced and, in some cases, overtaken the West. India, for example, now has a large pool of educated workers. So European countries can no longer rest on their laurels as they have done for far too long; otherwise, they are in danger of falling even further behind economically. It is difficult in the modern world to do anything other than a basic job without being able to read. Reading as a skill is the key to an educated workforce, which in turn is the bedrock of economic advancement, particularly in the present technological age. Studies have shown that increasing the literacy and numeracy skills of primary school children in the UK would benefit the economy generally by billions of pounds. The skill of reading is now no longer just an intellectual or leisure activity, but rather a fully-fledged economic force. Part of the problem with reading is that it is a skill which is not appreciated in most developed societies. This is an attitude that has condemned large swathes of the population in most Western nations to illiteracy. It might surprise people in countries outside the West to learn that in the United Kingdom, and indeed in some other European countries, the literacy rate has fallen to below that of so-called less developed countries. There are also forces conspiring against reading in our modern society. It is not seen as cool among a younger generation more at home with computer screens or a Walkman. The solitude of reading is not very appealing. Students at school, college or university who read a lot are called bookworms. The term indicates the contempt in which reading and learning are held in certain circles or subcultures. It is a criticism, like all such attacks, driven by the insecurity of those who are not literate or are semi-literate. Criticism is also a means, like all bullying, of keeping peers in place so that they do not step out of line. Peer pressure among young people is so powerful that it often kills any attempts to change attitudes to habits like reading. But the negative connotations apart, is modern Western society standing Canute-like against an uncontrollable spiral of decline? I think not. How should people be encouraged to read more? It can easily be done by increasing basic reading skills at an early age and encouraging young people to borrow books from schools. Some schools have classroom libraries as well as school libraries. It is no good waiting until pupils are in their secondary school to encourage an interest in books; it needs to be pushed at an early age. Reading comics, magazines and lowbrow publications like Mills and Boon is frowned upon. But surely what people, whether they be adults or children, read is of little import. What is significant is the fact that they are reading. Someone who reads a comic today may have the courage to pick up a more substantial tome later on.
But perhaps the best idea would be to stop the negative attitudes to reading from forming in the first place. Taking children to local libraries brings them into contact with an environment where they can become relaxed among books. If primary school children were also taken in groups into bookshops, this might also entice them to want their own books. A local bookshop, like some local libraries, could perhaps arrange book readings for children which, being away from the classroom, would make the reading activity more of an adventure. On a more general note, most countries have writers of national importance. Raising the standing of national writers in the eyes of the public, through local and national writing competitions, would draw people more to the printed word. Catch them young and, perhaps, they just might then all become bookworms.
People should be discouraged from reading comics and magazines.
contradiction
id_90
A SILENT FORCE There is a legend that St Augustine in the fourth century AD was the first individual to be seen reading silently rather than aloud, or semi-aloud, as had been the practice hitherto. Reading has come a long way since Augustines day. There was a time when it was a menial job of scribes and priests, not the mark of civilization it became in Europe during the Renaissance when it was seen as one of the attributes of the civilized individual. Modern nations are now seriously affected by their levels of literacy. While the Western world has seen a noticeable decline in these areas, other less developed countries have advanced and, in some cases, overtaken the West. India, for example, now has a large pool of educated workers. So European countries can no longer rest on their laurels as they have done for far too long; otherwise, they are in danger of falling even further behind economically. It is difficult in the modern world to do anything other than a basic job without being able to read. Reading as a skill is the key to an educated workforce, which in turn is the bedrock of economic advancement, particularly in the present technological age. Studies have shown that by increasing the literacy and numeracy skills of primary school children in the UK, the benefit to the economy generally is in billions of pounds. The skill of reading is now no more just an intellectual or leisure activity, but rather a fully-fledged economic force. Part of the problem with reading is that it is a skill which is not appreciated in most developed societies. This is an attitude that has condemned large swathes of the population in most Western nations to illiteracy. It might surprise people in countries outside the West to learn that in the United Kingdom, and indeed in some other European countries, the literacy rate has fallen to below that of so-called less developed countries. There are also forces conspiring against reading in our modern society. It is not seen as cool among a younger generation more at home with computer screens or a Walkman. The solitude of reading is not very appealing. Students at school, college or university who read a lot are called bookworms. The term indicates the contempt in which reading and learning are held in certain circles or subcultures. It is a criticism, like all such attacks, driven by the insecurity of those who are not literate or are semi-literate. Criticism is also a means, like all bullying, of keeping peers in place so that they do not step out of line. Peer pressure among young people is so powerful that it often kills any attempts to change attitudes to habits like reading. But the negative connotations apart, is modern Western society standing Canute-like against an uncontrollable spiral of decline? I think not. How should people be encouraged to read more? It can easily be done by increasing basic reading skills at an early age and encouraging young people to borrow books from schools. Some schools have classroom libraries as well as school libraries. It is no good waiting until pupils are in their secondary school to encourage an interest in books; it needs to be pushed at an early age. Reading comics, magazines and low brow publications like Mills and Boon is frowned upon. But surely what people, whether they be adults or children, read is of little import. What is significant is the fact that they are reading. Someone who reads a comic today may have the courage to pick up a more substantial tome later on. 
But perhaps the best idea would be to stop the negative attitudes to reading from forming in the first place. Taking children to local libraries brings them into contact with an environment where they can become relaxed among books. If primary school children were also taken in groups into bookshops, this might also entice them to want their own books. A local bookshop, like some local libraries, could perhaps arrange book readings for children which, being away from the classroom, would make the reading activity more of an adventure. On a more general note, most countries have writers of national importance. By increasing the standing of national writers in the eyes of the public, through local and national writing competitions, people would be drawn more to the printed word. Catch them young and, perhaps, they just might then all become bookworms.
European countries have been satisfied with past achievements for too long and have allowed other countries to overtake them in certain areas.
entailment
id_91
A SILENT FORCE There is a legend that St Augustine in the fourth century AD was the first individual to be seen reading silently rather than aloud, or semi-aloud, as had been the practice hitherto. Reading has come a long way since Augustines day. There was a time when it was a menial job of scribes and priests, not the mark of civilization it became in Europe during the Renaissance when it was seen as one of the attributes of the civilized individual. Modern nations are now seriously affected by their levels of literacy. While the Western world has seen a noticeable decline in these areas, other less developed countries have advanced and, in some cases, overtaken the West. India, for example, now has a large pool of educated workers. So European countries can no longer rest on their laurels as they have done for far too long; otherwise, they are in danger of falling even further behind economically. It is difficult in the modern world to do anything other than a basic job without being able to read. Reading as a skill is the key to an educated workforce, which in turn is the bedrock of economic advancement, particularly in the present technological age. Studies have shown that by increasing the literacy and numeracy skills of primary school children in the UK, the benefit to the economy generally is in billions of pounds. The skill of reading is now no more just an intellectual or leisure activity, but rather a fully-fledged economic force. Part of the problem with reading is that it is a skill which is not appreciated in most developed societies. This is an attitude that has condemned large swathes of the population in most Western nations to illiteracy. It might surprise people in countries outside the West to learn that in the United Kingdom, and indeed in some other European countries, the literacy rate has fallen to below that of so-called less developed countries. There are also forces conspiring against reading in our modern society. It is not seen as cool among a younger generation more at home with computer screens or a Walkman. The solitude of reading is not very appealing. Students at school, college or university who read a lot are called bookworms. The term indicates the contempt in which reading and learning are held in certain circles or subcultures. It is a criticism, like all such attacks, driven by the insecurity of those who are not literate or are semi-literate. Criticism is also a means, like all bullying, of keeping peers in place so that they do not step out of line. Peer pressure among young people is so powerful that it often kills any attempts to change attitudes to habits like reading. But the negative connotations apart, is modern Western society standing Canute-like against an uncontrollable spiral of decline? I think not. How should people be encouraged to read more? It can easily be done by increasing basic reading skills at an early age and encouraging young people to borrow books from schools. Some schools have classroom libraries as well as school libraries. It is no good waiting until pupils are in their secondary school to encourage an interest in books; it needs to be pushed at an early age. Reading comics, magazines and low brow publications like Mills and Boon is frowned upon. But surely what people, whether they be adults or children, read is of little import. What is significant is the fact that they are reading. Someone who reads a comic today may have the courage to pick up a more substantial tome later on. 
But perhaps the best idea would be to stop the negative attitudes to reading from forming in the first place. Taking children to local libraries brings them into contact with an environment where they can become relaxed among books. If primary school children were also taken in groups into bookshops, this might also entice them to want their own books. A local bookshop, like some local libraries, could perhaps arrange book readings for children which, being away from the classroom, would make the reading activity more of an adventure. On a more general note, most countries have writers of national importance. By increasing the standing of national writers in the eyes of the public, through local and national writing competitions, people would be drawn more to the printed word. Catch them young and, perhaps, they just might then all become bookworms.
Reading is an economic force.
entailment
id_92
A Theory of Shopping For a one-year period I attempted to conduct an ethnography of shopping on and around a street in North London. This was carried out in association with Alison Clarke. I say attempted because, given the absence of community and the intensely private nature of London households, this could not be an ethnography in the conventional sense. Nevertheless, through conversation, being present in the home and accompanying householders during their shopping, I tried to reach an understanding of the nature of shopping through greater or lesser exposure to 76 households. My part of the ethnography concentrated upon shopping itself. Alison Clarke has since been working with the same households, but focusing upon other forms of provisioning such as the use of catalogues (see Clarke 1997). We generally first met these households together, but most of the material that is used within this particular essay derived from my own subsequent fieldwork. Following the completion of this essay, and a study of some related shopping centres, we hope to write a more general ethnography of provisioning. This will also examine other issues, such as the nature of community and the implications for retail and for the wider political economy. None of this, however, forms part of the present essay, which is primarily concerned with establishing the cosmological foundations of shopping. To state that a household has been included within the study is to gloss over a wide diversity of degrees of involvement. The minimum requirement is simply that a householder has agreed to be interviewed about their shopping, which would include the local shopping parade, shopping centres and supermarkets. At the other extreme are families that we have come to know well during the course of the year. Interaction would include formal interviews, and a less formal presence within their homes, usually with a cup of tea. It also meant accompanying them on one or several events, which might comprise shopping trips or participation in activities associated with the area of Clarkes study, such as the meeting of a group supplying products for the home. In analysing and writing up the experience of an ethnography of shopping in North London, I am led in two opposed directions. The tradition of anthropological relativism leads to an emphasis upon difference, and there are many ways in which shopping can help us elucidate differences. For example, there are differences in the experience of shopping based on gender, age, ethnicity and class. There are also differences based on the various genres of shopping experience, from a mall to a corner shop. By contrast, there is the tradition of anthropological generalisation about peoples and comparative theory. This leads to the question as to whether there are any fundamental aspects of shopping which suggest a robust normativity that comes through the research and is not entirely dissipated by relativism. In this essay I want to emphasize the latter approach and argue that if not all, then most acts of shopping on this street exhibit a normative form which needs to be addressed. In the later discussion of the discourse of shopping I will defend the possibility that such a heterogenous group of households could be fairly represented by a series of homogenous cultural practices. The theory that I will propose is certainly at odds with most of the literature on this topic. 
My premise, unlike that of most studies of consumption, whether they arise from economists, business studies or cultural studies, is that for most households in this street the act of shopping was hardly ever directed towards the person who was doing the shopping. Shopping is therefore not best understood as an individualistic or individualising act related to the subjectivity of the shopper. Rather, the act of buying goods is mainly directed at two forms of otherness. The first of these expresses a relationship between the shopper and a particular other individual such as a child or partner, either present in the household, desired or imagined. The second of these is a relationship to a more general goal which transcends any immediate utility and is best understood as cosmological in that it takes the form of neither subject nor object but of the values to which people wish to dedicate themselves. It never occurred to me at any stage when carrying out the ethnography that I should consider the topic of sacrifice as relevant to this research. In no sense then could the ethnography be regarded as a testing of the ideas presented here. The Literature that seemed most relevant in the initial analysis of the London material was that on thrift discussed in chapter 3. The crucial element in opening up the potential of sacrifice for understanding shopping came through reading Bataiile. Bataille, however, was merely the catalyst, since I will argue that it is the classic works on sacrifice and, in particular, the foundation to its modern study by Hubert and Mauss (1964) that has become the primary grounds for my interpretation. It is important, however, when reading the following account to note that when I use the word sacrifice, I only rarely refer to the colloquial sense of the term as used in the concept of the self-sacrificial housewife. Mostly the allusion is to this Literature on ancient sacrifice and the detailed analysis of the complex ritual sequence involved in traditional sacrifice. The metaphorical use of the term may have its place within the subsequent discussion but this is secondary to an argument at the level of structure.
Shopping lends itself to analysis based on anthropological relativism.
entailment
id_93
A Theory of Shopping For a one-year period I attempted to conduct an ethnography of shopping on and around a street in North London. This was carried out in association with Alison Clarke. I say attempted because, given the absence of community and the intensely private nature of London households, this could not be an ethnography in the conventional sense. Nevertheless, through conversation, being present in the home and accompanying householders during their shopping, I tried to reach an understanding of the nature of shopping through greater or lesser exposure to 76 households. My part of the ethnography concentrated upon shopping itself. Alison Clarke has since been working with the same households, but focusing upon other forms of provisioning such as the use of catalogues (see Clarke 1997). We generally first met these households together, but most of the material that is used within this particular essay derived from my own subsequent fieldwork. Following the completion of this essay, and a study of some related shopping centres, we hope to write a more general ethnography of provisioning. This will also examine other issues, such as the nature of community and the implications for retail and for the wider political economy. None of this, however, forms part of the present essay, which is primarily concerned with establishing the cosmological foundations of shopping. To state that a household has been included within the study is to gloss over a wide diversity of degrees of involvement. The minimum requirement is simply that a householder has agreed to be interviewed about their shopping, which would include the local shopping parade, shopping centres and supermarkets. At the other extreme are families that we have come to know well during the course of the year. Interaction would include formal interviews, and a less formal presence within their homes, usually with a cup of tea. It also meant accompanying them on one or several events, which might comprise shopping trips or participation in activities associated with the area of Clarkes study, such as the meeting of a group supplying products for the home. In analysing and writing up the experience of an ethnography of shopping in North London, I am led in two opposed directions. The tradition of anthropological relativism leads to an emphasis upon difference, and there are many ways in which shopping can help us elucidate differences. For example, there are differences in the experience of shopping based on gender, age, ethnicity and class. There are also differences based on the various genres of shopping experience, from a mall to a corner shop. By contrast, there is the tradition of anthropological generalisation about peoples and comparative theory. This leads to the question as to whether there are any fundamental aspects of shopping which suggest a robust normativity that comes through the research and is not entirely dissipated by relativism. In this essay I want to emphasize the latter approach and argue that if not all, then most acts of shopping on this street exhibit a normative form which needs to be addressed. In the later discussion of the discourse of shopping I will defend the possibility that such a heterogenous group of households could be fairly represented by a series of homogenous cultural practices. The theory that I will propose is certainly at odds with most of the literature on this topic. 
My premise, unlike that of most studies of consumption, whether they arise from economists, business studies or cultural studies, is that for most households in this street the act of shopping was hardly ever directed towards the person who was doing the shopping. Shopping is therefore not best understood as an individualistic or individualising act related to the subjectivity of the shopper. Rather, the act of buying goods is mainly directed at two forms of otherness. The first of these expresses a relationship between the shopper and a particular other individual such as a child or partner, either present in the household, desired or imagined. The second of these is a relationship to a more general goal which transcends any immediate utility and is best understood as cosmological in that it takes the form of neither subject nor object but of the values to which people wish to dedicate themselves. It never occurred to me at any stage when carrying out the ethnography that I should consider the topic of sacrifice as relevant to this research. In no sense then could the ethnography be regarded as a testing of the ideas presented here. The Literature that seemed most relevant in the initial analysis of the London material was that on thrift discussed in chapter 3. The crucial element in opening up the potential of sacrifice for understanding shopping came through reading Bataiile. Bataille, however, was merely the catalyst, since I will argue that it is the classic works on sacrifice and, in particular, the foundation to its modern study by Hubert and Mauss (1964) that has become the primary grounds for my interpretation. It is important, however, when reading the following account to note that when I use the word sacrifice, I only rarely refer to the colloquial sense of the term as used in the concept of the self-sacrificial housewife. Mostly the allusion is to this Literature on ancient sacrifice and the detailed analysis of the complex ritual sequence involved in traditional sacrifice. The metaphorical use of the term may have its place within the subsequent discussion but this is secondary to an argument at the level of structure.
Generalisations about shopping are possible.
entailment
id_94
A Theory of Shopping For a one-year period I attempted to conduct an ethnography of shopping on and around a street in North London. This was carried out in association with Alison Clarke. I say attempted because, given the absence of community and the intensely private nature of London households, this could not be an ethnography in the conventional sense. Nevertheless, through conversation, being present in the home and accompanying householders during their shopping, I tried to reach an understanding of the nature of shopping through greater or lesser exposure to 76 households. My part of the ethnography concentrated upon shopping itself. Alison Clarke has since been working with the same households, but focusing upon other forms of provisioning such as the use of catalogues (see Clarke 1997). We generally first met these households together, but most of the material that is used within this particular essay derived from my own subsequent fieldwork. Following the completion of this essay, and a study of some related shopping centres, we hope to write a more general ethnography of provisioning. This will also examine other issues, such as the nature of community and the implications for retail and for the wider political economy. None of this, however, forms part of the present essay, which is primarily concerned with establishing the cosmological foundations of shopping. To state that a household has been included within the study is to gloss over a wide diversity of degrees of involvement. The minimum requirement is simply that a householder has agreed to be interviewed about their shopping, which would include the local shopping parade, shopping centres and supermarkets. At the other extreme are families that we have come to know well during the course of the year. Interaction would include formal interviews, and a less formal presence within their homes, usually with a cup of tea. It also meant accompanying them on one or several events, which might comprise shopping trips or participation in activities associated with the area of Clarkes study, such as the meeting of a group supplying products for the home. In analysing and writing up the experience of an ethnography of shopping in North London, I am led in two opposed directions. The tradition of anthropological relativism leads to an emphasis upon difference, and there are many ways in which shopping can help us elucidate differences. For example, there are differences in the experience of shopping based on gender, age, ethnicity and class. There are also differences based on the various genres of shopping experience, from a mall to a corner shop. By contrast, there is the tradition of anthropological generalisation about peoples and comparative theory. This leads to the question as to whether there are any fundamental aspects of shopping which suggest a robust normativity that comes through the research and is not entirely dissipated by relativism. In this essay I want to emphasize the latter approach and argue that if not all, then most acts of shopping on this street exhibit a normative form which needs to be addressed. In the later discussion of the discourse of shopping I will defend the possibility that such a heterogenous group of households could be fairly represented by a series of homogenous cultural practices. The theory that I will propose is certainly at odds with most of the literature on this topic. 
My premise, unlike that of most studies of consumption, whether they arise from economists, business studies or cultural studies, is that for most households in this street the act of shopping was hardly ever directed towards the person who was doing the shopping. Shopping is therefore not best understood as an individualistic or individualising act related to the subjectivity of the shopper. Rather, the act of buying goods is mainly directed at two forms of otherness. The first of these expresses a relationship between the shopper and a particular other individual such as a child or partner, either present in the household, desired or imagined. The second of these is a relationship to a more general goal which transcends any immediate utility and is best understood as cosmological in that it takes the form of neither subject nor object but of the values to which people wish to dedicate themselves. It never occurred to me at any stage when carrying out the ethnography that I should consider the topic of sacrifice as relevant to this research. In no sense then could the ethnography be regarded as a testing of the ideas presented here. The Literature that seemed most relevant in the initial analysis of the London material was that on thrift discussed in chapter 3. The crucial element in opening up the potential of sacrifice for understanding shopping came through reading Bataiile. Bataille, however, was merely the catalyst, since I will argue that it is the classic works on sacrifice and, in particular, the foundation to its modern study by Hubert and Mauss (1964) that has become the primary grounds for my interpretation. It is important, however, when reading the following account to note that when I use the word sacrifice, I only rarely refer to the colloquial sense of the term as used in the concept of the self-sacrificial housewife. Mostly the allusion is to this Literature on ancient sacrifice and the detailed analysis of the complex ritual sequence involved in traditional sacrifice. The metaphorical use of the term may have its place within the subsequent discussion but this is secondary to an argument at the level of structure.
The conclusions drawn from this study will confirm some of the findings of other research.
contradiction
id_95
A Theory of Shopping For a one-year period I attempted to conduct an ethnography of shopping on and around a street in North London. This was carried out in association with Alison Clarke. I say attempted because, given the absence of community and the intensely private nature of London households, this could not be an ethnography in the conventional sense. Nevertheless, through conversation, being present in the home and accompanying householders during their shopping, I tried to reach an understanding of the nature of shopping through greater or lesser exposure to 76 households. My part of the ethnography concentrated upon shopping itself. Alison Clarke has since been working with the same households, but focusing upon other forms of provisioning such as the use of catalogues (see Clarke 1997). We generally first met these households together, but most of the material that is used within this particular essay derived from my own subsequent fieldwork. Following the completion of this essay, and a study of some related shopping centres, we hope to write a more general ethnography of provisioning. This will also examine other issues, such as the nature of community and the implications for retail and for the wider political economy. None of this, however, forms part of the present essay, which is primarily concerned with establishing the cosmological foundations of shopping. To state that a household has been included within the study is to gloss over a wide diversity of degrees of involvement. The minimum requirement is simply that a householder has agreed to be interviewed about their shopping, which would include the local shopping parade, shopping centres and supermarkets. At the other extreme are families that we have come to know well during the course of the year. Interaction would include formal interviews, and a less formal presence within their homes, usually with a cup of tea. It also meant accompanying them on one or several events, which might comprise shopping trips or participation in activities associated with the area of Clarkes study, such as the meeting of a group supplying products for the home. In analysing and writing up the experience of an ethnography of shopping in North London, I am led in two opposed directions. The tradition of anthropological relativism leads to an emphasis upon difference, and there are many ways in which shopping can help us elucidate differences. For example, there are differences in the experience of shopping based on gender, age, ethnicity and class. There are also differences based on the various genres of shopping experience, from a mall to a corner shop. By contrast, there is the tradition of anthropological generalisation about peoples and comparative theory. This leads to the question as to whether there are any fundamental aspects of shopping which suggest a robust normativity that comes through the research and is not entirely dissipated by relativism. In this essay I want to emphasize the latter approach and argue that if not all, then most acts of shopping on this street exhibit a normative form which needs to be addressed. In the later discussion of the discourse of shopping I will defend the possibility that such a heterogenous group of households could be fairly represented by a series of homogenous cultural practices. The theory that I will propose is certainly at odds with most of the literature on this topic. 
My premise, unlike that of most studies of consumption, whether they arise from economists, business studies or cultural studies, is that for most households in this street the act of shopping was hardly ever directed towards the person who was doing the shopping. Shopping is therefore not best understood as an individualistic or individualising act related to the subjectivity of the shopper. Rather, the act of buying goods is mainly directed at two forms of otherness. The first of these expresses a relationship between the shopper and a particular other individual such as a child or partner, either present in the household, desired or imagined. The second of these is a relationship to a more general goal which transcends any immediate utility and is best understood as cosmological in that it takes the form of neither subject nor object but of the values to which people wish to dedicate themselves. It never occurred to me at any stage when carrying out the ethnography that I should consider the topic of sacrifice as relevant to this research. In no sense then could the ethnography be regarded as a testing of the ideas presented here. The Literature that seemed most relevant in the initial analysis of the London material was that on thrift discussed in chapter 3. The crucial element in opening up the potential of sacrifice for understanding shopping came through reading Bataiile. Bataille, however, was merely the catalyst, since I will argue that it is the classic works on sacrifice and, in particular, the foundation to its modern study by Hubert and Mauss (1964) that has become the primary grounds for my interpretation. It is important, however, when reading the following account to note that when I use the word sacrifice, I only rarely refer to the colloquial sense of the term as used in the concept of the self-sacrificial housewife. Mostly the allusion is to this Literature on ancient sacrifice and the detailed analysis of the complex ritual sequence involved in traditional sacrifice. The metaphorical use of the term may have its place within the subsequent discussion but this is secondary to an argument at the level of structure.
Shopping should be regarded as a basically unselfish activity.
entailment
id_96
A Theory of Shopping For a one-year period I attempted to conduct an ethnography of shopping on and around a street in North London. This was carried out in association with Alison Clarke. I say attempted because, given the absence of community and the intensely private nature of London households, this could not be an ethnography in the conventional sense. Nevertheless, through conversation, being present in the home and accompanying householders during their shopping, I tried to reach an understanding of the nature of shopping through greater or lesser exposure to 76 households. My part of the ethnography concentrated upon shopping itself. Alison Clarke has since been working with the same households, but focusing upon other forms of provisioning such as the use of catalogues (see Clarke 1997). We generally first met these households together, but most of the material that is used within this particular essay derived from my own subsequent fieldwork. Following the completion of this essay, and a study of some related shopping centres, we hope to write a more general ethnography of provisioning. This will also examine other issues, such as the nature of community and the implications for retail and for the wider political economy. None of this, however, forms part of the present essay, which is primarily concerned with establishing the cosmological foundations of shopping. To state that a household has been included within the study is to gloss over a wide diversity of degrees of involvement. The minimum requirement is simply that a householder has agreed to be interviewed about their shopping, which would include the local shopping parade, shopping centres and supermarkets. At the other extreme are families that we have come to know well during the course of the year. Interaction would include formal interviews, and a less formal presence within their homes, usually with a cup of tea. It also meant accompanying them on one or several events, which might comprise shopping trips or participation in activities associated with the area of Clarkes study, such as the meeting of a group supplying products for the home. In analysing and writing up the experience of an ethnography of shopping in North London, I am led in two opposed directions. The tradition of anthropological relativism leads to an emphasis upon difference, and there are many ways in which shopping can help us elucidate differences. For example, there are differences in the experience of shopping based on gender, age, ethnicity and class. There are also differences based on the various genres of shopping experience, from a mall to a corner shop. By contrast, there is the tradition of anthropological generalisation about peoples and comparative theory. This leads to the question as to whether there are any fundamental aspects of shopping which suggest a robust normativity that comes through the research and is not entirely dissipated by relativism. In this essay I want to emphasize the latter approach and argue that if not all, then most acts of shopping on this street exhibit a normative form which needs to be addressed. In the later discussion of the discourse of shopping I will defend the possibility that such a heterogenous group of households could be fairly represented by a series of homogenous cultural practices. The theory that I will propose is certainly at odds with most of the literature on this topic. 
My premise, unlike that of most studies of consumption, whether they arise from economists, business studies or cultural studies, is that for most households in this street the act of shopping was hardly ever directed towards the person who was doing the shopping. Shopping is therefore not best understood as an individualistic or individualising act related to the subjectivity of the shopper. Rather, the act of buying goods is mainly directed at two forms of otherness. The first of these expresses a relationship between the shopper and a particular other individual such as a child or partner, either present in the household, desired or imagined. The second of these is a relationship to a more general goal which transcends any immediate utility and is best understood as cosmological in that it takes the form of neither subject nor object but of the values to which people wish to dedicate themselves. It never occurred to me at any stage when carrying out the ethnography that I should consider the topic of sacrifice as relevant to this research. In no sense then could the ethnography be regarded as a testing of the ideas presented here. The Literature that seemed most relevant in the initial analysis of the London material was that on thrift discussed in chapter 3. The crucial element in opening up the potential of sacrifice for understanding shopping came through reading Bataiile. Bataille, however, was merely the catalyst, since I will argue that it is the classic works on sacrifice and, in particular, the foundation to its modern study by Hubert and Mauss (1964) that has become the primary grounds for my interpretation. It is important, however, when reading the following account to note that when I use the word sacrifice, I only rarely refer to the colloquial sense of the term as used in the concept of the self-sacrificial housewife. Mostly the allusion is to this Literature on ancient sacrifice and the detailed analysis of the complex ritual sequence involved in traditional sacrifice. The metaphorical use of the term may have its place within the subsequent discussion but this is secondary to an argument at the level of structure.
People sometimes analyse their own motives when they are shopping.
neutral
id_97
A Theory of Shopping For a one-year period I attempted to conduct an ethnography of shopping on and around a street in North London. This was carried out in association with Alison Clarke. I say attempted because, given the absence of community and the intensely private nature of London households, this could not be an ethnography in the conventional sense. Nevertheless, through conversation, being present in the home and accompanying householders during their shopping, I tried to reach an understanding of the nature of shopping through greater or lesser exposure to 76 households. My part of the ethnography concentrated upon shopping itself. Alison Clarke has since been working with the same households, but focusing upon other forms of provisioning such as the use of catalogues (see Clarke 1997). We generally first met these households together, but most of the material that is used within this particular essay derived from my own subsequent fieldwork. Following the completion of this essay, and a study of some related shopping centres, we hope to write a more general ethnography of provisioning. This will also examine other issues, such as the nature of community and the implications for retail and for the wider political economy. None of this, however, forms part of the present essay, which is primarily concerned with establishing the cosmological foundations of shopping. To state that a household has been included within the study is to gloss over a wide diversity of degrees of involvement. The minimum requirement is simply that a householder has agreed to be interviewed about their shopping, which would include the local shopping parade, shopping centres and supermarkets. At the other extreme are families that we have come to know well during the course of the year. Interaction would include formal interviews, and a less formal presence within their homes, usually with a cup of tea. It also meant accompanying them on one or several events, which might comprise shopping trips or participation in activities associated with the area of Clarkes study, such as the meeting of a group supplying products for the home. In analysing and writing up the experience of an ethnography of shopping in North London, I am led in two opposed directions. The tradition of anthropological relativism leads to an emphasis upon difference, and there are many ways in which shopping can help us elucidate differences. For example, there are differences in the experience of shopping based on gender, age, ethnicity and class. There are also differences based on the various genres of shopping experience, from a mall to a corner shop. By contrast, there is the tradition of anthropological generalisation about peoples and comparative theory. This leads to the question as to whether there are any fundamental aspects of shopping which suggest a robust normativity that comes through the research and is not entirely dissipated by relativism. In this essay I want to emphasize the latter approach and argue that if not all, then most acts of shopping on this street exhibit a normative form which needs to be addressed. In the later discussion of the discourse of shopping I will defend the possibility that such a heterogenous group of households could be fairly represented by a series of homogenous cultural practices. The theory that I will propose is certainly at odds with most of the literature on this topic. 
My premise, unlike that of most studies of consumption, whether they arise from economists, business studies or cultural studies, is that for most households in this street the act of shopping was hardly ever directed towards the person who was doing the shopping. Shopping is therefore not best understood as an individualistic or individualising act related to the subjectivity of the shopper. Rather, the act of buying goods is mainly directed at two forms of otherness. The first of these expresses a relationship between the shopper and a particular other individual such as a child or partner, either present in the household, desired or imagined. The second of these is a relationship to a more general goal which transcends any immediate utility and is best understood as cosmological in that it takes the form of neither subject nor object but of the values to which people wish to dedicate themselves. It never occurred to me at any stage when carrying out the ethnography that I should consider the topic of sacrifice as relevant to this research. In no sense then could the ethnography be regarded as a testing of the ideas presented here. The Literature that seemed most relevant in the initial analysis of the London material was that on thrift discussed in chapter 3. The crucial element in opening up the potential of sacrifice for understanding shopping came through reading Bataiile. Bataille, however, was merely the catalyst, since I will argue that it is the classic works on sacrifice and, in particular, the foundation to its modern study by Hubert and Mauss (1964) that has become the primary grounds for my interpretation. It is important, however, when reading the following account to note that when I use the word sacrifice, I only rarely refer to the colloquial sense of the term as used in the concept of the self-sacrificial housewife. Mostly the allusion is to this Literature on ancient sacrifice and the detailed analysis of the complex ritual sequence involved in traditional sacrifice. The metaphorical use of the term may have its place within the subsequent discussion but this is secondary to an argument at the level of structure.
The actual goods bought are the primary concern in the activity of shopping.
contradiction
id_98
A Theory of Shopping For a one-year period I attempted to conduct an ethnography of shopping on and around a street in North London. This was carried out in association with Alison Clarke. I say attempted because, given the absence of community and the intensely private nature of London households, this could not be an ethnography in the conventional sense. Nevertheless, through conversation, being present in the home and accompanying householders during their shopping, I tried to reach an understanding of the nature of shopping through greater or lesser exposure to 76 households. My part of the ethnography concentrated upon shopping itself. Alison Clarke has since been working with the same households, but focusing upon other forms of provisioning such as the use of catalogues (see Clarke 1997). We generally first met these households together, but most of the material that is used within this particular essay derived from my own subsequent fieldwork. Following the completion of this essay, and a study of some related shopping centres, we hope to write a more general ethnography of provisioning. This will also examine other issues, such as the nature of community and the implications for retail and for the wider political economy. None of this, however, forms part of the present essay, which is primarily concerned with establishing the cosmological foundations of shopping. To state that a household has been included within the study is to gloss over a wide diversity of degrees of involvement. The minimum requirement is simply that a householder has agreed to be interviewed about their shopping, which would include the local shopping parade, shopping centres and supermarkets. At the other extreme are families that we have come to know well during the course of the year. Interaction would include formal interviews, and a less formal presence within their homes, usually with a cup of tea. It also meant accompanying them on one or several events, which might comprise shopping trips or participation in activities associated with the area of Clarkes study, such as the meeting of a group supplying products for the home. In analysing and writing up the experience of an ethnography of shopping in North London, I am led in two opposed directions. The tradition of anthropological relativism leads to an emphasis upon difference, and there are many ways in which shopping can help us elucidate differences. For example, there are differences in the experience of shopping based on gender, age, ethnicity and class. There are also differences based on the various genres of shopping experience, from a mall to a corner shop. By contrast, there is the tradition of anthropological generalisation about peoples and comparative theory. This leads to the question as to whether there are any fundamental aspects of shopping which suggest a robust normativity that comes through the research and is not entirely dissipated by relativism. In this essay I want to emphasize the latter approach and argue that if not all, then most acts of shopping on this street exhibit a normative form which needs to be addressed. In the later discussion of the discourse of shopping I will defend the possibility that such a heterogenous group of households could be fairly represented by a series of homogenous cultural practices. The theory that I will propose is certainly at odds with most of the literature on this topic. 
My premise, unlike that of most studies of consumption, whether they arise from economists, business studies or cultural studies, is that for most households in this street the act of shopping was hardly ever directed towards the person who was doing the shopping. Shopping is therefore not best understood as an individualistic or individualising act related to the subjectivity of the shopper. Rather, the act of buying goods is mainly directed at two forms of otherness. The first of these expresses a relationship between the shopper and a particular other individual such as a child or partner, either present in the household, desired or imagined. The second of these is a relationship to a more general goal which transcends any immediate utility and is best understood as cosmological in that it takes the form of neither subject nor object but of the values to which people wish to dedicate themselves. It never occurred to me at any stage when carrying out the ethnography that I should consider the topic of sacrifice as relevant to this research. In no sense then could the ethnography be regarded as a testing of the ideas presented here. The Literature that seemed most relevant in the initial analysis of the London material was that on thrift discussed in chapter 3. The crucial element in opening up the potential of sacrifice for understanding shopping came through reading Bataiile. Bataille, however, was merely the catalyst, since I will argue that it is the classic works on sacrifice and, in particular, the foundation to its modern study by Hubert and Mauss (1964) that has become the primary grounds for my interpretation. It is important, however, when reading the following account to note that when I use the word sacrifice, I only rarely refer to the colloquial sense of the term as used in the concept of the self-sacrificial housewife. Mostly the allusion is to this Literature on ancient sacrifice and the detailed analysis of the complex ritual sequence involved in traditional sacrifice. The metaphorical use of the term may have its place within the subsequent discussion but this is secondary to an argument at the level of structure.
It was possible to predict the outcome of the study before embarking on it.
contradiction
id_99
A Theory of Shopping For a one-year period I attempted to conduct an ethnography of shopping on and around a street in North London. This was carried out in association with Alison Clarke. I say attempted because, given the absence of community and the intensely private nature of London households, this could not be an ethnography in the conventional sense. Nevertheless, through conversation, being present in the home and accompanying householders during their shopping, I tried to reach an understanding of the nature of shopping through greater or lesser exposure to 76 households. My part of the ethnography concentrated upon shopping itself. Alison Clarke has since been working with the same households, but focusing upon other forms of provisioning such as the use of catalogues (see Clarke 1997). We generally first met these households together, but most of the material that is used within this particular essay derived from my own subsequent fieldwork. Following the completion of this essay, and a study of some related shopping centres, we hope to write a more general ethnography of provisioning. This will also examine other issues, such as the nature of community and the implications for retail and for the wider political economy. None of this, however, forms part of the present essay, which is primarily concerned with establishing the cosmological foundations of shopping. To state that a household has been included within the study is to gloss over a wide diversity of degrees of involvement. The minimum requirement is simply that a householder has agreed to be interviewed about their shopping, which would include the local shopping parade, shopping centres and supermarkets. At the other extreme are families that we have come to know well during the course of the year. Interaction would include formal interviews, and a less formal presence within their homes, usually with a cup of tea. It also meant accompanying them on one or several events, which might comprise shopping trips or participation in activities associated with the area of Clarkes study, such as the meeting of a group supplying products for the home. In analysing and writing up the experience of an ethnography of shopping in North London, I am led in two opposed directions. The tradition of anthropological relativism leads to an emphasis upon difference, and there are many ways in which shopping can help us elucidate differences. For example, there are differences in the experience of shopping based on gender, age, ethnicity and class. There are also differences based on the various genres of shopping experience, from a mall to a corner shop. By contrast, there is the tradition of anthropological generalisation about peoples and comparative theory. This leads to the question as to whether there are any fundamental aspects of shopping which suggest a robust normativity that comes through the research and is not entirely dissipated by relativism. In this essay I want to emphasize the latter approach and argue that if not all, then most acts of shopping on this street exhibit a normative form which needs to be addressed. In the later discussion of the discourse of shopping I will defend the possibility that such a heterogenous group of households could be fairly represented by a series of homogenous cultural practices. The theory that I will propose is certainly at odds with most of the literature on this topic. 
My premise, unlike that of most studies of consumption, whether they arise from economists, business studies or cultural studies, is that for most households in this street the act of shopping was hardly ever directed towards the person who was doing the shopping. Shopping is therefore not best understood as an individualistic or individualising act related to the subjectivity of the shopper. Rather, the act of buying goods is mainly directed at two forms of otherness. The first of these expresses a relationship between the shopper and a particular other individual such as a child or partner, either present in the household, desired or imagined. The second of these is a relationship to a more general goal which transcends any immediate utility and is best understood as cosmological in that it takes the form of neither subject nor object but of the values to which people wish to dedicate themselves. It never occurred to me at any stage when carrying out the ethnography that I should consider the topic of sacrifice as relevant to this research. In no sense then could the ethnography be regarded as a testing of the ideas presented here. The Literature that seemed most relevant in the initial analysis of the London material was that on thrift discussed in chapter 3. The crucial element in opening up the potential of sacrifice for understanding shopping came through reading Bataiile. Bataille, however, was merely the catalyst, since I will argue that it is the classic works on sacrifice and, in particular, the foundation to its modern study by Hubert and Mauss (1964) that has become the primary grounds for my interpretation. It is important, however, when reading the following account to note that when I use the word sacrifice, I only rarely refer to the colloquial sense of the term as used in the concept of the self-sacrificial housewife. Mostly the allusion is to this Literature on ancient sacrifice and the detailed analysis of the complex ritual sequence involved in traditional sacrifice. The metaphorical use of the term may have its place within the subsequent discussion but this is secondary to an argument at the level of structure.
Anthropological relativism is more widely applied than anthropological generalisation.
neutral